19 results for CLASSIFICATION RULES
in Helda - Digital Repository of University of Helsinki
Abstract:
In visual object detection and recognition, classifiers have two interesting characteristics: accuracy and speed. Accuracy depends on the complexity of the image features and classifier decision surfaces. Speed depends on the hardware and the computational effort required to use the features and decision surfaces. When attempts to increase accuracy lead to increases in complexity and effort, it is necessary to ask how much we are willing to pay for increased accuracy. For example, if increased computational effort implies quickly diminishing returns in accuracy, then those designing inexpensive surveillance applications cannot aim for maximum accuracy at any cost. It becomes necessary to find trade-offs between accuracy and effort. We study efficient classification of images depicting real-world objects and scenes. Classification is efficient when a classifier can be controlled so that the desired trade-off between accuracy and effort (speed) is achieved and unnecessary computations are avoided on a per-input basis. A framework is proposed for understanding and modeling efficient classification of images. Classification is modeled as a tree-like process. In designing the framework, it is important to recognize what is essential and to avoid structures that are narrow in applicability. Earlier frameworks are lacking in this regard. The overall contribution is two-fold. First, the framework is presented, subjected to experiments, and shown to be satisfactory. Second, certain unconventional approaches are experimented with. This allows the separation of the essential from the conventional. To determine if the framework is satisfactory, three categories of questions are identified: trade-off optimization, classifier tree organization, and rules for delegation and confidence modeling. Questions and problems related to each category are addressed and empirical results are presented.
For example, related to trade-off optimization, we address the problem of computational bottlenecks that limit the range of trade-offs. We also ask if accuracy versus effort trade-offs can be controlled after training. As another example, regarding classifier tree organization, we first consider the task of organizing a tree in a problem-specific manner. We then ask if problem-specific organization is necessary.
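The delegation-and-confidence-modeling idea can be illustrated with a minimal two-stage cascade: a cheap classifier handles inputs it is confident about and delegates the rest to a costlier one. This is only a hedged sketch of the general principle, not the thesis framework; the names and toy classifiers below are invented, and the confidence threshold is the post-training knob that trades accuracy against per-input effort.

```python
def cascade_classify(x, cheap, expensive, confidence_threshold):
    """Return (label, effort) for input x.

    cheap and expensive are callables returning (label, confidence);
    effort counts how many classifier stages were evaluated.
    """
    label, confidence = cheap(x)
    if confidence >= confidence_threshold:
        return label, 1          # early exit: only one stage of effort spent
    label, _ = expensive(x)      # delegate the uncertain case
    return label, 2

# Toy stand-ins: the cheap stage is confident only far from the decision boundary.
cheap = lambda x: ("pos" if x > 0 else "neg", abs(x))
expensive = lambda x: ("pos" if x > 0.1 else "neg", 1.0)

print(cascade_classify(5.0, cheap, expensive, 0.5))   # handled by the cheap stage
print(cascade_classify(0.05, cheap, expensive, 0.5))  # delegated to the expensive stage
```

Raising the threshold sends more inputs to the expensive stage (more accuracy, more effort); lowering it does the opposite, so the trade-off can be adjusted after training.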
Abstract:
This study highlights the formation of an artifact designed to mediate exploratory collaboration. The data for this study was collected during a Finnish adaptation of the thinking together approach. The aim of the approach is to teach pupils how to engage in an educationally beneficial form of joint discussion, namely exploratory talk. At the heart of the approach lies a set of conversational ground rules aimed to promote the use of exploratory talk. The theoretical framework of the study is based on a sociocultural perspective on learning. A central argument in the framework is that physical and psychological tools play a crucial role in human action and learning. With the help of tools, humans can escape the direct stimulus of the outside world and learn to control themselves. During the implementation of the approach, the classroom community negotiates a set of six rules, which this study conceptualizes as an artifact that mediates exploratory collaboration. Prior research on the thinking together approach has not extensively examined the formation of the rules, which gives ample reason to conduct this study. The specific research questions asked were: What kind of negotiation trajectories did the ground rules form during the intervention? What meanings were negotiated for the ground rules during the intervention? The methodological framework of the study is based on discourse analysis, which has been specified by adapting the social construction of intertextuality to analyze the meanings negotiated for the created rules. The study has two units of analysis: the thematic episode and the negotiation trajectory. A thematic episode is a stretch of talk-in-interaction where the participants talk about a certain ground rule or a theme relating to it. A negotiation trajectory is a chronological representation of the negotiation process of a certain ground rule during the intervention and is constructed of thematic episodes.
Thematic episodes were analyzed with the adapted intertextuality analysis. A contrastive analysis was done on the trajectories. Lastly, the meanings negotiated for the created rules were compared to the guidelines provided by the approach. The main result of the study is the observation that the meanings of the created rules were more aligned with the ground rules of cumulative talk than with those of exploratory talk. Although meanings relating to exploratory talk were also negotiated, they were clearly not the dominant form. In addition, the study observed that the trajectories of the rules were non-identical. Despite connecting dimensions (symmetry, composition, continuity and explicitness), none of the trajectories shared exactly the same features as the others.
Abstract:
This thesis utilises an evidence-based approach to critically evaluate and summarize effectiveness research on physiotherapy, physiotherapy-related motor-based interventions and orthotic devices in children and adolescents with cerebral palsy (CP). It aims to assess the methodological challenges of the systematic reviews and trials, to evaluate the effectiveness of interventions in current use, and to make suggestions for future trials. Methods: Systematic reviews were searched from computerized bibliographic databases up to August 2007 for physiotherapy and physiotherapy-related interventions, and up to May 2003 for orthotic devices. Two reviewers independently identified, selected, and assessed the quality of the reviews using the Overview Quality Assessment Questionnaire complemented with decision rules. From a sample of 14 randomized controlled trials (RCTs) published between January 1990 and June 2003 we analysed the methods of sampling, recruitment, and comparability of groups; defined the components of a complex intervention; identified outcome measures based on the International Classification of Functioning, Disability and Health (ICF); analysed the clinical interpretation of score changes; and analysed trial reporting using a modified 33-item CONSORT (Consolidated Standards of Reporting Trials) checklist. The effectiveness of physiotherapy and physiotherapy-related interventions in children with diagnosed CP was evaluated in a systematic review of randomised controlled trials that were searched from computerized databases from January 1990 up to February 2007. Two reviewers independently assessed the methodological quality, extracted the data, classified the outcomes using the ICF, and considered the level of evidence according to van Tulder et al. (2003). Results: We identified 21 reviews on physiotherapy and physiotherapy-related interventions and five on orthotic devices.
These reviews summarized 23 and 5 randomised controlled trials and 104 and 27 observational studies, respectively. Only six reviews were of high quality. These found some evidence supporting strength training, constraint-induced movement therapy or hippotherapy, and insufficient evidence on comprehensive interventions. Based on the original studies included in the reviews on orthotic devices, we found some short-term effects of lower limb casting on passive range of movement, and of ankle-foot orthoses on equinus walk. Long-term effects of lower limb orthoses have not been studied. Evidence on upper limb casting or orthoses is conflicting. In the sample of 14 RCTs, most trials used simple randomisation, complemented with matching or stratification, but only three specified the concealed allocation. Numerous studies provided sufficient details on the components of a complex intervention, but the overlap of outcome measures across studies was poor and the clinical interpretation of observed score changes was mostly missing. Almost half (48%) of the applicable CONSORT-based items (range 28 to 32) were reported adequately. Most reporting inadequacies were in outcome measures, sample size determination, details of the sequence generation, allocation concealment and implementation of the randomization, success of assessor blinding, recruitment and follow-up dates, intention-to-treat analysis, precision of the effect size, co-interventions, and adverse events. The systematic review identified 22 trials on eight intervention categories. Four trials were of high quality. Moderate evidence of effectiveness was established for upper extremity treatments on attained goals, active supination and developmental status, and of constraint-induced therapy on the amount and quality of hand use and new emerging behaviours. Moderate evidence of ineffectiveness was found for strength training's effect on walking speed and stride length.
Conflicting evidence was found for strength training's effect on gross motor function. For the other intervention categories the evidence was limited due to the low methodological quality and the statistically insignificant results of the studies. Conclusions: The high-quality reviews provide both supportive and insufficient evidence on some physiotherapy interventions. The poor quality of most reviews calls for caution, although most reviews drew no conclusions on effectiveness due to the poor quality of the primary studies. A considerable number of RCTs of good to fair methodological and reporting quality indicate that informative and well-reported RCTs on complex interventions in children and adolescents with CP are feasible. Nevertheless, methodological improvement is needed in certain areas of the trial design and performance, and the trial authors are encouraged to follow the CONSORT criteria. Based on RCTs we established moderate evidence for some effectiveness of upper extremity training. Due to limitations in methodological quality and variations in population, interventions and outcomes, mostly limited evidence on the effectiveness of most physiotherapy interventions is available to guide clinical practice. Well-designed trials are needed, especially for focused physiotherapy interventions.
Abstract:
Hereditary nonpolyposis colorectal cancer (HNPCC) is the most common known hereditary cause of colorectal and endometrial cancer (CRC and EC). Dominantly inherited mutations in one of the known mismatch repair (MMR) genes predispose to HNPCC. Defective MMR leads to an accumulation of mutations, especially in repeat tracts, presenting as microsatellite instability. HNPCC is clinically a very heterogeneous disease: both the age at onset and the target tissue may vary. In addition, there are families that fulfill the diagnostic criteria for HNPCC but show no predisposing mutation in MMR genes. Our aim was to evaluate the genetic background of familial CRC and EC. We performed comprehensive molecular and DNA copy number analyses of CRCs fulfilling the diagnostic criteria for HNPCC. We studied the role of five pathways (MMR, Wnt, p53, CIN, PI3K/AKT) and divided the tumors into two groups, one with MMR gene germline mutations and the other without. We observed that MMR-proficient familial CRCs consist of two molecularly distinct groups that differ from MMR-deficient tumors. Group A shows a paucity of the common molecular and chromosomal alterations characteristic of colorectal carcinogenesis. Group B shows molecular features similar to classical microsatellite-stable tumors with gross chromosomal alterations. Our finding of a unique tumor profile in group A suggests the involvement of novel predisposing genes and pathways in colorectal cancer cohorts not linked to MMR gene defects. We investigated the genetic background of familial ECs. Among 22 families with clustering of EC, two (9%) were due to MMR gene germline mutations. The remaining familial site-specific ECs are largely comparable with HNPCC-associated ECs, the main difference between these groups being MMR proficiency vs. deficiency.
We also studied the role of the PI3K/AKT pathway in familial ECs and observed that PIK3CA amplifications are characteristic of familial site-specific EC without MMR gene germline mutations. Most of the high-level amplifications occurred in tumors with stable microsatellites, suggesting that these tumors are more likely associated with chromosomal rather than microsatellite instability and an MMR defect. The existence of site-specific endometrial carcinoma as a separate entity remains equivocal until predisposing genes are identified. It is possible that no single highly penetrant gene for this proposed syndrome exists; it may, for example, be due to a combination of multiple low-penetrance genes. Despite advances in deciphering the molecular genetic background of HNPCC, it is poorly understood why certain organs are more susceptible than others to cancer development. We found that important determinants of the HNPCC tumor spectrum are, in addition to different predisposing germline mutations, organ-specific target genes and different instability profiles, loss of heterozygosity at the MLH1 locus, and MLH1 promoter methylation. This study provided a more precise molecular classification of families with CRC and EC. Our observations on familial CRC and EC are likely to have broader significance that extends to sporadic CRC and EC as well.
Abstract:
A new rock mass classification scheme, the Host Rock Classification system (HRC-system), has been developed for evaluating the suitability of volumes of rock mass for the disposal of high-level nuclear waste in Precambrian crystalline bedrock. To support the development of the system, the requirements of the host rock to be used for disposal have been studied in detail and the significance of the various rock mass properties has been examined. The HRC-system considers both the long-term safety of the repository and the constructability in the rock mass. The system is specific to the KBS-3V disposal concept and can be used only at sites that have been evaluated to be suitable at the site scale. By using the HRC-system, it is possible to identify potentially suitable volumes within the site at several different scales (repository, tunnel and canister scales). The selection of the classification parameters to be included in the HRC-system is based on an extensive study of the rock mass properties and their various influences on the long-term safety, the constructability and the layout and location of the repository. The parameters proposed for the classification at the repository scale include fracture zones, strength/stress ratio, hydraulic conductivity and the Groundwater Chemistry Index. The parameters proposed for the classification at the tunnel scale include hydraulic conductivity, Q´ and fracture zones, and the parameters proposed for the classification at the canister scale include hydraulic conductivity, Q´, fracture zones, fracture width (aperture + filling) and fracture trace length. The parameter values will be used to determine the suitability classes for the volumes of rock to be classified.
The HRC-system includes four suitability classes at the repository and tunnel scales and three suitability classes at the canister scale, and the classification process is linked to several important decisions regarding the location and acceptability of many components of the repository at all three scales. The HRC-system is, thereby, one possible design tool that aids in locating the different repository components in volumes of host rock that are more suitable than others and that are considered to fulfil the fundamental requirements set for the repository host rock. The generic HRC-system, which is the main result of this work, is also adjusted to the site-specific properties of the Olkiluoto site in Finland, and the classification procedure is demonstrated by a test classification using data from Olkiluoto. Keywords: host rock, classification, HRC-system, nuclear waste disposal, long-term safety, constructability, KBS-3V, crystalline bedrock, Olkiluoto
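As a purely illustrative sketch of how such a parameter-based classification might be coded, the function below maps the canister-scale parameters named above to one of three suitability classes. The parameter names follow the abstract, but all thresholds and class labels are invented for the example; they are not taken from the HRC-system.

```python
def canister_scale_class(hydraulic_conductivity, q_prime, in_fracture_zone,
                         fracture_width_mm, fracture_trace_length_m):
    """Assign a hypothetical suitability class to a canister position.

    All numeric limits below are illustrative placeholders, not HRC values.
    """
    if in_fracture_zone:
        return "unsuitable"               # fracture zones rejected outright
    if (hydraulic_conductivity > 1e-9     # m/s, illustrative limit
            or fracture_width_mm > 1.0
            or fracture_trace_length_m > 10.0):
        return "conditionally suitable"   # needs case-by-case evaluation
    if q_prime >= 10:                     # good rock quality on the Q' scale
        return "suitable"
    return "conditionally suitable"

print(canister_scale_class(1e-11, 40, False, 0.1, 2.0))
```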
Abstract:
Analyzing statistical dependencies is a fundamental problem in all empirical science. Dependencies help us understand causes and effects, create new scientific theories, and invent cures to problems. Nowadays, large amounts of data are available, but efficient computational tools for analyzing the data are missing. In this research, we develop efficient algorithms for a commonly occurring search problem - searching for the statistically most significant dependency rules in binary data. We consider dependency rules of the form X->A or X->not A, where X is a set of positive-valued attributes and A is a single attribute. Such rules describe which factors either increase or decrease the probability of the consequent A. A classical example is genetic and environmental factors, which can either cause or prevent a disease. The emphasis in this research is that the discovered dependencies should be genuine - i.e. they should also hold in future data. This is an important distinction from traditional association rules, which - in spite of their name and a similar appearance to dependency rules - do not necessarily represent statistical dependencies at all, or represent only spurious connections that occur by chance. Therefore, the principal objective is to search for rules with statistical significance measures. Another important objective is to search for only non-redundant rules, which express the real causes of dependence, without any occasional extra factors. The extra factors do not add any new information on the dependence, but can only blur it and make it less accurate in future data. The problem is computationally very demanding, because the number of all possible rules increases exponentially with the number of attributes. In addition, neither statistical dependency nor statistical significance is a monotonic property, which means that the traditional pruning techniques do not work.
As a solution, we first derive the mathematical basis for pruning the search space with any well-behaving statistical significance measure. The mathematical theory is complemented by a new algorithmic invention, which enables an efficient search without any heuristic restrictions. The resulting algorithm can be used to search for both positive and negative dependencies with any commonly used statistical measure, such as Fisher's exact test, the chi-squared measure, mutual information, and z scores. According to our experiments, the algorithm scales well, especially with Fisher's exact test. It can easily handle even the densest data sets with 10000-20000 attributes. Still, the results are globally optimal, which is a remarkable improvement over the existing solutions. In practice, this means that the user does not have to worry whether the dependencies hold in future data or whether the data still contains better, undiscovered dependencies.
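To illustrate scoring a single candidate rule X -> A with Fisher's exact test (a small standalone example, not the pruning algorithm of the thesis), the one-sided p-value is the hypergeometric tail probability of seeing at least the observed number of co-occurrences under independence:

```python
from math import comb

def fisher_p(n_xa, n_x, n_a, n):
    """One-sided Fisher's exact test p-value for rule X -> A.

    n_xa: rows with both X and A; n_x: rows with X;
    n_a: rows with A; n: total rows.
    """
    # Hypergeometric tail: P(co-occurrences >= n_xa) under independence.
    p = 0.0
    for k in range(n_xa, min(n_x, n_a) + 1):
        p += comb(n_a, k) * comb(n - n_a, n_x - k) / comb(n, n_x)
    return p

# X and A co-occur in 40 of 100 rows although each holds in only 50:
print(fisher_p(40, 50, 50, 100))  # far above the 25 expected by chance
```

A small p-value flags a dependency that is unlikely to be a chance artifact, which is exactly the property separating genuine dependency rules from spurious association rules.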
Abstract:
Economic and Monetary Union can be characterised as a complicated set of legislation and institutions governing monetary and fiscal responsibilities. The measures of fiscal responsibility are to be guided by the Stability and Growth Pact, which sets rules for fiscal policy and makes a discretionary fiscal policy virtually impossible. To analyse the effects of the fiscal and monetary policy mix, we modified the New Keynesian framework to allow for supply effects of fiscal policy. We show that defining a supply-side channel for fiscal policy using an endogenous output gap changes the stabilising properties of monetary policy rules. The stability conditions are affected by fiscal policy, so that the dichotomy between active (passive) monetary policy and passive (active) fiscal policy as stabilising regimes does not hold, and it is possible to have an active monetary - active fiscal policy regime consistent with dynamical stability of the economy. We show that, if we take supply-side effects into account, we get more persistent inflation and output reactions. We also show that the dichotomy does not hold for a variety of different fiscal policy rules based on government debt and budget deficit, using the tax smoothing hypothesis and formulating the tax rules as difference equations. The debt rule with active monetary policy results in indeterminacy, while the deficit rule produces a determinate solution with active monetary policy, even with active fiscal policy. The combination of fiscal requirements in a rule results in cyclical responses to shocks. The amplitude of the cycle is larger with more weight on debt than on deficit. Combining optimised monetary policy with fiscal policy rules means that, under a discretionary monetary policy, the fiscal policy regime affects the size of the inflation bias. We also show that commitment to an optimal monetary policy not only corrects the inflation bias but also increases the persistence of output reactions.
With fiscal policy rules based on the deficit, we can retain the tax smoothing hypothesis also in a sticky-price model.
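The idea of formulating tax rules as difference equations can be made concrete with a stylized one-equation example (not the New Keynesian model of this work; all numbers are illustrative): debt evolves as b_{t+1} = (1 + r) b_t - tax_t, and a debt-feedback rule tax_t = g + phi * b_t stabilizes debt only when the feedback coefficient phi exceeds the interest rate r.

```python
def simulate_debt(b0, r, phi, g, periods):
    """Return the debt path under the stylized rule tax_t = g + phi * b_t."""
    path = [b0]
    for _ in range(periods):
        b = path[-1]
        # b_{t+1} = (1 + r) b_t - tax_t = (1 + r - phi) b_t - g
        path.append((1 + r) * b - (g + phi * b))
    return path

# With phi < r the rule is too weak and debt grows; with phi > r it converges.
weak = simulate_debt(b0=100.0, r=0.03, phi=0.01, g=0.0, periods=50)
strong = simulate_debt(b0=100.0, r=0.03, phi=0.05, g=0.0, periods=50)
print(round(weak[-1], 1), round(strong[-1], 1))
```

The stability boundary phi = r in this toy rule plays the same role as the determinacy conditions discussed above: the same fiscal rule can produce a stable or an explosive path depending on its feedback strength.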
Abstract:
The plasma membrane adopts a myriad of different shapes to carry out essential cellular processes such as nutrient uptake, immunological defence mechanisms and cell migration. Therefore, the details of how different plasma membrane structures are made and remodelled are of the utmost importance. Bending the plasma membrane into different shapes requires a substantial amount of force, which can be provided by the actin cytoskeleton; however, the molecules that regulate the interplay between the actin cytoskeleton and the plasma membrane have remained elusive. Recent findings have placed new types of effectors at sites of plasma membrane remodelling, including BAR proteins, which can directly bind and deform the plasma membrane into different shapes. In addition to their membrane-bending abilities, BAR proteins also harbor protein domains that intimately link them to the actin cytoskeleton. The ancient BAR domain fold has evolved into at least three structurally and functionally different sub-groups: the BAR, F-BAR and I-BAR domains. This thesis work describes the discovery and functional characterization of the Inverse-BAR domains (I-BARs). Using synthetic model membranes, we have shown that I-BAR domains bind and deform membranes into tubular structures through a binding surface composed of positively charged amino acids. Importantly, the membrane-binding surface of I-BAR domains displays an inverse geometry to that of the BAR and F-BAR domains, and these structural differences explain why I-BAR domains induce cell protrusions whereas BAR and most F-BAR domains induce cell invaginations. In addition, our results indicate that the binding of I-BAR domains to membranes can alter the spatial organization of phosphoinositides within membranes. Intriguingly, we also found that some I-BAR domains can insert helical motifs into the membrane bilayer, which has important consequences for their membrane binding/bending functions. In mammals there are five I-BAR domain-containing proteins.
Cell biological studies on ABBA revealed that it is highly expressed in radial glial cells during the development of the central nervous system and plays an important role in the extension process of radial glia-like C6R cells by regulating lamellipodial dynamics through its I-BAR domain. To reveal the role of these proteins in whole animals, we analyzed MIM knockout mice and found that MIM is required for proper renal function in adult mice. MIM-deficient mice displayed a severe urine concentration defect due to defective intercellular junctions of the kidney epithelia. Consistently, MIM localized to adherens junctions in cultured kidney epithelial cells, where it promoted actin assembly through its I-BAR and WH2 domains. In summary, this thesis describes the mechanism by which I-BAR proteins deform membranes and provides information about the biological role of these proteins, which to our knowledge are the first proteins shown to directly deform the plasma membrane to make cell protrusions.
Abstract:
Climate change contributes directly or indirectly to changes in species distributions, and there is very high confidence that recent climate warming is already affecting ecosystems. The Arctic has already experienced the greatest regional warming in recent decades, and the trend is continuing. However, studies on northern ecosystems are scarce compared to more southerly regions. A better understanding of past and present environmental change is needed to be able to forecast the future. Multivariate methods were used to explore the distributional patterns of chironomids in 50 shallow (≤ 10 m) lakes in relation to 24 variables determined in northern Fennoscandia, in the ecotonal area extending from the boreal forest in the south to the orohemiarctic zone in the north. The highest taxon richness was noted at middle elevations around 400 m a.s.l. Significantly lower values were observed in cold lakes situated in the tundra zone. Lake water alkalinity had the strongest positive correlation with taxon richness. Many taxa preferred lakes in either the tundra area or the forested area. The variation in the chironomid abundance data was best correlated with sediment organic content (LOI), lake water total organic carbon content, pH and air temperature, with LOI being the strongest variable. Three major lake groups were separated on the basis of their chironomid assemblages: (i) small and shallow organic-rich lakes, (ii) large and base-rich lakes, and (iii) cold and clear oligotrophic tundra lakes. The environmental variables best discriminating the lake groups were LOI, taxon richness, and Mg. When repeated, this kind of approach could be useful and efficient in monitoring the effects of global change on species ranges. Many species of fast-spreading insects, including chironomids, show a remarkable ability to track environmental changes. Based on this ability, past environmental conditions have been reconstructed using their chitinous remains in lake sediment profiles.
In order to study the Holocene environmental history of subarctic aquatic systems, and to quantitatively reconstruct past temperatures at or near the treeline, long sediment cores covering the last 10000 years (the Holocene) were collected from three lakes. Lower temperature values than expected based on the presence of pine in the catchment during the mid-Holocene were reconstructed from a lake with great water volume and depth. The lake provided a thermal refuge for profundal, cold-adapted taxa during the warm period. In a shallow lake, the decrease in the reconstructed temperatures during the late Holocene may reflect the indirect response of the midges to climate change through, e.g., pH change. The results from the three lakes indicated that the response of chironomids to climate has been more or less indirect. However, concurrent shifts in the assemblages of chironomids and vegetation in two lakes during the Holocene indicated that the midges together with the terrestrial vegetation had responded to the same ultimate cause, which most likely was Holocene climate change. This was also supported by the similarity in the long-term trends in faunal succession for the chironomid assemblages in several lakes in the area. In northern Finnish Lapland the distribution of chironomids was significantly correlated with physical and limnological factors that are most likely to change as a result of future climate change. The indirect and individualistic response of aquatic systems, as reconstructed using the chironomid assemblages, to climate change in the past suggests that the lake ecosystems in the north will not respond in one predictable way to future global climate change. Lakes in the north may respond to global climate change in various ways that depend on the initial characteristics of the catchment area and the lake.
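Quantitative reconstructions of this kind are commonly built with weighted-averaging transfer functions; the sketch below shows that principle with invented taxa and numbers (a generic illustration, not the calibration model used in this work). Each taxon's temperature optimum is its abundance-weighted mean temperature across modern calibration lakes, and a fossil sample's temperature estimate is the abundance-weighted mean of the optima of the taxa found in it.

```python
def taxon_optima(calibration):
    """Estimate each taxon's temperature optimum by weighted averaging
    over modern lakes: optimum = sum(abund * temp) / sum(abund)."""
    optima = {}
    for taxon in {t for lake in calibration for t in lake["abund"]}:
        num = sum(lake["abund"].get(taxon, 0) * lake["temp"] for lake in calibration)
        den = sum(lake["abund"].get(taxon, 0) for lake in calibration)
        optima[taxon] = num / den
    return optima

def reconstruct(sample, optima):
    """Infer the temperature of a fossil sample from its taxon abundances."""
    num = sum(a * optima[t] for t, a in sample.items() if t in optima)
    den = sum(a for t, a in sample.items() if t in optima)
    return num / den

# Invented two-lake calibration set and one fossil assemblage:
calibration = [
    {"temp": 8.0,  "abund": {"cold_taxon": 70, "warm_taxon": 10}},
    {"temp": 14.0, "abund": {"cold_taxon": 10, "warm_taxon": 80}},
]
optima = taxon_optima(calibration)
fossil = {"cold_taxon": 50, "warm_taxon": 50}
print(round(reconstruct(fossil, optima), 2))  # a value between the two optima
```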
Abstract:
Pragmatism has sometimes been taken as a catchphrase for epistemological stances in which anything goes. However, other authors argue that the real novelty and contribution of this tradition has to do with its view of action as the context in which all things human take place. Thus, it is action rather than, for example, discourses that should be our starting point in social theory. The introductory section of the book situates pragmatism (especially the ideas of G. H. Mead and John Dewey) within the field and tradition of social theory. This introduction also contextualizes the main core of the book, which consists of four chapters. Two of these chapters have been published as articles in scientific journals and one in an edited book. All of them discuss the core problem of social theory: how is action related to social structures (and vice versa)? The argument is that habitual action is the explanation for the emergence of social structures from our action. Action produces structures, and social reproduction takes place when action is habitualized; that is, when we develop social dispositions to act in a certain manner in familiar environments. This also means that even though the physical environment is the same for all of us, our habits structure it into different kinds of action possibilities. Each chapter highlights these general insights from different angles. Practice theory has gained momentum in recent years, and it has many commonalities with pragmatism because both highlight the situated and corporeal character of human activity. One famous proponent of practice theory is Margaret Archer, who has argued that the pragmatism of G. H. Mead leads to an oversocialized conception of selfhood. Mead does indeed present a socialized view of selfhood, but this is a meta-sociological argument rather than a substantive sociological claim. Accordingly, one can argue that in this general sense intersubjectivity precedes subjectivity and not the other way around.
Such a view does not imply that our social relations would necessarily "colonize" individual action, because there is a place for internal conversations (in Archer's terminology); these arise especially in those phases of action where it meets obstacles due to changes in the environment. The second issue discussed rests on the background assumption that social structures can fruitfully be conceptualized as institutions. A general classification of different institution theories is presented, and it is argued that there is a need for a habitual theory of institutions due to the problems associated with these other theories. So-called habitual institutionalism accounts for institutions in terms of established and prevalent social dispositions that structure our social interactions. The germs of this institution theory can be found in the work of Thorstein Veblen. Since Veblen's time, these ideas have been discussed, for example, by the economist Geoffrey M. Hodgson. His ideas on the evolution of institutions are presented, but a critical stance is taken towards his tendency to define institutions with the help of rules, because rules are not always present in institutions. Accordingly, habitual action is the most basic, but by no means the only, aspect of institutional reproduction. The third chapter deals with the theme of action and structures in the context of Pierre Bourdieu's thought. Bourdieu's term habitus refers to a system of dispositions which structure social fields. It is argued that habits come close to the concept of habitus in the sense that the latter consists of particular kinds of habits: those that are related to the reproduction of socioeconomic positions. Habits are thus constituents of a general theory of societal reproduction, whereas habitus is a systematic combination of socioeconomic habits. The fourth theme relates to issues of social change and development.
The capabilities approach has been associated with the name of Amartya Sen, for example, and it underscores problems inherent in economistic ways of evaluating social development. However, Sen's argument has some theoretical problems. For example, his theory cannot adequately confront the problem of relativism. In addition, Sen's discussion also lacks a theory of the role of the public. With the help of arguments derived from pragmatism, one gets an action-based, socially constituted view of freedom in which the role of the public is essential. In general, it is argued that a socially constituted view of agency does not necessarily lead to pessimistic conclusions about the freedom of action.
Abstract:
A new classification and linear sequence of the gymnosperms, based on previous molecular and morphological phylogenetic and other studies, is presented. Currently accepted genera are listed for each family and arranged according to their (probable) phylogenetic position. A full synonymy is provided, and types are listed for accepted genera. An index to genera provides easy access to the synonymy and family placement of genera.