963 results for indifference graph


Relevance: 10.00%

Publisher:

Abstract:

Strikes are a current and fresh, yet seldom addressed, topic of study from the perspective of the economic sciences. This study contributes to filling that research gap by identifying the attitudes towards strikes that can be found inside organizations. The research problem it sets out to answer is: “What kinds of attitudes towards industrial action exist inside organizations, and how do attitudes vary between labour, management and human resources?” The study was also designed to test how well qualitative attitudinal research, as a method, is suited to studying a phenomenon such as a strike. At the heart of this research approach lies an assumption drawn from rhetorical social psychology: that attitude is a phenomenon that can be identified in argumentation. Ten semi-structured interviews were conducted in four organizations, using statements and pictures as stimuli for discussion. The material was transcribed and analysed on the two levels, categorical and interpretive, required by the chosen method. Altogether five attitudes were identified: three negative, one indifferent and one positive in nature. The negative attitudes of unfairness, failure and personification represented the side of strikes that was perhaps the most anticipated, portraying the contradictions between employees and employer. The attitude of ordinariness, which reflected indifference, and the positive attitude of change were more unanticipated findings; they reflect shared understanding and trust between the conflict parties. The use of the qualitative attitudinal approach to study strikes was deemed successful. The results support prior literature on workplace conflicts, for example with regard to the definition of conflict and typologies of conflict. In addition, the multifaceted nature of strikes is a claim supported by this study: it shows in the nature of the attitudes, in the diversity of discussion themes during the interviews, and in the range of theories that could be applied.

Relevance: 10.00%

Publisher:

Abstract:

Biomedical natural language processing (BioNLP) is a subfield of natural language processing, an area of computational linguistics concerned with developing programs that work with natural language: written texts and speech. Biomedical relation extraction concerns the detection of semantic relations such as protein-protein interactions (PPI) from scientific texts. The aim is to enhance information retrieval by detecting relations between concepts, not just individual concepts as with a keyword search. In recent years, events have been proposed as a more detailed alternative to simple pairwise PPI relations. Events provide a systematic, structural representation for annotating the content of natural language texts. Events are characterized by annotated trigger words, directed and typed arguments and the ability to nest other events. For example, the sentence “Protein A causes protein B to bind protein C” can be annotated with the nested event structure CAUSE(A, BIND(B, C)). Converted to such formal representations, the information of natural language texts can be used by computational applications. Biomedical event annotations were introduced by the BioInfer and GENIA corpora, and event extraction was popularized by the BioNLP'09 Shared Task on Event Extraction. In this thesis we present a method for automated event extraction, implemented as the Turku Event Extraction System (TEES). A unified graph format is defined for representing event annotations, and the problem of extracting complex event structures is decomposed into a number of independent classification tasks. These classification tasks are solved using SVM and RLS classifiers, utilizing rich feature representations built from full dependency parsing. Building on earlier work on pairwise relation extraction and using a generalized graph representation, the resulting TEES system is capable of detecting binary relations as well as complex event structures. We show that this event extraction system performs well, achieving first place in the BioNLP'09 Shared Task on Event Extraction. Subsequently, TEES has achieved several first ranks in the BioNLP'11 and BioNLP'13 Shared Tasks, as well as showing competitive performance in the binary relation Drug-Drug Interaction Extraction 2011 and 2013 shared tasks. The Turku Event Extraction System is published as a freely available open-source project, documenting the research in detail as well as making the method available for practical applications. In particular, in this thesis we describe the application of the event extraction method to PubMed-scale text mining, showing that the developed approach not only performs well but is also generalizable and applicable to large-scale real-world text mining projects. Finally, we discuss related literature, summarize the contributions of the work and present some thoughts on future directions for biomedical event extraction. This thesis includes and builds on six original research publications. The first of these introduces the analysis of dependency parses that leads to the development of TEES. The entries in the three BioNLP Shared Tasks, as well as in the DDIExtraction 2011 task, are covered in four publications, and the sixth demonstrates the application of the system to PubMed-scale text mining.
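
As an illustration of the event representation described above, the sketch below encodes the nested event CAUSE(A, BIND(B, C)) as a small graph of entity and trigger nodes with typed, directed argument edges. It is a minimal, hypothetical encoding for exposition only; the node and role names are illustrative and do not reproduce the actual TEES graph format or the shared task type system.

```python
# Hypothetical encoding of the nested event CAUSE(A, BIND(B, C)) from the
# sentence "Protein A causes protein B to bind protein C". Node and role
# names are illustrative, not the actual TEES format.

nodes = {
    "T1": {"kind": "entity", "type": "Protein", "text": "A"},
    "T2": {"kind": "entity", "type": "Protein", "text": "B"},
    "T3": {"kind": "entity", "type": "Protein", "text": "C"},
    "E1": {"kind": "event", "type": "Binding", "trigger": "bind"},
    "E2": {"kind": "event", "type": "Causation", "trigger": "causes"},
}

# Directed, typed argument edges; event nesting is simply an edge whose
# target is another event node (E2 -> E1).
edges = [
    ("E1", "T2", "Theme"),
    ("E1", "T3", "Theme"),
    ("E2", "T1", "Cause"),
    ("E2", "E1", "Theme"),
]

def arguments(event_id):
    """Return the (role, target) pairs of an event node."""
    return [(role, dst) for src, dst, role in edges if src == event_id]

for eid in ("E1", "E2"):
    print(eid, nodes[eid]["type"], arguments(eid))
```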

Relevance: 10.00%

Publisher:

Abstract:

With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the current popular programming languages are inherently sequential and introducing parallelism is typically left to the programmer. Dataflow, however, is inherently parallel: it describes an application as a directed graph, where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only allowed communication between the nodes, making the dependencies between the nodes explicit and thereby also the parallelism. Once a node has sufficient inputs available, it can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is a very natural one within this field; digital filters are typically described with boxes and arrows also in textbooks. Dataflow is also becoming more interesting in other domains, and in principle any application working on an information stream fits the dataflow paradigm. Such applications include, among others, network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of in a conventional programming language makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used. The explicit parallelism of a dataflow program is descriptive and enables improved utilization of the available processing units; however, the independent nodes also imply that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes run concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable models, with minimal scheduling overhead, to dynamic models where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language that in the general case requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run time, while most decisions are pre-calculated. The result is then an as small as possible set of static schedules that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information needed to generate a minimal but complete model to be used for model checking. The model must describe everything that may affect the scheduling of the application while omitting everything else in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined which is able to produce quasi-static schedulers for a wide range of applications. The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is optimized by the development tools, in the context of design space exploration, to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
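
The core dataflow idea described above — nodes that communicate only through queues and fire independently whenever enough inputs are available — can be sketched in a few lines. The toy model below is purely illustrative; it is not RVC-CAL and makes no attempt at the quasi-static scheduling discussed in the thesis.

```python
from collections import deque

class Queue:
    """FIFO edge of the dataflow graph: the only allowed communication channel."""
    def __init__(self):
        self._items = deque()
    def push(self, value):
        self._items.append(value)
    def pop(self):
        return self._items.popleft()
    def __len__(self):
        return len(self._items)

class Node:
    """A dataflow actor: it fires when every input queue holds at least one token."""
    def __init__(self, func, inputs, outputs):
        self.func, self.inputs, self.outputs = func, inputs, outputs
    def can_fire(self):
        return all(len(q) > 0 for q in self.inputs)
    def fire(self):
        args = [q.pop() for q in self.inputs]   # consume inputs
        result = self.func(*args)               # perform the calculation
        for q in self.outputs:
            q.push(result)                      # produce outputs

# A tiny graph computing (a + b) * 2, driven by a naive dynamic scheduler
# that repeatedly fires any fireable node.
a, b, s, out = Queue(), Queue(), Queue(), Queue()
adder = Node(lambda x, y: x + y, [a, b], [s])
doubler = Node(lambda x: 2 * x, [s], [out])

a.push(3); b.push(4)
nodes = [adder, doubler]
while any(n.can_fire() for n in nodes):
    for n in nodes:
        if n.can_fire():
            n.fire()
print(out.pop())  # -> 14
```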

Relevance: 10.00%

Publisher:

Abstract:

The amount of biological data has grown exponentially in recent decades. Modern biotechnologies, such as microarrays and next-generation sequencing, are capable of producing massive amounts of biomedical data in a single experiment. As the amount of data is rapidly growing, there is an urgent need for reliable computational methods for analyzing and visualizing it. This thesis addresses this need by studying how to efficiently and reliably analyze and visualize high-dimensional data, especially data obtained from gene expression microarray experiments. First, we study ways to improve the quality of microarray data by replacing (imputing) the missing data entries with estimated values. Missing value imputation is commonly used to make the original incomplete data complete, thus making it easier to analyze with statistical and computational methods. Our novel approach was to use curated external biological information as a guide for the missing value imputation. Secondly, we studied the effect of missing value imputation on downstream data analysis methods such as clustering. We compared multiple recent imputation algorithms on 8 publicly available microarray data sets. It was observed that missing value imputation is indeed a rational way to improve the quality of biological data. The research revealed differences between the clustering results obtained with different imputation methods. On most data sets, the simple and fast k-NN imputation was good enough, but there was also a need for more advanced imputation methods, such as the Bayesian Principal Component Algorithm (BPCA). Finally, we studied the visualization of biological network data. Biological interaction networks are an example of the outcome of multiple biological experiments, such as those using gene microarray techniques. Such networks are typically very large and highly connected, so there is a need for fast algorithms that produce visually pleasing layouts. A computationally efficient way to produce layouts of large biological interaction networks was developed. The algorithm uses multilevel optimization within a standard force-directed graph layout algorithm.
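
As a concrete reference point for the imputation step discussed above, the sketch below implements plain k-NN missing value imputation: each missing entry is replaced by the average of that column over the k rows that are closest on the jointly observed columns. It is a minimal baseline for illustration, not the biology-guided imputation or the BPCA method studied in the thesis.

```python
import numpy as np

def knn_impute(X, k=3):
    """Replace NaNs row by row with the column means of the k nearest rows,
    where distance is computed over the columns both rows observe."""
    X = np.asarray(X, dtype=float)
    filled = X.copy()
    for i, row in enumerate(X):
        missing = np.isnan(row)
        if not missing.any():
            continue
        candidates = []
        for j, other in enumerate(X):
            shared = ~missing & ~np.isnan(other)
            if j != i and shared.any():
                dist = np.sqrt(np.mean((row[shared] - other[shared]) ** 2))
                candidates.append((dist, j))
        neighbours = [j for _, j in sorted(candidates)[:k]]
        for col in np.where(missing)[0]:
            values = [X[j, col] for j in neighbours if not np.isnan(X[j, col])]
            if values:
                filled[i, col] = np.mean(values)
    return filled

X = [[1.0, 2.0, 3.0],
     [1.1, np.nan, 3.1],
     [0.9, 2.1, 2.9],
     [5.0, 6.0, 7.0]]
print(knn_impute(X, k=2))   # the NaN becomes the mean of 2.0 and 2.1
```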

Relevance: 10.00%

Publisher:

Abstract:

The identification of low-dimensional structures and of the main sources of variation in multivariate data is a fundamental task in data analysis. Many methods aimed at this task involve the solution of an optimization problem. Thus, the objective of this thesis is to develop computationally efficient and theoretically justified methods for solving such problems. Most of the thesis is based on a statistical model in which ridges of the density estimated from the data are considered as relevant features. Finding ridges, which are generalized maxima, necessitates the development of advanced optimization methods. An efficient and convergent trust region Newton method for projecting a point onto a ridge of the underlying density is developed for this purpose. The method is utilized in a differential equation-based approach for tracing ridges and computing projection coordinates along them. The density estimation is done nonparametrically using Gaussian kernels. This allows the application of ridge-based methods with only mild assumptions on the underlying structure of the data. The statistical model and the ridge finding methods are adapted to two different applications. The first one is the extraction of curvilinear structures from noisy data mixed with background clutter. The second one is a novel nonlinear generalization of principal component analysis (PCA) and its extension to time series data. The methods have a wide range of potential applications where most of the earlier approaches are inadequate. Examples include the identification of faults from seismic data and the identification of filaments from cosmological data. The applicability of the nonlinear PCA to climate analysis and to the reconstruction of periodic patterns from noisy time series data is also demonstrated. Other contributions of the thesis include the development of an efficient semidefinite optimization method for embedding graphs into Euclidean space. The method produces structure-preserving embeddings that maximize interpoint distances. It is primarily developed for dimensionality reduction, but it also has potential applications in graph theory and various areas of physics, chemistry and engineering. The asymptotic behaviour of ridges and maxima of Gaussian kernel densities is also investigated as the kernel bandwidth approaches infinity. The results are applied to the nonlinear PCA and to finding significant maxima of such densities, which is a typical problem in visual object tracking.
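
To give a flavour of what projecting a point onto a ridge of a kernel density involves, the sketch below performs a subspace-constrained mean-shift style iteration: at each step, the mean-shift vector of a Gaussian kernel density estimate is projected onto the span of the Hessian eigenvectors with the smallest eigenvalues. This is only an illustration of the ridge concept under those assumptions; it is not the trust-region Newton method or the differential equation-based tracing developed in the thesis.

```python
import numpy as np

def ridge_step(x, data, h):
    """One subspace-constrained mean-shift step towards a one-dimensional ridge
    of a Gaussian kernel density estimate with bandwidth h."""
    diffs = x - data                                            # (n, d)
    w = np.exp(-np.sum(diffs ** 2, axis=1) / (2 * h ** 2))      # kernel weights
    # Hessian of the (unnormalised) density estimate at x.
    hess = (np.einsum("n,ni,nj->ij", w, diffs, diffs) / h ** 4
            - w.sum() * np.eye(x.size) / h ** 2)
    shift = (w[:, None] * data).sum(axis=0) / w.sum() - x       # mean-shift vector
    _, eigvecs = np.linalg.eigh(hess)                           # ascending eigenvalues
    V = eigvecs[:, :-1]                  # span of all but the largest eigenvalue
    return x + V @ (V.T @ shift)         # move only within that subspace

# Noisy samples along a parabola; the iteration pulls a nearby point onto the
# ridge that traces the underlying curve.
rng = np.random.default_rng(0)
t = rng.uniform(-2.0, 2.0, 500)
data = np.column_stack([t, t ** 2]) + 0.1 * rng.normal(size=(500, 2))
x = np.array([0.5, 1.0])
for _ in range(100):
    x = ridge_step(x, data, h=0.3)
print(x)   # ends up approximately on the curve y = x**2
```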

Relevance: 10.00%

Publisher:

Abstract:

Clinical decision support systems are useful tools for assisting physicians to diagnose complex illnesses. Schizophrenia is a complex, heterogeneous and incapacitating mental disorder that should be detected as early as possible to avoid its most serious outcomes. Such artificial intelligence systems might be useful in the early detection of schizophrenia. The objective of the present study was to describe the development of such a clinical decision support system for the diagnosis of schizophrenia spectrum disorders (SADDESQ). The development of this system is described in four stages: knowledge acquisition, knowledge organization, the development of a computer-assisted model, and the evaluation of the system's performance. The knowledge was extracted from an expert through open interviews, which aimed to explore the expert's decision-making process in diagnosing schizophrenia. A graph methodology was employed to identify the elements involved in the reasoning process. Knowledge was first organized and modeled by means of algorithms and then transferred to a computational model created by the covering approach. The performance assessment involved the comparison of the diagnoses of 38 clinical vignettes made by an expert and by SADDESQ. The results showed a relatively low rate of misclassification (18-34%) and a good performance by SADDESQ in the diagnosis of schizophrenia, with an accuracy of 66-82%. The accuracy was higher when schizophreniform disorder was considered as the presence of schizophrenia. Although these results are preliminary, SADDESQ has exhibited satisfactory performance, which needs to be further evaluated within a clinical setting.
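
The "covering approach" mentioned above can be pictured as a set-covering style model: each diagnostic hypothesis is associated with the findings it explains, and hypotheses are ranked by how much of the observed picture they cover. The sketch below is a toy, hypothetical illustration of that idea only; the disorders, findings and scoring rule are invented for exposition and are not the SADDESQ knowledge base.

```python
# Toy set-covering style diagnostic model. The disorders, findings and the
# simple coverage score are hypothetical -- not the SADDESQ knowledge base.
KNOWLEDGE = {
    "schizophrenia": {"delusions", "hallucinations", "disorganized_speech",
                      "negative_symptoms", "duration_over_6_months"},
    "schizophreniform_disorder": {"delusions", "hallucinations",
                                  "disorganized_speech", "duration_1_to_6_months"},
    "brief_psychotic_disorder": {"delusions", "duration_under_1_month"},
}

def rank_hypotheses(observed):
    """Rank disorders by the fraction of the observed findings they cover."""
    scores = {d: len(observed & explained) / len(observed)
              for d, explained in KNOWLEDGE.items()}
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

case = {"delusions", "hallucinations", "negative_symptoms", "duration_over_6_months"}
for disorder, score in rank_hypotheses(case):
    print(f"{disorder}: covers {score:.0%} of the observed findings")
```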

Relevance: 10.00%

Publisher:

Abstract:

The advancement of science and technology makes it clear that no single perspective is any longer sufficient to describe the true nature of any phenomenon. That is why interdisciplinary research is gaining more attention over time. An excellent example of this type of research is natural computing, which stands on the borderline between biology and computer science. The contribution of research done in natural computing is twofold: on one hand, it sheds light on how nature works and how it processes information and, on the other hand, it provides some guidelines on how to design bio-inspired technologies. The first direction in this thesis focuses on a nature-inspired process called gene assembly in ciliates. The second one studies reaction systems, a modeling framework whose rationale is built upon the biochemical interactions happening within a cell. The process of gene assembly in ciliates has attracted a lot of attention as a research topic over the past 15 years. Two main modelling frameworks were initially proposed at the end of the 1990s to capture the gene assembly process in ciliates, namely the intermolecular model and the intramolecular model. They were followed by other model proposals such as template-based assembly and DNA rearrangement pathways recombination models. In this thesis we are interested in a variation of the intramolecular model called the simple gene assembly model, which focuses on the simplest possible folds in the assembly process. We propose a new framework called directed overlap-inclusion (DOI) graphs to overcome the limitations that previously introduced models faced in capturing all the combinatorial details of the simple gene assembly process. We investigate a number of combinatorial properties of these graphs, including a necessary property in terms of forbidden induced subgraphs. We also introduce DOI graph-based rewriting rules that capture all the operations of the simple gene assembly model and prove that they are equivalent to the string-based formalization of the model. Reaction systems (RS) are another nature-inspired modeling framework studied in this thesis. Their rationale is based upon two main regulation mechanisms, facilitation and inhibition, which control the interactions between biochemical reactions. Reaction systems are a complementary modeling framework to traditional quantitative frameworks, focusing on explicit cause-effect relationships between reactions. The explicit formulation of the facilitation and inhibition mechanisms behind reactions, as well as the focus on interactions between reactions (rather than on the dynamics of concentrations), makes their applicability potentially wide and useful beyond biological case studies. In this thesis, we construct a reaction system model corresponding to the heat shock response mechanism, based on a novel concept of dominance graph that captures the competition for resources in the ODE model. We also introduce for RS various concepts inspired by biology, e.g., mass conservation, steady states and periodicity, in order to model check reaction system-based models. We prove that the complexity of the decision problems related to these properties varies from P to NP-complete and coNP-complete to PSPACE-complete. We further focus on the mass conservation relation in an RS and introduce the conservation dependency graph to capture the relation between the species, and we propose an algorithm for listing the conserved sets of a given reaction system.
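
For readers unfamiliar with reaction systems, the facilitation/inhibition semantics can be stated in a few lines: a reaction is enabled in a state exactly when all of its reactants are present and none of its inhibitors are, and the next state is the union of the products of all enabled reactions (with no permanency of unused entities). The sketch below follows this standard definition with a made-up reaction set; it is not the heat shock response model or the dominance graph construction of the thesis.

```python
# A reaction is a triple (reactants, inhibitors, products). The reactions
# below are toy examples, not the thesis's heat shock response model.
REACTIONS = [
    (frozenset({"a", "b"}), frozenset({"x"}), frozenset({"c"})),
    (frozenset({"c"}),      frozenset({"a"}), frozenset({"d"})),
    (frozenset({"d"}),      frozenset(),      frozenset({"a", "b"})),
]

def enabled(reaction, state):
    """Facilitation and inhibition: all reactants present, no inhibitor present."""
    reactants, inhibitors, _ = reaction
    return reactants <= state and not (inhibitors & state)

def result(state, reactions=REACTIONS):
    """Next state: union of the products of every enabled reaction."""
    out = set()
    for reaction in reactions:
        if enabled(reaction, state):
            out |= reaction[2]
    return out

state = {"a", "b"}
for _ in range(6):
    print(sorted(state))
    state = result(state)   # unused entities vanish (no permanency)
```

Incidentally, this little example settles into a cycle of length three, illustrating the kind of periodicity property the abstract mentions as a target of model checking.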

Relevance: 10.00%

Publisher:

Abstract:

The Baltic Sea is a unique environment that contains unique genetic populations. In order to study these populations at the genetic level, basic molecular research is needed. The aim of this thesis was to provide a basic genetic resource for population genomic studies by de novo assembling a transcriptome for the Baltic Sea isopod Idotea balthica. RNA was extracted from a whole single adult male isopod and sequenced using Illumina (125 bp PE) RNA-Seq. The reads were preprocessed using FASTQC for quality control, TRIMMOMATIC for trimming, and RCORRECTOR for error correction. The preprocessed reads were then assembled with TRINITY, a de Bruijn graph-based assembler, using different k-mer sizes. The different assemblies were combined and clustered using CD-HIT. The assemblies were evaluated using TRANSRATE for quality and filtering, BUSCO for completeness, and TRANSDECODER for annotation potential. The 25-mer assembly was annotated using PANNZER (protein annotation with z-score) and BLASTX. The 25-mer assembly represents the best first-draft assembly since it contains the most information. However, this assembly shows high levels of polymorphism, which currently cannot be differentiated into paralogs and allelic variants. Furthermore, the assembly is incomplete, which could be improved by sampling additional developmental stages.
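
Since TRINITY is described above as a de Bruijn graph-based assembler run with different k-mer sizes, a toy construction may help make the k-mer idea concrete. The sketch below builds a tiny de Bruijn graph from a handful of invented reads; it illustrates only the data structure, not the Trinity assembly algorithm or the actual pipeline commands.

```python
from collections import defaultdict

def de_bruijn_graph(reads, k):
    """Toy de Bruijn graph: nodes are (k-1)-mers, and every k-mer in the reads
    contributes a directed edge from its prefix to its suffix."""
    graph = defaultdict(set)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].add(kmer[1:])
    return graph

reads = ["ATGGCGTGC", "GGCGTGCAA", "GTGCAATTC"]   # invented overlapping reads
for k in (4, 6):
    g = de_bruijn_graph(reads, k)
    edges = sum(len(targets) for targets in g.values())
    print(f"k={k}: {len(g)} prefix nodes, {edges} edges")
```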

Relevance: 10.00%

Publisher:

Abstract:

This study assessed the effectiveness of a reciprocal teaching program as a method of teaching reading comprehension, using narrative text material in a typical grade seven classroom. In order to determine the effectiveness of the reciprocal teaching program, this method was compared to two other reading instruction approaches that, unlike reciprocal teaching, did not include social interaction components. Two intact grade seven classes, and a grade seven teacher, participated in this study. Students were appropriately assigned to three treatment groups by reading achievement level as determined from a norm-referenced test. Training proceeded for a five-week intervention period during regularly scheduled English periods. Throughout the program, curriculum-based tests were administered. These tests were designed to assess comprehension in two distinct ways; namely, character analysis components as they relate to narrative text, and strategy use components as they contribute to student understanding of narrative and expository text. Pre, post, and maintenance tests were administered to measure overall training effects. Moreover, during the intervention, training probes were administered in the last period of each week to evaluate treatment group performance. All curriculum-based tests were coded, and comparisons of pre, post, and maintenance tests and training probes were presented in graph form. Results showed that the reciprocal group achieved some improvement in reading comprehension scores in the strategy use component of the tests. No improvements were observed for the character analysis components of the curriculum-based tests or for the norm-referenced tests. At pre and post intervention, interviews requiring students to respond to questions that addressed metacomprehension awareness of study strategies were administered. The interviews were coded and comparisons were made between the two interviews. No significant improvements were observed regarding student awareness of the ten identified study strategies. This study indicated that reciprocal teaching is a viable approach that can be utilized to help students acquire more effective comprehension strategies. However, the maximum utility of the technique when administered to a population of grade seven students performing at average to above-average levels of reading achievement has yet to be determined. In order to explore this issue, the refinement of training materials and curriculum-based measurements needs to be explored. As well, this study revealed that reciprocal teaching placed heavier demands on the classroom teacher when compared to other reading instruction methods. This may suggest that innovative and intensive teacher training techniques are required before it is feasible to use this method in the classroom.

Relevance: 10.00%

Publisher:

Abstract:

This study assessed the usefulness of a cognitive behavior modification (CBM) intervention package with mentally retarded students in overcoming learned helplessness and improving learning strategies. It also examined the feasibility of instructing teachers in the use of such a training program in a classroom setting. A modified single subject design across individuals was employed using two groups of three subjects. Three students from each of two segregated schools for the mentally retarded were selected using a teacher questionnaire and a pupil checklist of the most learned-helpless students enrolled there. Three additional learned helplessness assessments were conducted on each subject before and after the intervention in order to evaluate the usefulness of the program in alleviating learned helplessness. A classroom environment was created, with the three students from each school engaged in three twenty-minute work sessions a week with the experimenter and a tutor experimenter (TE) as instructors. Baseline measurements were established on seven targeted behaviors for each subject: task-relevant speech, task-irrelevant speech, speech denoting a positive evaluation of performance, speech denoting a negative evaluation of performance, proportion of time on task, non-verbal positive evaluation of performance and non-verbal negative evaluation of performance. The intervention package combined a variety of CBM techniques such as Meichenbaum's (1977) Stop, Look and Listen approach, role rehearsal and feedback. During the intervention each subject met with his TE twice a week for an individual half-hour session and for one joint twenty-minute session with all three students, the experimenter and one TE. Five weeks after the end of the experiment, one follow-up probe was conducted. All baseline, post-intervention and probe sessions were videotaped. The seven targeted behaviors were coded, and comparisons of baseline, post-intervention, and probe testing were presented in graph form. Results showed a reduction in learned helplessness in all subjects. Improvement was noted in each of the seven targeted behaviors for each of the six subjects. This study indicated that mentally retarded children can be taught to reduce learned helplessness with the aid of a CBM intervention package. It also showed that CBM is a viable approach for helping mentally retarded students acquire more effective learning strategies. Because the TEs (tutor experimenters) had no trouble learning and implementing this program, it was considered feasible for teachers to use similar methods in the classroom.

Relevance: 10.00%

Publisher:

Abstract:

The (n, k)-star interconnection network was proposed in 1995 as an attractive alternative to the n-star topology in parallel computation. The (n, k)-star has significant advantages over the n-star, which itself was proposed as an attractive alternative to the popular hypercube. The major advantage of the (n, k)-star network is its scalability, which makes it more flexible than the n-star as an interconnection network. In this thesis, we focus on finding graph-theoretical properties of the (n, k)-star as well as developing parallel algorithms that run on this network. The basic topological properties of the (n, k)-star are first studied. These are useful since they can be used to develop efficient algorithms on this network. We then study the (n, k)-star network from an algorithmic point of view. Specifically, we investigate both fundamental and application algorithms for basic communication, prefix computation, sorting, etc. A literature review of the state of the art in relation to the (n, k)-star network and some open problems in this area are also provided.
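
For concreteness, the sketch below builds the (n, k)-star graph from its usual definition: vertices are the k-permutations of {1, ..., n}, and a vertex's neighbours are obtained either by swapping the first symbol with the symbol in position i (2 ≤ i ≤ k) or by replacing the first symbol with one of the n − k unused symbols. This is only a brute-force illustration of the topology (every vertex has degree n − 1), not one of the parallel algorithms developed in the thesis.

```python
from itertools import permutations

def nk_star(n, k):
    """Build the (n, k)-star graph: vertices are k-permutations of {1..n};
    neighbours arise from (i) swapping position 1 with position i (2 <= i <= k),
    or (ii) replacing the symbol in position 1 with an unused symbol."""
    vertices = list(permutations(range(1, n + 1), k))
    adj = {v: set() for v in vertices}
    for v in vertices:
        for i in range(1, k):                     # star edges: swap 1st and i-th
            u = list(v)
            u[0], u[i] = u[i], u[0]
            adj[v].add(tuple(u))
        for s in set(range(1, n + 1)) - set(v):   # residual edges: new 1st symbol
            adj[v].add((s,) + v[1:])
    return adj

adj = nk_star(4, 2)
print(len(adj))                                   # 4!/(4-2)! = 12 vertices
print({len(nbrs) for nbrs in adj.values()})       # every vertex has degree n-1 = 3
```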

Relevance: 10.00%

Publisher:

Abstract:

The (n, k)-arrangement interconnection topology was first introduced in 1992. The (n, k)-arrangement graph is a class of generalized star graphs. Compared with the well-known n-star, the (n, k)-arrangement graph is more flexible in degree and diameter. However, few algorithms have been designed for the (n, k)-arrangement graph up to the present. In this thesis, we focus on finding graph-theoretical properties of the (n, k)-arrangement graph and on developing parallel algorithms that run on this network. The topological properties of the arrangement graph, including its cyclic properties, are first studied. We then study the problems of communication, namely broadcasting and routing; embedding problems are also studied later on. These are very useful for developing efficient algorithms on this network. We then study the (n, k)-arrangement network from the algorithmic point of view. Specifically, we investigate both fundamental and application algorithms such as prefix sums computation, sorting, merging and basic geometric computation, namely finding the convex hull, on the (n, k)-arrangement graph. A literature review of the state of the art in relation to the (n, k)-arrangement network is also provided, as well as some open problems in this area.
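
The sketch below builds the (n, k)-arrangement graph directly from its definition: vertices are the k-permutations of {1, ..., n}, and two vertices are adjacent exactly when they differ in a single position, i.e. one symbol is replaced by a symbol not used elsewhere in the vertex. It is a brute-force illustration only, useful for checking small cases such as the regular degree k(n − k); it is not one of the algorithms studied in the thesis.

```python
from itertools import permutations

def arrangement_graph(n, k):
    """(n, k)-arrangement graph: vertices are k-permutations of {1..n}; two
    vertices are adjacent iff they differ in exactly one position, i.e. one
    symbol is replaced by a symbol not used elsewhere in the vertex."""
    vertices = list(permutations(range(1, n + 1), k))
    adj = {v: set() for v in vertices}
    for v in vertices:
        unused = set(range(1, n + 1)) - set(v)
        for pos in range(k):
            for s in unused:
                adj[v].add(v[:pos] + (s,) + v[pos + 1:])
    return adj

adj = arrangement_graph(4, 2)
print(len(adj))                                   # 12 vertices
print({len(nbrs) for nbrs in adj.values()})       # regular of degree k*(n-k) = 4
```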

Relevance: 10.00%

Publisher:

Abstract:

Root and root finding are concepts familiar to most branches of mathematics. In graph theory, H is a square root of G, and G is the square of H, if two vertices x, y have an edge in G if and only if x and y are at distance at most two in H. The graph square is a basic operation, with a number of results about its properties in the literature. We study the characterization and recognition problems of graph powers. There are algorithmic and computational approaches to answer the decision problem of whether a given graph is a certain power of any graph. There are polynomial time algorithms to solve this problem for squares of graphs with girth at least six, while NP-completeness has been proven for squares of graphs with girth at most four. The girth-parameterized problem of root finding had been open in the case of squares of graphs with girth five. We settle the conjecture that recognition of squares of graphs with girth five is NP-complete. This result provides the complete dichotomy theorem for the square root finding problem.
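
The squaring operation defined above is easy to state in code: x and y are adjacent in the square of H exactly when they are adjacent in H or share a common neighbour. The brute-force sketch below illustrates the definition on a small path graph; recognizing whether a given graph is such a square (the root-finding direction) is the hard problem addressed in the abstract.

```python
from itertools import combinations

def square(H):
    """Square of a graph H (adjacency given as a dict of sets): x and y are
    adjacent in H^2 iff they are at distance one or two in H."""
    G = {v: set() for v in H}
    for x, y in combinations(H, 2):
        if y in H[x] or (H[x] & H[y]):   # distance 1, or a common neighbour
            G[x].add(y)
            G[y].add(x)
    return G

# A path a-b-c-d: its square adds the "distance two" edges a-c and b-d.
H = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
G = square(H)
print(sorted((u, v) for u in G for v in G[u] if u < v))
# -> [('a', 'b'), ('a', 'c'), ('b', 'c'), ('b', 'd'), ('c', 'd')]
```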