881 results for Task Assignment


Relevance: 20.00%

Abstract:

Background: The study and analysis of gene expression measurements is the primary focus of functional genomics. Once expression data are available, biologists are faced with the task of extracting new knowledge about the underlying biological phenomenon. Most often, biologists perform a number of analysis activities on the available gene expression dataset rather than a single one. Integrating heterogeneous tools and data sources into a unified analysis environment is a challenging and error-prone task. Semantic integration enables the assignment of unambiguous meanings to data shared among different applications in an integrated environment, allowing data to be exchanged in a semantically consistent and meaningful way. This work aims at developing an ontology-based methodology for the semantic integration of gene expression analysis tools and data sources. The proposed methodology relies on software connectors to support not only access to heterogeneous data sources but also the definition of transformation rules on exchanged data.

Results: We studied the challenges involved in integrating computer systems and the role software connectors play in this task. We also studied a number of gene expression technologies, analysis tools and related ontologies in order to devise basic integration scenarios and propose a reference ontology for the gene expression domain. We then defined a set of activities and associated guidelines that prescribe how the development of connectors should be carried out. Finally, we applied the proposed methodology to the construction of three integration scenarios involving different tools for the analysis of different types of gene expression data.

Conclusions: The proposed methodology facilitates the development of connectors capable of semantically integrating different gene expression analysis tools and data sources. It can be used to develop connectors supporting both simple and nontrivial processing requirements, thus ensuring accurate data exchange and correct interpretation of the exchanged data.
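
To make the connector idea concrete, the following is a minimal sketch in Python of a software connector that uses an ontology-style term mapping to translate records between two hypothetical tools; all names (TERM_MAP, TRANSFORM_RULES, the field names) are illustrative assumptions, not artifacts of the thesis.

```python
# Minimal sketch of an ontology-backed software connector (illustrative only).
import math

# The mapping table stands in for a reference ontology: it assigns one
# unambiguous term to each tool-specific field name.
TERM_MAP = {
    "gene_id": "gene_identifier",      # vocabulary of hypothetical tool A
    "GeneSymbol": "gene_identifier",   # vocabulary of hypothetical tool B
    "expr": "expression_level",
    "ExpressionValue": "expression_level",
}

def log2_transform(value):
    """Example transformation rule applied to exchanged data."""
    return math.log2(value) if value > 0 else None

TRANSFORM_RULES = {"expression_level": log2_transform}

def connect(record):
    """Translate a tool-specific record into the shared vocabulary,
    applying the registered transformation rules."""
    out = {}
    for field, value in record.items():
        term = TERM_MAP.get(field)
        if term is None:
            continue  # field has no agreed meaning; drop it
        rule = TRANSFORM_RULES.get(term)
        out[term] = rule(value) if rule else value
    return out

# A record produced by "tool B" becomes consumable under the shared terms:
print(connect({"GeneSymbol": "TP53", "ExpressionValue": 8.0}))
# -> {'gene_identifier': 'TP53', 'expression_level': 3.0}
```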

Relevance: 20.00%

Abstract:

Objectives: The study investigated to what extent task-specific practice can reduce the adverse effects of high pressure on performance in a simulated penalty kick task. Based on the assumption that practice attenuates the attentional resources required, it was hypothesized that task-specific practice would enhance resilience against high pressure.

Method: Participants practiced a simulated penalty kick in which they had to move a lever to the side opposite the goalkeeper's dive. The goalkeeper moved at different times before ball contact.

Design: Before and after task-specific practice, participants were tested on the same task under both low- and high-pressure conditions.

Results: Before practice, performance of all participants worsened under high pressure. However, whereas one group of participants merely required more time to respond correctly to the goalkeeper's movement and showed the typical logistic relation between the percentage of correct responses and the time available to respond, a second group showed a linear relationship between the two, implying that they tended to make systematic errors at the shortest available times. Practice eliminated the debilitating effects of high pressure in the former group, whereas in the latter group high pressure continued to affect performance negatively.

Conclusions: Task-specific practice increased resilience to high pressure, but the effect depended on how participants initially responded to high pressure, i.e., prior to practice. The results are discussed within the framework of attentional control theory (ACT).
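
The contrast between the two response patterns can be made explicit with a small model-fitting sketch: a logistic and a linear curve are fitted to percentage-correct data as a function of the time available to respond. The data and parameter values below are invented for illustration, not taken from the study.

```python
# Sketch: the two response patterns described above, as competing model fits.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, t0, s):
    """P(correct) rises sigmoidally with the time available t (seconds)."""
    return 1.0 / (1.0 + np.exp(-(t - t0) / s))

def linear(t, a, b):
    """P(correct) rises linearly with the time available t."""
    return a * t + b

t = np.array([0.2, 0.3, 0.4, 0.5, 0.6, 0.7])        # time before ball contact
p = np.array([0.15, 0.35, 0.60, 0.80, 0.92, 0.97])   # fraction correct (fake data)

popt_log, _ = curve_fit(logistic, t, p, p0=[0.4, 0.1])
popt_lin, _ = curve_fit(linear, t, p)

# Compare residuals: a logistic-like responder is fit better by `logistic`,
# a linear responder (systematic errors at short times) by `linear`.
for name, f, popt in [("logistic", logistic, popt_log), ("linear", linear, popt_lin)]:
    sse = np.sum((p - f(t, *popt)) ** 2)
    print(f"{name}: params={np.round(popt, 3)}, SSE={sse:.4f}")
```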

Relevance: 20.00%

Abstract:

The vast majority of known proteins have not yet been experimentally characterized, and little is known about their function. The design and implementation of computational tools can provide insight into the function of proteins based on their sequence, their structure, their evolutionary history and their association with other proteins. Knowledge of the three-dimensional (3D) structure of a protein can lead to a deep understanding of its mode of action and interaction, but currently the structures of <1% of sequences have been experimentally solved. For this reason, it has become urgent to develop new methods able to computationally extract relevant information from protein sequence and structure.

The starting point of my work has been the study of the properties of contacts between protein residues, since they constrain protein folding and characterize different protein structures. Prediction of residue contacts in proteins is an interesting problem whose solution may be useful in protein fold recognition and de novo design. Predicting these contacts requires the study of protein inter-residue distances for each specific type of amino acid pair, which are encoded in the so-called contact map. An interesting new way of analyzing those structures emerged when network studies were introduced, with pivotal papers demonstrating that protein contact networks also exhibit small-world behavior. In order to highlight constraints for the prediction of protein contact maps, and for applications in protein structure prediction and/or reconstruction from experimentally determined contact maps, I studied to what extent the characteristic path length and clustering coefficient of the protein contact network reveal characteristic features of protein contact maps. Provided that residue contacts are known for a protein sequence, the major features of its 3D structure can be deduced by combining this knowledge with correctly predicted secondary-structure motifs.

In the second part of my work I focused on a particular protein structural motif, the coiled-coil, known to mediate a variety of fundamental biological interactions. Coiled-coils are found in a variety of structural forms and in a wide range of proteins including, for example, small units such as the leucine zippers that drive the dimerization of many transcription factors, and more complex structures such as the family of viral proteins responsible for virus-host membrane fusion. The coiled-coil structural motif is estimated to account for 5-10% of the protein sequences in the various genomes. Given their biological importance, I introduced a Hidden Markov Model (HMM) that exploits the evolutionary information derived from multiple sequence alignments to predict coiled-coil regions and to discriminate coiled-coil sequences. The results indicate that the new HMM outperforms the existing programs and can be adopted for coiled-coil prediction and for large-scale genome annotation.

Genome annotation is a key issue in modern computational biology, being the starting point towards the understanding of the complex processes involved in biological networks. The rapid growth in the number of available protein sequences and structures poses new fundamental problems that still await interpretation. Nevertheless, these data are the basis for designing new strategies for tackling problems such as the prediction of protein structure and function.
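
As a rough illustration of the network metrics mentioned above, the following Python sketch builds a contact map from synthetic coordinates and computes the characteristic path length and clustering coefficient with networkx; the 8 Å contact threshold is a common convention, and everything else (sizes, coordinates) is made up rather than taken from a real protein.

```python
# Sketch: small-world metrics of a protein contact network (illustrative).
# A contact map is a symmetric boolean matrix: entry (i, j) is True when
# residues i and j are closer than a distance threshold in the 3D structure.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n = 60                                    # number of residues (toy size)
coords = rng.normal(size=(n, 3)) * 8.0    # fake C-alpha coordinates

dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
contact_map = (dist < 8.0) & ~np.eye(n, dtype=bool)   # 8 A threshold, no self-contacts

G = nx.from_numpy_array(contact_map.astype(int))
if nx.is_connected(G):
    print("characteristic path length:", round(nx.average_shortest_path_length(G), 2))
else:
    # Restrict to the largest component so the path length is defined.
    G = G.subgraph(max(nx.connected_components(G), key=len))
    print("path length (largest component):", round(nx.average_shortest_path_length(G), 2))
print("clustering coefficient:", round(nx.average_clustering(G), 2))
```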
Experimental determination of the functions of all these proteins would be a hugely time-consuming and costly task and, in most instances, has not been carried out. For example, only about 20% of the annotated proteins in the Homo sapiens genome have currently been experimentally characterized. A commonly adopted procedure for annotating protein sequences relies on "inheritance through homology", based on the notion that similar sequences share similar functions and structures. This procedure consists of assigning sequences to a specific group of functionally related sequences, grouped beforehand through clustering techniques. The clustering procedure is based on suitable similarity rules, since predicting protein structure and function from sequence largely depends on the level of sequence identity. However, additional levels of complexity arise from multi-domain proteins, from proteins that share common domains but do not necessarily share the same function, and from the finding that different combinations of shared domains can lead to different biological roles. In the last part of this study I developed and validated a system that contributes to sequence annotation by taking advantage of a validated inheritance-based transfer procedure for molecular functions and structural templates. After a cross-genome comparison with the BLAST program, clusters were built on the basis of two stringent constraints on sequence identity and on the coverage of the alignment, as sketched below. The adopted measure explicitly addresses the problem of annotating multi-domain proteins and allows a fine-grained division of the whole set of proteomes used, which ensures cluster homogeneity in terms of sequence length. A high coverage of structural templates over the length of the protein sequences within clusters ensures that multi-domain proteins, when present, can serve as templates for sequences of similar length. This annotation procedure includes the possibility of reliably transferring statistically validated functions and structures to sequences, considering the information available in the present databases of molecular functions and structures.
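
A minimal sketch of that clustering step, assuming invented BLAST hits and thresholds (the actual constraints adopted in the thesis are not reproduced here):

```python
# Sketch: grouping sequences from pairwise BLAST hits under stringent
# identity and coverage constraints (all values are illustrative).
import networkx as nx

# (query, subject, percent_identity, alignment_length, query_length, subject_length)
blast_hits = [
    ("seqA", "seqB", 92.0, 310, 320, 315),
    ("seqA", "seqC", 45.0, 150, 320, 400),   # too divergent: excluded
    ("seqB", "seqD", 88.5, 300, 315, 310),
]

MIN_IDENTITY = 80.0   # percent sequence identity
MIN_COVERAGE = 0.90   # the alignment must cover both sequences almost entirely,
                      # which keeps clusters homogeneous in sequence length

G = nx.Graph()
for q, s, ident, aln_len, qlen, slen in blast_hits:
    coverage = min(aln_len / qlen, aln_len / slen)
    if ident >= MIN_IDENTITY and coverage >= MIN_COVERAGE:
        G.add_edge(q, s)

clusters = list(nx.connected_components(G))
print(clusters)   # [{'seqA', 'seqB', 'seqD'}] -> one homogeneous cluster
```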

Relevance: 20.00%

Abstract:

A prevalent claim is that we live in a knowledge economy. When we talk about a knowledge economy, we generally mean a "knowledge-based economy", indicating the use of knowledge and technologies to produce economic benefits. Knowledge is thus both a tool and a raw material (people's skills) for producing some kind of product or service. In this kind of environment, economic organization is undergoing several changes: authority relations are less important, legal and ownership-based definitions of the boundaries of the firm are becoming irrelevant, and there are only a few constraints on the set of coordination mechanisms. What characterizes a knowledge economy is therefore the growing importance of human capital in productive processes (Foss, 2005) and the increasing knowledge intensity of jobs (Hodgson, 1999). Economic processes are also highly intertwined with social processes: they are likely to be informal and reciprocal rather than formal and negotiated. Another important point is the problem of the division of labor: as economic activity becomes mainly intellectual and requires the integration of specific and idiosyncratic skills, the task of dividing the job and assigning it to the most appropriate individuals becomes arduous, a "supervisory problem" (Hodgson, 1999) emerges, and traditional hierarchical control may prove increasingly ineffective. Not only does the specificity of know-how make it awkward to monitor the execution of tasks; more importantly, top-down integration of skills may be difficult because 'the nominal supervisors will not know the best way of doing the job – or even the precise purpose of the specialist job itself – and the worker will know better' (Hodgson, 1999). We therefore expect the organization of the economic activity of specialists to be, at least partially, self-organized.

The aim of this thesis is to bridge studies from computer science, and in particular from Peer-to-Peer (P2P) networks, to organization theories. We think that the P2P paradigm fits well with organization problems arising in all those situations in which a central authority is not possible. We believe that P2P networks show a number of characteristics similar to firms working in a knowledge-based economy, and hence that the methodology used for studying P2P networks can be applied to organization studies. There are three main characteristics we think P2P networks have in common with firms involved in the knowledge economy:

- Decentralization: in a pure P2P system every peer is an equal participant; there is no central authority governing the actions of the single peers;
- Cost of ownership: P2P computing implies shared ownership, reducing the cost of owning the systems and the content, and the cost of maintaining them;
- Self-organization: the process by which global order emerges within a system without another system dictating this order.

These characteristics are also present in the kind of firm that we try to address, and that is why we have transferred the techniques we adopted in computer science studies (Marcozzi et al., 2005; Hales et al., 2007) to management science, as in the sketch below.
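
As a hint of how such techniques carry over, here is a minimal Python sketch of decentralized self-organization, loosely inspired by the P2P rewiring protocols studied by Hales et al. rather than reproducing them; every number and rule in it is an illustrative assumption.

```python
# Sketch of decentralized self-organization (illustrative only).
# Each peer knows only its own links; comparing itself with one random peer
# and relocating is enough for global order to emerge with no central authority.
import random

random.seed(1)
N = 100
coop = [random.random() < 0.5 for _ in range(N)]   # fixed peer "quality" flags
nbrs = [set(random.sample([j for j in range(N) if j != i], 4)) for i in range(N)]

def utility(i):
    """A peer benefits from each cooperative neighbor it links to."""
    return sum(coop[j] for j in nbrs[i])

def average_utility():
    return sum(utility(i) for i in range(N)) / N

print("average utility before:", average_utility())

for _ in range(5000):
    a, b = random.sample(range(N), 2)
    if utility(b) > utility(a):            # purely local, pairwise decision
        nbrs[a] = set(nbrs[b]) | {b}       # relocate next to the better peer
        nbrs[a].discard(a)

print("average utility after:", average_utility())   # rises: order emerged bottom-up
```

Every accepted move strictly improves the moving peer's utility and touches no one else's links, so the average utility can only grow, even though no node ever sees the global state.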

Relevance: 20.00%

Abstract:

In the collective imagination a robot is a human-like machine, like the androids of science fiction. However, the robots you will encounter most frequently are machines that do work that is too dangerous, boring or onerous for humans. Most of the robots in the world are of this type; they can be found in the automotive, medical, manufacturing and space industries. A robot is therefore a system that contains sensors, control systems, manipulators, power supplies and software, all working together to perform a task. The development and use of such systems is an active area of research, and one of the main problems is the development of interaction skills with the surrounding environment, which include the ability to grasp objects. To perform this task the robot needs to sense the environment and acquire information about the object, namely the physical attributes that may influence a grasp. Humans solve this grasping problem easily thanks to their past experience, which is why many researchers approach it from a machine learning perspective, inferring the grasp of an object from information about already known objects. But humans can select the best grasp among a vast repertoire not only by considering the physical attributes of the object to grasp but also the effect they want to obtain. This is why, in our case, the study in the area of robot manipulation focuses on grasping and on integrating symbolic tasks with data gained through sensors. The learning model is based on a Bayesian network that encodes the statistical dependencies between the data collected by the sensors and the symbolic task. This data representation has several advantages: it takes into account the uncertainty of the real world, allowing sensor noise to be dealt with; it encodes notions of causality; and it provides a unified network for learning. Since the network is currently implemented and based on human expert knowledge, it is very interesting to implement an automated method to learn its structure, as in the future more tasks and object features may be introduced, and a complex network design based only on human expert knowledge can become unreliable. Since structure learning algorithms present some weaknesses, the goal of this thesis is to analyze the real data used in the network modeled by the human expert, implement a feasible structure learning approach, and compare the results with the network designed by the expert in order to possibly enhance it.
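
As an illustration of the score-based structure learning route, the following sketch uses the pgmpy library to learn a network from toy data and compare it with a hypothetical expert design; the variable names, data and expert edges are invented, not the thesis' actual sensor features.

```python
# Sketch: learning a Bayesian-network structure from sensor/task data and
# comparing it with an expert-designed one (illustrative data only).
import pandas as pd
from pgmpy.estimators import HillClimbSearch, BicScore

# Toy dataset: discretized sensor readings plus the symbolic task label.
data = pd.DataFrame({
    "object_size":  ["small", "large", "small", "large"] * 50,
    "object_shape": ["box", "cylinder", "cylinder", "box"] * 50,
    "grasp_type":   ["pinch", "power", "pinch", "power"] * 50,
    "task":         ["place", "pour", "place", "pour"] * 50,
})

# Score-based structure learning: hill climbing over DAGs, scored with BIC.
search = HillClimbSearch(data)
learned = search.estimate(scoring_method=BicScore(data))
print("learned edges:", sorted(learned.edges()))

# The learned structure can then be compared against the expert network,
# e.g. by counting edges that are missing, extra, or reversed.
expert_edges = {("object_size", "grasp_type"), ("task", "grasp_type")}
print("missing w.r.t. expert:", expert_edges - set(learned.edges()))
```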

Relevance: 20.00%

Abstract:

This thesis concerns the development of an application that extends the ability to run SAP BPC package loads to mobile devices; until now this was possible only through the Microsoft Excel interface.

Relevance: 20.00%

Abstract:

This thesis addresses the formulation of a referee assignment problem for the Italian Volleyball Serie A Championships. The problem has particular constraints, such as the requirement that a referee be assigned to different teams within a given period of time, while minimum and maximum workload levels for each referee are enforced by considering costs and profits in the objective function. The problem has been solved with an exact method, using an integer linear programming formulation and a clique-based decomposition to improve the computing time. Extensive computational experiments on real-world instances have been performed to assess the effectiveness of the proposed approach.
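
For illustration, a stripped-down assignment model of this kind can be written with the PuLP library; the referees, matches, costs and workload bounds below are invented, and the thesis' actual formulation (with its clique-based decomposition) is considerably richer.

```python
# Sketch: a minimal referee-assignment ILP (illustrative data only).
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary, PULP_CBC_CMD

referees = ["r1", "r2", "r3"]
matches = ["m1", "m2", "m3", "m4"]
cost = {  # cost of sending referee r to match m (e.g. travel)
    ("r1", "m1"): 3, ("r1", "m2"): 1, ("r1", "m3"): 4, ("r1", "m4"): 2,
    ("r2", "m1"): 2, ("r2", "m2"): 3, ("r2", "m3"): 1, ("r2", "m4"): 4,
    ("r3", "m1"): 4, ("r3", "m2"): 2, ("r3", "m3"): 3, ("r3", "m4"): 1,
}
MIN_LOAD, MAX_LOAD = 1, 2   # workload window per referee

prob = LpProblem("referee_assignment", LpMinimize)
x = {(r, m): LpVariable(f"x_{r}_{m}", cat=LpBinary) for r in referees for m in matches}

prob += lpSum(cost[r, m] * x[r, m] for r in referees for m in matches)

for m in matches:            # every match gets exactly one referee
    prob += lpSum(x[r, m] for r in referees) == 1
for r in referees:           # each referee stays within the workload window
    load = lpSum(x[r, m] for m in matches)
    prob += load >= MIN_LOAD
    prob += load <= MAX_LOAD

prob.solve(PULP_CBC_CMD(msg=False))
print([(r, m) for (r, m), v in x.items() if v.value() == 1])
```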

Relevance: 20.00%

Abstract:

This thesis is concerned with the role played by software tools in the analysis and dissemination of linguistic corpora and with their contribution to a more widespread adoption of corpora in different fields. Chapter 1 contains an overview of some of the most relevant corpus analysis tools available today, presenting their most interesting features and some of their drawbacks. Chapter 2 begins with an explanation of the reasons why none of the available tools appears to satisfy the requirements of the user community and then continues with a technical overview of the current status of the new system developed as part of this work. This presentation is followed by highlights of the features that make the system appealing to users and corpus builders (i.e. scholars willing to make their corpora available to the public). The chapter concludes with an indication of future directions for the project and information on the current availability of the software. Chapter 3 describes the design of an experiment devised to evaluate the usability of the new system in comparison with another corpus tool. Usage of the tool was tested in the context of a documentation task performed on a real assignment during a translation class in a master's degree course. In Chapter 4 the findings of the experiment are presented on two levels of analysis: first, a discussion of how participants interacted with and evaluated the two corpus tools in terms of interface and interaction design, usability and perceived ease of use; then, an analysis of how users interacted with the corpora to complete the task and of the kinds of queries they submitted. Finally, some general conclusions are drawn and areas for future work are outlined.

Relevance: 20.00%

Abstract:

This thesis makes two important contributions in the field of embedded many-core accelerators. First, we implemented an OpenMP runtime optimized for the tasking model on systems whose processors are tightly coupled within clusters interconnected through a network-on-chip, focusing on scalability and on support for fine-grained tasks, as is typical of embedded applications. The second contribution is a proposed extension of the OpenMP runtime that tries to anticipate errors caused by variability phenomena through efficient scheduling of the workload.

Relevance: 20.00%

Abstract:

Neurons in some regions of our brain respond more strongly to temporally and spatially coincident multisensory stimuli (e.g., audio-visual) than to the same stimuli taken individually (multisensory integration). This ability can be exploited to compensate for unisensory deficits through multisensory training that promotes synaptic reinforcement within circuits comprising the stimulated multisensory regions. The goal of this thesis was to study which structures and circuits can be stimulated and reinforced by audio-visual multisensory training. To this end, electroencephalographic (EEG) signals were analyzed that had been recorded during two different visual discrimination tasks (discrimination of motion direction and discrimination of grating orientation) performed before and after audio-visual training with temporally and spatially coincident stimuli for the experimental subjects, or spatially disparate stimuli for the control subjects. From each subject's EEG signals, the scalp event-related potential (ERP) was derived and its N100 component (peaking at 140-180 ms post-stimulus) analyzed, verifying pre/post-training changes by means of statistical tests. In addition, the activation of cortical sources was reconstructed in 6239 voxels (divided among 84 ROIs corresponding to the Brodmann areas) using the sLORETA software, and differences in ROI activation pre/post training in the 140-180 ms window were identified through statistical tests. The results suggest that the multisensory training reinforced the synaptic connections between the Superior Colliculus and the Inferior Parietal Lobule (Brodmann Area 7), a region with visuomotor and spatial attention functions.
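
As a rough sketch of the N100 analysis step, the following Python code builds synthetic ERPs, averages the 140-180 ms window and runs a paired statistical test between pre- and post-training values; all amplitudes, trial counts and subject numbers are invented, and the sLORETA source reconstruction is not reproduced.

```python
# Sketch: N100 extraction and pre/post-training comparison (synthetic data).
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
sfreq = 500                                   # sampling rate, Hz
times = np.arange(-0.1, 0.5, 1 / sfreq)       # epoch from -100 to 500 ms
n100 = (times >= 0.14) & (times <= 0.18)      # the 140-180 ms window

def erp(n_trials, n100_amp):
    """Average n_trials noisy epochs containing a negative peak near 160 ms."""
    peak = n100_amp * np.exp(-((times - 0.16) ** 2) / (2 * 0.01 ** 2))
    trials = peak + rng.normal(0, 2.0, size=(n_trials, times.size))
    return trials.mean(axis=0)

# One N100 mean amplitude per subject, before and after training.
pre = np.array([erp(80, -5.0)[n100].mean() for _ in range(12)])
post = np.array([erp(80, -7.0)[n100].mean() for _ in range(12)])

t, p = ttest_rel(pre, post)   # paired test across subjects
print(f"N100 pre={pre.mean():.2f} uV, post={post.mean():.2f} uV, p={p:.4f}")
```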