933 results for Computer Generated Proofs
Abstract:
Background: Phosphorylation by protein kinases is a common event in many cellular processes. Further, many kinases perform specialized roles and are regulated by non-kinase domains tethered to the kinase domain. Perturbation in the regulation of kinases leads to malignancy. We have identified and analysed putative protein kinases encoded in the genome of the chimpanzee, a close evolutionary relative of the human. Result: The shared core biology of chimpanzee and human is characterized by many orthologous protein kinases involved in conserved pathways. Domain architectures specific to chimp/human kinases have been observed. Chimp kinases with unique domain architectures are characterized by the deletion of one or more of the non-kinase domains found in their human counterparts. Interestingly, the chimp counterparts of some multi-domain human kinases have identical domain architectures but with a kinase-like non-kinase domain. Remarkably, for 160 of the 587 chimpanzee kinases, no human orthologue with greater than 95% sequence identity could be identified. Variations in chimpanzee kinases compared to human kinases also arise from differences in the functions of domains tethered to the catalytic kinase domain. For example, the heterodimer-forming PB1 domain, related to the fold of the ubiquitin/Ras-binding domain, is seen uniquely tethered to a PKC-like chimpanzee kinase. Conclusion: Though the chimpanzee and human are evolutionarily very close, there are chimpanzee kinases with no close counterpart in the human, suggesting differences in their functions. This analysis provides a direction for experimental analysis of human and chimpanzee protein kinases in order to enhance our understanding of their specific biological roles.
Abstract:
In this report, an artificial neural network (ANN) based automated emergency landing site selection system for unmanned aerial vehicles (UAV) and general aviation (GA) is described. The system aims to increase the safety of UAV operation by emulating pilot decision making in emergency landing scenarios, using an ANN to select a safe landing site from the available candidates. The strength of an ANN in modelling complex input relationships makes it well suited to the multicriteria decision making (MCDM) process of emergency landing site selection. The ANN operates by identifying the more favorable of two landing sites when provided with an input vector derived from both landing sites' parameters, the aircraft's current state, and wind measurements. The system consists of a feed-forward ANN, a pre-processor class that produces ANN input vectors, and a class in charge of creating a ranking of landing site candidates using the ANN. The system was successfully implemented in C++ using the FANN C++ library and ROS. Results obtained from ANN training and from simulations using landing sites randomly generated by a site detection simulator verify the feasibility of an ANN-based automated emergency landing site selection system.
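As a rough illustration of the ranking scheme described above, here is a minimal Python sketch (the paper's implementation is in C++ with FANN and ROS; this is not it). A pairwise comparator stands in for the trained feed-forward ANN, and candidates are ranked by the number of pairwise comparisons they win. All feature names and the scoring heuristic are illustrative assumptions.

    import itertools

    def prefer(site_a, site_b):
        # Stand-in for the trained ANN: returns True when site_a is judged
        # the more favourable landing site. A real system would concatenate
        # both sites' parameters, the aircraft state, and wind measurements
        # into the network's input vector.
        def score(s):
            # Hypothetical features: larger, closer sites with better
            # headwind alignment score higher.
            return s["area"] - 0.5 * s["distance"] + 0.1 * s["wind_alignment"]
        return score(site_a) >= score(site_b)

    def rank_sites(sites):
        # Rank candidates by wins over all pairwise comparisons.
        wins = [0] * len(sites)
        for i, j in itertools.combinations(range(len(sites)), 2):
            if prefer(sites[i], sites[j]):
                wins[i] += 1
            else:
                wins[j] += 1
        return sorted(range(len(sites)), key=lambda i: wins[i], reverse=True)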
Abstract:
The specific objective of this paper is to develop a state space model of a tubular ammonia reactor, which is the heart of an ammonia plant in a fertiliser complex. A ninth-order model with three control inputs and two disturbance inputs is generated from the nonlinear distributed model using linearization and lumping approximations. The lumped model is chosen such that the steady state temperature at the exit of the catalyst bed, computed from the simplified state space model, is close enough to that computed from the nonlinear steady state model. The model developed in this paper is very useful for the design of continuous/discrete versions of single-variable/multivariable control algorithms.
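A minimal sketch of what simulating such a model looks like, assuming placeholder matrices (the paper's ninth-order A, B, and E come from linearizing and lumping the nonlinear distributed reactor model and are not given in the abstract):

    import numpy as np

    n_states, n_controls, n_disturbances = 9, 3, 2   # dimensions from the abstract
    A = -np.eye(n_states)                    # placeholder stable dynamics matrix
    B = 0.1 * np.ones((n_states, n_controls))
    E = 0.05 * np.ones((n_states, n_disturbances))

    def step(x, u, d, dt=0.01):
        # One forward-Euler step of the linear model x' = A x + B u + E d.
        return x + dt * (A @ x + B @ u + E @ d)

    x = np.zeros(n_states)
    u = np.array([1.0, 0.0, 0.0])            # three control inputs
    d = np.zeros(n_disturbances)             # two disturbance inputs
    for _ in range(1000):
        x = step(x, u, d)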
Abstract:
Eklundh's (1972) algorithm to transpose a large matrix stored on an external device such as a disc has been programmed and tested. A simple description of the computer implementation is given in this note.
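For readers unfamiliar with the algorithm, the following Python sketch shows the bit-exchange idea behind Eklundh's transpose for a 2^k × 2^k matrix, run entirely in memory. The point of the external version is that each pass needs only two rows in memory at a time; that disc bookkeeping is omitted here.

    def eklundh_transpose(a):
        # In-place transpose of an n x n matrix, n a power of two.
        # Pass for bit b: every element whose row has bit b clear and
        # whose column has bit b set is swapped with its mirror element
        # (row bit b set, column bit b clear). After all log2(n) passes
        # the matrix is transposed. The external version reads only the
        # paired rows i and i|b from disc in each step.
        n = len(a)
        b = 1
        while b < n:
            for i in range(n):
                if i & b:
                    continue  # row i|b is handled together with row i
                for j in range(n):
                    if j & b:
                        a[i][j], a[i | b][j ^ b] = a[i | b][j ^ b], a[i][j]
            b <<= 1
        return a

    m = [[4 * r + c for c in range(4)] for r in range(4)]
    assert eklundh_transpose(m) == [[0, 4, 8, 12], [1, 5, 9, 13],
                                    [2, 6, 10, 14], [3, 7, 11, 15]]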
Abstract:
A new method of specifying the syntax of programming languages, known as hierarchical language specifications (HLS), is proposed. Efficient parallel algorithms for parsing languages generated by HLS are presented. These algorithms run on an exclusive-read exclusive-write parallel random-access machine. They require O(n) processors and O(log²n) time, where n is the length of the string to be parsed. The most important feature of these algorithms is that they do not use a stack.
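The abstract gives no details of the HLS parsing algorithms themselves. As a loose illustration of how a parsing-style check can avoid a stack by using prefix sums (a primitive computable in O(log n) time with n processors on an EREW PRAM), here is a bracket-nesting validator, simulated sequentially in Python; it is not the paper's algorithm.

    from itertools import accumulate

    def balanced(s):
        # Stack-free balance check: map '(' to +1 and ')' to -1, take
        # prefix sums, and verify that no prefix is negative and that the
        # total is zero. Each step parallelizes without a shared stack.
        prefix = list(accumulate(1 if c == '(' else -1 for c in s))
        return all(p >= 0 for p in prefix) and (not prefix or prefix[-1] == 0)

    assert balanced("(()())") and not balanced("())(")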
Abstract:
In this paper we study the genericity of simultaneous stabilizability, simultaneous strong stabilizability, and simultaneous pole assignability in linear multivariable systems. The main results of the paper were previously established by Ghosh and Byrnes using state-space methods. In contrast, the proofs in the present paper are based on input-output arguments and are much simpler to follow, especially in the cases of simultaneous stabilizability and simultaneous strong stabilizability. Moreover, the input-output methods used here suggest computationally reliable algorithms for solving these two types of problems. In addition to the main results, we also prove some lemmas on generic greatest common divisors which are of independent interest.
Abstract:
Using computer modeling of three-dimensional structures and the structural information available from crystal structures of HIV-1 protease, we investigated the structural effects of mutations in treatment-naive and treatment-exposed individuals from India and postulated mechanisms of resistance in clade C variants. A large number of models (14) were generated by computational mutation of the available crystal structures of drug-bound proteases. Localized energy minimization was carried out in and around the sites of mutation in order to optimize the geometry of the interactions present. Most of the mutations result in structural differences at the flap that favor the semiopen state of the enzyme. Some of the mutations were also found to confer resistance by affecting the geometry of the active site. The E35D mutation affects the flap structure in clade B strains, and the E35N and E35K mutations, seen in our modeled strains, have a more profound effect. Common polymorphisms at positions 36 and 63 in clade C also affected flap structure. Apart from these, a few other residues (Gln-58, Asn-83, Asn-88, and Gln-92) and their interactions are important for the transition from the closed to the open state. Development of protease inhibitors by structure-based design requires investigation of the mechanisms operative in clade C to improve the efficacy of therapy.
Abstract:
Distraction in the workplace is increasingly common in the information age. Several tasks and sources of information compete for a worker's limited cognitive capacities in human-computer interaction (HCI). In some situations even very brief interruptions can have detrimental effects on memory. Nevertheless, in other situations where persons are continuously interrupted, virtually no interruption costs emerge. This dissertation attempts to reveal the mental conditions and causalities differentiating the two outcomes. The explanation, building on the theory of long-term working memory (LTWM; Ericsson and Kintsch, 1995), focuses on the active, skillful aspects of human cognition that enable the storage of task information beyond the temporary and unstable storage provided by short-term working memory (STWM). Its key postulate is called a retrieval structure: an abstract, hierarchical knowledge representation built into long-term memory that can be utilized to encode, update, and retrieve products of cognitive processes carried out during skilled task performance. If certain criteria of practice and task processing are met, LTWM allows for the storage of large representations for long time periods, yet these representations can be accessed with the accuracy, reliability, and speed typical of STWM. The main thesis of the dissertation is that the ability to endure interruptions depends on the efficiency with which LTWM can be recruited for maintaining information. An observational study and a field experiment provide ecological evidence for this thesis. Mobile users were found to be able to carry out heavy interleaving and sequencing of tasks while interacting, and they exhibited several intricate time-sharing strategies to orchestrate interruptions in a way sensitive to both external and internal demands. Interruptions are inevitable, because they arise as natural consequences of the top-down and bottom-up control of multitasking. In this process the function of LTWM is to keep some representations ready for reactivation and others in a more passive state to prevent interference. The psychological reality of the main thesis received confirmatory evidence in a series of laboratory experiments. They indicate that after encoding into LTWM, task representations are safeguarded from interruptions, regardless of their intensity, complexity, or pacing. However, when LTWM cannot be deployed, the problems posed by interference in long-term memory and the limited capacity of STWM surface. A major contribution of the dissertation is the analysis of when users must resort to poorer maintenance strategies, such as temporal cues and STWM-based rehearsal. First, one experiment showed that task orientations can be associated with radically different patterns of retrieval cue encodings. Thus the nature of the processing of the interface determines which features will be available as retrieval cues and which must be maintained by other means. In another study it was demonstrated that if the speed of encoding into LTWM, a skill-dependent parameter, is slower than the processing speed allowed for by the task, interruption costs emerge. Contrary to the predictions of competing theories, these costs turned out to involve intrusions in addition to omissions. Finally, it was learned that in rapid visually oriented interaction, perceptual-procedural expectations guide task resumption, and neither STWM nor LTWM is utilized because access is too slow.
These findings imply a change in thinking about the design of interfaces. Several novel principles of design are presented, based on the idea of supporting the deployment of LTWM in the main task.
Abstract:
The present study examined how personality and social psychological factors affect third and fourth graders' computer-mediated communication. Personality was analysed in terms of the following strategies: optimism, pessimism and defensive pessimism. Students worked either individually or in dyads which were paired homogeneously or heterogeneously according to the strategies. Moreover, the present study compared horizontal and vertical interaction. The study also examined the role that popularity plays, and students were divided into groups based on their popularity level. The results show that an optimistic strategy is useful. Optimism was found to be related to the active production and processing of ideas. Although previous research has identified drawbacks to pessimism in achievement settings, this study shows that the pessimistic strategy is not as debilitating a strategy as is usually assumed. Pessimistic students were able to process their ideas. However, defensive pessimists were somewhat cautious in introducing or changing ideas. Heterogeneous dyads were not beneficial configurations with respect to producing, introducing, or changing ideas. Moreover, many differences were found to exist between the horizontal and vertical interaction; specifically, the students expressed more opinions and feelings when teachers took no part in the discussions. Strong emotions were observed especially in the horizontal interaction. Further, group working skills were found to be more important for boys than for girls, while rejected students were not at a disadvantage compared to popular ones. Schools can encourage emotional and social learning. The present study shows that students can use computers to express their feelings. In addition, students who are unpopular in non-computer contexts or students who use pessimism can benefit from computers. Participation in computer discussions can give unpopular children a chance to develop confidence when relating to peers.
Abstract:
In this paper the notion of conceptual cohesiveness is made precise and used to group objects semantically, based on a knowledge structure called a ‘cohesion forest’. A set of axioms is proposed which should be satisfied to make the generated clusters meaningful.
Abstract:
Marker ordering during linkage map construction is a critical component of QTL mapping research. In recent years, high-throughput genotyping methods have become widely used, and these methods may generate hundreds of markers for a single mapping population. This poses problems for linkage analysis software because the number of possible marker orders increases exponentially as the number of markers increases. In this paper, we tested the accuracy of linkage analyses on simulated recombinant inbred line data using the commonly used Map Manager QTX (Manly et al. 2001: Mammalian Genome 12, 930-932) software and RECORD (Van Os et al. 2005: Theoretical and Applied Genetics 112, 30-40). Accuracy was measured by calculating two scores: the percentage of correct marker positions, and a novel, weighted rank-based score derived from the sum of the absolute values of true minus observed marker ranks, divided by the total number of markers. The accuracy of maps generated using Map Manager QTX was considerably lower than that of maps generated using RECORD. Differences in linkage maps were often observed when marker ordering was performed several times using the identical dataset. In order to test the effect of reducing marker numbers on the stability of marker order, we pruned marker datasets, focusing on regions consisting of tightly linked clusters of markers, which included redundant markers. Marker pruning improved the accuracy and stability of linkage maps: a single unambiguous marker order was produced that was consistent across replications of analysis. Marker pruning was also applied to a real barley mapping population, and QTL analysis was performed using the different map versions produced by the different programs. While some QTLs were identified with both map versions, there were large differences in QTL mapping results, including the maximum LOD and R² values at QTL peaks and map positions, thus highlighting the importance of marker order for QTL mapping.
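A minimal sketch of the rank-based score as described (sum of |true rank − observed rank| over all markers, divided by the number of markers; lower is better, with 0 meaning a perfectly recovered order). The abstract calls the score "weighted" without specifying the weights, so this unweighted version is an assumption:

    def rank_score(true_order, observed_order):
        # Mean absolute rank displacement across markers.
        true_rank = {m: r for r, m in enumerate(true_order)}
        obs_rank = {m: r for r, m in enumerate(observed_order)}
        return sum(abs(true_rank[m] - obs_rank[m]) for m in true_order) / len(true_order)

    assert rank_score(["m1", "m2", "m3"], ["m1", "m2", "m3"]) == 0.0
    assert rank_score(["m1", "m2", "m3"], ["m2", "m1", "m3"]) == 2 / 3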
Abstract:
An in-depth knowledge of the characteristics of lightning-generated currents will facilitate evaluation of the interception efficacy of lightning protection systems. In addition, it would aid in the extraction of valuable statistics (from measured current data) on local lightning parameters. Incidentally, present-day knowledge of the characteristics of lightning-induced currents in typical lightning protection systems is rather limited. This is particularly true of closely interconnected protection systems, like the one employed in Indian Satellite Launch Pad-II, which is taken as a specific example in the present study. Various aspects suggest that theoretical modelling would be the best possible approach for the intended work. From a survey of the pertinent literature, it is concluded that electromagnetic modelling of the lightning return stroke with a current source at the channel base is best suited for this study. A numerical electromagnetic code was used for the required electromagnetic field solution, and Fourier transform techniques were employed for computing time-domain results. A validation of the numerical modelling is provided by laboratory experiments on a reduced-scale model of the system. Apart from ascertaining the influence of various parameters, salient characteristics of tower base currents for different kinds of events are deduced. This knowledge can be used in identifying the type of event, as well as its approximate location. A method for estimating the injected stroke current has also been proposed.
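The Fourier route from a frequency-domain field solution to time-domain currents follows a standard pattern: evaluate the system's transfer function at discrete frequencies, multiply by the spectrum of the injected stroke current, and inverse-transform. A minimal numpy sketch of that pattern, with an entirely hypothetical transfer function standing in for the numerical field solution:

    import numpy as np

    fs, n = 1.0e8, 4096                    # sample rate (Hz) and record length
    t = np.arange(n) / fs
    # Double-exponential channel-base stroke current (illustrative waveshape).
    i_src = np.exp(-t / 5e-6) - np.exp(-t / 5e-7)

    I = np.fft.rfft(i_src)                 # spectrum of the injected current
    f = np.fft.rfftfreq(n, d=1 / fs)
    H = 1.0 / (1.0 + 1j * f / 1.0e6)       # hypothetical transfer function to a tower base
    i_tower = np.fft.irfft(I * H, n)       # time-domain tower-base current via inverse FFT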
Abstract:
It is well known that the notions of normal forms and acyclicity capture many practical desirable properties for database schemes. The basic schema design problem is to develop design methodologies that strive toward these ideals. The usual approach is to first normalize the database scheme as far as possible. If the resulting scheme is cyclic, then one tries to transform it into an acyclic scheme. In this paper, we argue in favor of carrying out these two phases of design concurrently. In order to do this efficiently, we need to be able to incrementally analyze the acyclicity status of a database scheme as it is being designed. To this end, we propose the formalism of "binary decompositions". Using this, we characterize design sequences that exactly generate θ-acyclic schemes, for θ = α, β. We then show how our results can be put to use in database design. Finally, we also show that our formalism can be effectively used as a proof tool in dependency theory. We demonstrate its power by showing that it leads to a significant simplification of the proofs of some previous results connecting sets of multivalued dependencies and acyclic join dependencies.
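For context on the acyclicity notions used above: the standard test for α-acyclicity (not the paper's incremental binary-decomposition method) is GYO reduction, which repeatedly removes "ears" until either the scheme collapses (acyclic) or no ear remains (cyclic). A Python sketch, treating a scheme as a list of attribute sets:

    def gyo_acyclic(scheme):
        # GYO reduction: a hyperedge e is an ear if the attributes it
        # shares with the rest of the scheme are all contained in some
        # single other hyperedge. A scheme is alpha-acyclic iff repeated
        # ear removal reduces it to a single hyperedge (or nothing).
        edges = [set(e) for e in scheme]
        changed = True
        while changed and len(edges) > 1:
            changed = False
            for e in edges:
                others = [o for o in edges if o is not e]
                shared = e & set().union(*others)
                if any(shared <= o for o in others):
                    edges.remove(e)        # e is an ear
                    changed = True
                    break
        return len(edges) <= 1

    assert not gyo_acyclic(["AB", "BC", "CA"])   # the classic cycle
    assert gyo_acyclic(["AB", "BC", "CD"])       # an acyclic chain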