997 results for Code-multiplexed pilot sequence
Abstract:
The aim of the project has been to improve the teaching quality of the course Estructura de Computadors I, taught at the Facultat d'Informàtica de Barcelona (UPC) within the degree programmes in Informatics Engineering, Technical Engineering in Computer Systems and Technical Engineering in Computer Management. Work proceeded along four lines of action: (i) application of active-learning techniques in class; (ii) application of non-classroom cooperative-learning techniques; (iii) deployment of new ICT tools, and adaptation of those already in use, to enable self-assessment and feedback of assessment information; and (iv) dissemination of the experience gained from the various actions. Regarding the first two measures, the impact of teaching methodologies that favour active learning, both in and out of the classroom, is evaluated; clear improvements in performance were obtained with respect to the previously used methodologies, which were centred on lecture-style classes in which students were simply given the course documentation to work through on their own. The new methodologies place special emphasis on group work in class and on sharing experiences outside class through participation forums. The measure that required the most effort in this project is the third: the development of a web-interface environment for the automatic grading of programs written in assembly language. This environment lets students self-assess the exercises carried out in the course, with detailed information about the mistakes made. The work carried out within this project has been published at relevant teaching conferences, both national and international. The source code of the aforementioned environment is made publicly available through a link on the web.
Abstract:
Molecular studies of insect disease vectors are of paramount importance for understanding the parasite-vector relationship. Advances in this area have led to important findings regarding changes in vectors' physiology upon blood feeding and parasite infection. Mechanisms for interfering with the vectorial capacity of insects responsible for the transmission of diseases such as malaria, Chagas disease and dengue fever are being devised with the ultimate goal of developing transgenic insects. A primary necessity for this goal is information on gene expression and control in the target insect. Our group is investigating molecular aspects of the interaction between Leishmania parasites and Lutzomyia sand flies. As an initial step in our studies we have used random sequencing of cDNA clones from two expression libraries made from head/thorax and abdomen of sugar-fed L. longipalpis for the identification of expressed sequence tags (ESTs). We applied differential display reverse transcriptase-PCR and randomly amplified polymorphic DNA-PCR to characterize differentially expressed mRNA from sugar- and blood-fed insects and, in one case, from a L. (V.) braziliensis-infected L. longipalpis. We identified 37 cDNAs that showed homology to known sequences in GenBank. Of these, 32 cDNAs code for constitutive proteins such as zinc finger protein, glutamine synthetase, G binding protein and ubiquitin-conjugating enzyme. Three are putative differentially expressed cDNAs from blood-fed and Leishmania-infected midgut: a chitinase, a V-ATPase and a MAP kinase. Finally, two sequences are homologous to Drosophila melanogaster gene products recently discovered through the Drosophila genome initiative.
Abstract:
Since the entry into force of the new Penal Code of 1995, the Directorate General of Alternative Penal Measures and Juvenile Justice has carried out a small number of mediations. In November 1998, a pilot programme was launched in some criminal and examining courts of Catalonia, based on an analysis of the possibilities of applying reparation to the victim. The first mediation experiences were also studied, along with the mediation techniques acquired by the Directorate General's professionals in the fields of juvenile penal mediation and family mediation. This study evaluates the objectives regarding the offender (taking responsibility for one's actions and making reparation), the victim (participating in the resolution, feeling repaired...), the justice system (promoting responsibility, reparation and social peace, guaranteeing the resolution...) and the community (bringing justice closer to citizens). The results show the growing relevance of restorative justice and the need to provide tools for its development, although so far the number of referred cases has been low. The diversity of conflicts has demonstrated the need for flexibility in handling cases. The need has also emerged to promote the training of the professionals involved and to push for legislative reforms.
Abstract:
Conventional methods of gene prediction rely on the recognition of DNA-sequence signals, the coding potential or the comparison of a genomic sequence with a cDNA, EST, or protein database. Reasons for limited accuracy in many circumstances are species-specific training and the incompleteness of reference databases. Lately, comparative genome analysis has attracted increasing attention. Several analysis tools that are based on human/mouse comparisons are already available. Here, we present a program for the prediction of protein-coding genes, termed SGP-1 (Syntenic Gene Prediction), which is based on the similarity of homologous genomic sequences. In contrast to most existing tools, the accuracy of SGP-1 depends little on species-specific properties such as codon usage or the nucleotide distribution. SGP-1 may therefore be applied to nonstandard model organisms in vertebrates as well as in plants, without the need for extensive parameter training. In addition to predicting genes in large-scale genomic sequences, the program may be useful to validate gene structure annotations from databases. To this end, SGP-1 output also contains comparisons between predicted and annotated gene structures in HTML format. The program can be accessed via a Web server at http://soft.ice.mpg.de/sgp-1. The source code, written in ANSI C, is available on request from the authors.
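SGP-1's comparison of predicted and annotated gene structures can be illustrated with a standard nucleotide-level sensitivity/specificity calculation. This is a generic sketch, not SGP-1's actual evaluation code; the exon coordinates and function names are hypothetical.

```python
def exon_bases(exons):
    """Set of nucleotide positions covered by a list of (start, end) exons."""
    return {p for start, end in exons for p in range(start, end + 1)}

def compare_structures(predicted, annotated):
    """Nucleotide-level sensitivity and specificity of a gene prediction."""
    pred, ann = exon_bases(predicted), exon_bases(annotated)
    tp = len(pred & ann)                    # correctly predicted coding bases
    sens = tp / len(ann) if ann else 0.0    # fraction of annotation recovered
    spec = tp / len(pred) if pred else 0.0  # fraction of prediction correct
    return sens, spec

# Hypothetical two-exon gene: prediction overshoots the second exon start
predicted = [(100, 200), (300, 400)]
annotated = [(100, 200), (310, 400)]
sens, spec = compare_structures(predicted, annotated)
print(sens)  # → 1.0
```

A tool like SGP-1 reports this kind of agreement per gene; exon-level and gene-level measures follow the same pattern with stricter matching.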
Abstract:
Scientific interest in the concept of alliance has been maintained and stimulated by repeated findings that a strong alliance is associated with a facilitative treatment process and favourable treatment outcome. However, because the alliance is not in itself a therapeutic technique, these findings have been unsuccessful in bringing about significant improvements in clinical practice. An essential issue in modern psychotherapy research concerns the relation between common factors, which are known to explain great variance in empirical results, and the specific therapeutic techniques which are the primary basis of clinical training and practice. This pilot study explored sequences of therapist interventions over four sessions of brief psychodynamic investigation. It aims at determining whether patterns of interventions can be found during brief psychodynamic investigation and whether these patterns can be associated with differences in the therapeutic alliance. Therapist interventions were coded using the Psychodynamic Intervention Rating Scale (PIRS), which enables the classification of each therapist utterance into one of 9 categories of interpretive interventions (defence interpretation, transference interpretation), supportive interventions (question, clarification, association, reflection, supportive strategy) or interventions about the therapeutic frame (work-enhancing statement, contractual arrangement). Data analysis was done using lag sequential analysis, a statistical procedure which identifies contingent relationships in time among a large number of behaviours. The sample includes N = 20 therapist-patient dyads assigned to three groups with: (1) a high and stable alliance profile, (2) a low and stable alliance profile and (3) an improving alliance profile. Results suggest that therapists most often have one single intention when interacting with patients.
Large sequences of questions, associations and clarifications were found, which indicate that if a therapist asks a question, clarifies or associates, there is a significant probability that he will continue doing so. A single theme sequence involving frame interventions was also observed. These sequences were found in all three alliance groups. One exception was found for mixed sequences of interpretations and supportive interventions. The simultaneous use of these two interventions was associated with a high or an improving alliance over the course of treatment, but not with a low and stable alliance where only single theme sequences of interpretations were found. In other words, in this last group, therapists were either supportive or interpretative, whereas with high or improving alliance, interpretations were always given along with supportive interventions. This finding provides evidence that examining therapist interpretation individually can only yield incomplete findings. How interpretations were given is important for alliance building. It also suggests that therapists should carefully dose their interpretations and be supportive when necessary in order to build a strong therapeutic alliance. And from a research point of view, to study technical interventions, we must look into dynamic variables such as dosage, the supportive quality of an intervention, and timing.
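The core of lag sequential analysis can be sketched in miniature: count lag-1 transitions between coded intervention categories and estimate the conditional probability that one category immediately follows another. This is an illustrative sketch only; the session codes below are hypothetical, and a full analysis would add a significance test against the chance rate of each category.

```python
from collections import Counter

def lag1_transitions(codes):
    """Count lag-1 transitions in a sequence of intervention codes."""
    pairs = Counter(zip(codes, codes[1:]))  # (antecedent, consequent) counts
    totals = Counter(codes[:-1])            # antecedent occurrence counts
    return pairs, totals

def transition_prob(pairs, totals, a, b):
    """Observed probability that code b immediately follows code a."""
    return pairs[(a, b)] / totals[a] if totals[a] else 0.0

# Hypothetical session coded with PIRS-like category labels
session = ["question", "question", "clarification", "question",
           "association", "question", "clarification", "clarification"]
pairs, totals = lag1_transitions(session)
print(transition_prob(pairs, totals, "question", "clarification"))  # → 0.5
```

Sequences like the "questions, associations and clarifications" runs reported above would show up as high self- and cross-transition probabilities within the supportive categories.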
Abstract:
Modern computer systems are plagued with stability and security problems: applications lose data, web servers are hacked, and systems crash under heavy load. Many of these problems, or anomalies, arise from rare program behavior caused by attacks or errors. A substantial percentage of web-based attacks are due to buffer overflows. Many methods have been devised to detect and prevent the anomalous situations that arise from buffer overflows. The current state of the art in anomaly detection systems is relatively primitive and mainly depends on static code checking to take care of buffer overflow attacks. For protection, stack guards and heap guards are also widely used. This dissertation proposes an anomaly detection system based on the frequencies of system calls in the system call trace. System call traces represented as frequency sequences are profiled using sequence sets. A sequence set is identified by the starting sequence and the frequencies of specific system calls. The deviation of the current input sequence from the corresponding normal profile in the frequency pattern of system calls is computed and expressed as an anomaly score. A simple Bayesian model is used for accurate detection. Experimental results are reported which show that the frequency of system calls, represented using sequence sets, captures the normal behavior of programs under normal conditions of usage. This captured behavior allows the system to detect anomalies with a low rate of false positives. Data are presented which show that a Bayesian network on frequency variations responds effectively to induced buffer overflows. It can also help administrators detect deviations in program flow introduced by errors.
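The frequency-deviation idea can be sketched very simply: build a normal profile of relative system-call frequencies, then score a new trace by how far its frequencies deviate from that profile. This is a minimal illustration, not the dissertation's system; the L1 deviation used as the anomaly score, and the call names, are assumptions (the dissertation uses sequence sets and a Bayesian model on top of such frequencies).

```python
from collections import Counter

def freq_profile(trace):
    """Relative frequency of each system call in a trace."""
    counts = Counter(trace)
    total = sum(counts.values())
    return {call: n / total for call, n in counts.items()}

def anomaly_score(normal, observed):
    """L1 deviation of the observed frequencies from the normal profile."""
    calls = set(normal) | set(observed)
    return sum(abs(normal.get(c, 0.0) - observed.get(c, 0.0)) for c in calls)

# Hypothetical traces: normal file I/O vs. an exploit-like call mix
normal = freq_profile(["read", "write", "read", "open", "close", "read"])
attack = freq_profile(["read", "mprotect", "execve", "execve", "mprotect"])
print(anomaly_score(normal, normal))        # → 0.0
print(anomaly_score(normal, attack) > 1.0)  # → True
```

A detector would flag traces whose score exceeds a threshold learned from normal runs, which is where the Bayesian model described above comes in.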
Abstract:
Embedded systems are usually designed for a single task or a specified set of tasks. This specificity means the system design, as well as its hardware/software development, can be highly optimized. Embedded software must meet requirements such as highly reliable operation on resource-constrained platforms, real-time constraints and rapid development. This necessitates the adoption of static machine-code analysis tools running on a host machine for the validation and optimization of embedded system code, which can help meet all of these goals. This could significantly improve software quality and is still a challenging field.
This dissertation contributes an architecture-oriented code validation, error localization and optimization technique that assists the embedded system designer in software debugging, making early detection of otherwise hard-to-detect software bugs more effective through static analysis of machine code. The focus of this work is to develop methods that automatically localize faults as well as optimize the code, and thus improve both the debugging process and the quality of the code. Validation is done with the help of rules of inference formulated for the target processor. The rules govern the occurrence of illegitimate or out-of-place instructions and code sequences for executing computational and integrated peripheral functions. The stipulated rules are encoded in propositional logic formulae, and their compliance is tested individually in all possible execution paths of the application programs. An incorrect sequence of machine-code patterns is identified using slicing techniques on the control-flow graph generated from the machine code. An algorithm is proposed to assist the compiler in eliminating redundant bank-switching code and deciding on optimum data allocation to banked memory, resulting in a minimum number of bank-switching instructions in embedded system software.
A relation matrix and a state transition diagram, formed for the active-memory-bank state transitions corresponding to each bank-selection instruction, are used to detect redundant code. Instances of code redundancy based on the stipulated rules for the target processor are identified. This validation and optimization tool can be integrated into the system development environment. It is a novel approach, independent of compiler or assembler, applicable to a wide range of processors once appropriate rules are formulated. Program states are identified mainly by machine-code pattern, which drastically reduces state-space creation, contributing to improved model checking. Though the technique described is general, the implementation is architecture oriented, and hence the feasibility study was conducted on PIC16F87X microcontrollers. The proposed tool will be very useful in steering novices towards correct use of difficult microcontroller features when developing embedded systems.
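The redundant bank-switching detection can be sketched as a scan that tracks the active-bank state and flags any bank-select instruction that re-selects the bank already active. This is a deliberately simplified illustration over straight-line code; the dissertation works on the control-flow graph with a relation matrix and state transition diagram, and the (mnemonic, operand) representation and the BANKSEL mnemonic here are assumptions for the sketch.

```python
def redundant_bank_selects(instructions):
    """Indices of bank-select instructions that re-select the active bank.

    `instructions` is a list of (mnemonic, operand) pairs; a hypothetical
    ("BANKSEL", n) switches the active memory bank to n.
    """
    active = None          # unknown bank on entry
    redundant = []
    for i, (op, arg) in enumerate(instructions):
        if op == "BANKSEL":
            if arg == active:
                redundant.append(i)  # same bank already active: removable
            active = arg
    return redundant

prog = [("BANKSEL", 0), ("MOVF", "x"), ("BANKSEL", 0),  # index 2 redundant
        ("BANKSEL", 1), ("MOVWF", "y")]
print(redundant_bank_selects(prog))  # → [2]
```

On a real control-flow graph the "active bank" state must be joined across predecessors, which is exactly what the state transition diagram described above captures.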
Abstract:
The present thesis looks at the problem of protein folding using Monte Carlo and Langevin simulations. Three topics in protein folding have been studied: 1) the effect of confining potential barriers, 2) the effect of a static external field, and 3) the design of amino acid sequences which fold in a short time and which have a stable native state (global minimum). Regarding the first topic, we studied the confinement of a small protein of 16 amino acids known as 1NJ0 (PDB code), which has a beta-sheet structure as its native state. The confinement of proteins occurs frequently in the cell environment. Some molecules called chaperones, present in the cytoplasm, capture unfolded proteins in their interior and prevent the formation of aggregates and misfolded proteins. This mechanism of confinement mediated by chaperones is not yet well understood. In the present work we considered two kinds of potential barriers which try to mimic the confinement induced by a chaperone molecule. The first kind of potential was a purely repulsive barrier whose only effect is to create a cavity where the protein folds up correctly. The second kind of potential was a barrier which includes both attractive and repulsive effects. We performed Wang-Landau simulations to calculate the thermodynamic properties of 1NJ0. From the free-energy landscape we found that 1NJ0 has two intermediate states in the bulk (without confinement) which are clearly separated from the native and unfolded states. For the case of the purely repulsive barrier we found that the intermediate states get closer to each other in the free-energy landscape and eventually collapse into a single intermediate state. The unfolded state becomes more compact, compared to that in the bulk, as the size of the barrier decreases. For an attractive barrier, modifications of the states (native, unfolded and intermediates) are observed depending on the degree of attraction between the protein and the walls of the barrier.
The strength of the attraction is measured by the parameter $\epsilon$. A purely repulsive barrier is obtained for $\epsilon=0$ and a purely attractive barrier for $\epsilon=1$. The states are changed only slightly for magnitudes of the attraction up to $\epsilon=0.4$. The disappearance of the intermediate states of 1NJ0 is already observed for $\epsilon=0.6$. A very highly attractive barrier ($\epsilon \sim 1.0$) produces a completely denatured state. In the second topic of this thesis we dealt with the interaction of a protein with an external electric field. We demonstrated by means of computer simulations, specifically by using the Wang-Landau algorithm, that the folded, unfolded, and intermediate states can be modified by means of a field. We have found that an external field can induce several modifications in the thermodynamics of these states: for relatively low magnitudes of the field ($<2.06 \times 10^8$ V/m) no major changes in the states are observed. However, for magnitudes higher than $6.19 \times 10^8$ V/m one observes the appearance of a new native state which exhibits a helix-like structure. In contrast, the original native state is a $\beta$-sheet structure. In the new native state all the dipoles in the backbone structure are aligned parallel to the field. The design of amino acid sequences constitutes the third topic of the present work. We have tested the Rate of Convergence criterion proposed by D. Gridnev and M. Garcia ({\it work unpublished}). We applied it to the study of off-lattice models. The Rate of Convergence criterion is used to decide whether a certain sequence will fold up correctly within a relatively short time. Before the present work, the common ways to decide whether a certain sequence was a good or bad folder were to perform the whole dynamics until the sequence reached its native state (if it existed), or to study the curvature of the potential energy surface. There are some difficulties in these two approaches.
In the first approach, performing the complete dynamics for hundreds of sequences is a rather challenging task because of the CPU time needed. In the second approach, calculating the curvature of the potential energy surface is possible only for very smooth surfaces. The Rate of Convergence criterion seems to avoid the previous difficulties. With this criterion one does not need to perform the complete dynamics to find the good and bad sequences. Also, the criterion does not depend on the kind of force field used and therefore it can be used even for very rugged energy surfaces.
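The Wang-Landau algorithm used throughout this thesis can be illustrated on a toy system. The sketch below is a minimal, illustrative implementation (not the thesis code, which treats off-lattice protein models): it estimates the log density of states ln g(E) for a small 1D Ising ring, with the standard flat-histogram criterion and halving schedule for the modification factor.

```python
import math
import random

def energy(spins):
    """Energy of a 1D Ising ring: minus the sum of neighbour products."""
    n = len(spins)
    return -sum(spins[i] * spins[(i + 1) % n] for i in range(n))

def wang_landau(n=8, f_final=1e-4, flatness=0.8, seed=1):
    """Minimal Wang-Landau estimate of ln g(E) for an n-spin Ising ring."""
    rng = random.Random(seed)
    spins = [rng.choice((-1, 1)) for _ in range(n)]
    lng = {}    # running estimate of ln g(E)
    hist = {}   # visit histogram for the flatness test
    lnf = 1.0   # modification factor, halved when the histogram is flat
    e = energy(spins)
    while lnf > f_final:
        for _ in range(1000 * n):
            i = rng.randrange(n)
            spins[i] *= -1
            e_new = energy(spins)
            # accept the flip with probability min(1, g(E)/g(E_new))
            delta = lng.get(e, 0.0) - lng.get(e_new, 0.0)
            if delta >= 0 or rng.random() < math.exp(delta):
                e = e_new
            else:
                spins[i] *= -1  # reject: undo the flip
            lng[e] = lng.get(e, 0.0) + lnf
            hist[e] = hist.get(e, 0) + 1
        # lax flatness test over the energies visited so far
        if min(hist.values()) > flatness * sum(hist.values()) / len(hist):
            lnf /= 2.0
            hist = {}
    return lng

lng = wang_landau()
print(sorted(lng))  # → [-8, -4, 0, 4, 8]
```

From ln g(E), all thermodynamic quantities (free energy, entropy, specific heat) follow at any temperature, which is what makes the method suited to mapping the free-energy landscapes discussed above.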
Abstract:
The code STATFLUX, implementing a new and simple statistical procedure for the calculation of transfer coefficients in radionuclide transport to animals and plants, is proposed. The method is based on the general multiple-compartment model, which uses a system of linear equations involving geometrical volume considerations. Flow parameters were estimated by employing two different least-squares procedures: the Derivative and Gauss-Marquardt methods, with the available experimental data of radionuclide concentrations as the input functions of time. The solution of the inverse problem, which relates a given set of flow parameters to the time evolution of the concentration functions, is achieved via a Monte Carlo simulation procedure.
Program summary
Title of program: STATFLUX
Catalogue identifier: ADYS_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADYS_v1_0
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Licensing provisions: none
Computer for which the program is designed and others on which it has been tested: micro-computer with Intel Pentium III, 3.0 GHz
Installation: Laboratory of Linear Accelerator, Department of Experimental Physics, University of São Paulo, Brazil
Operating system: Windows 2000 and Windows XP
Programming language used: Fortran-77 as implemented in Microsoft Fortran 4.0. NOTE: Microsoft Fortran includes non-standard features which are used in this program. Standard Fortran compilers such as g77, f77, ifort and NAG95 are not able to compile the code, and therefore it has not been possible for the CPC Program Library to test the program.
Memory required to execute with typical data: 8 MB of RAM and 100 MB of hard disk
No. of bits in a word: 16
No. of lines in distributed program, including test data, etc.: 6912
No. of bytes in distributed program, including test data, etc.: 229 541
Distribution format: tar.gz
Nature of the physical problem: The investigation of transport mechanisms for radioactive substances through environmental pathways is very important for the radiological protection of populations. One such pathway, associated with the food chain, is the grass-animal-man sequence. The distribution of trace elements in humans and laboratory animals has been intensively studied over the past 60 years [R.C. Pendlenton, C.W. Mays, R.D. Lloyd, A.L. Brooks, Differential accumulation of iodine-131 from local fallout in people and milk, Health Phys. 9 (1963) 1253-1262]. In addition, investigations on the incidence of cancer in humans, and a possible causal relationship to radioactive fallout, have been undertaken [E.S. Weiss, M.L. Rallison, W.T. London, W.T. Carlyle Thompson, Thyroid nodularity in southwestern Utah school children exposed to fallout radiation, Amer. J. Public Health 61 (1971) 241-249; M.L. Rallison, B.M. Dobyns, F.R. Keating, J.E. Rall, F.H. Tyler, Thyroid diseases in children, Amer. J. Med. 56 (1974) 457-463; J.L. Lyon, M.R. Klauber, J.W. Gardner, K.S. Udall, Childhood leukemia associated with fallout from nuclear testing, N. Engl. J. Med. 300 (1979) 397-402]. Of the pathways by which radionuclides enter the human (or animal) body, ingestion is the most important because it is closely related to life-long dietary habits. Those radionuclides which are able to enter living cells by metabolic or other processes give rise to localized doses which can be very high. The evaluation of these internally localized doses is of paramount importance for the assessment of radiobiological risks and for radiological protection. The time behavior of trace concentrations in organs is the principal input for the prediction of internal doses after acute or chronic exposure. The general multiple-compartment model (GMCM) is a powerful and widely accepted method for biokinetic studies, which allows the calculation of the concentration of trace elements in organs as a function of time when the flow parameters of the model are known. However, few biokinetic data exist in the literature, and the determination of flow and transfer parameters by statistical fitting for each system is an open problem.
Restrictions on the complexity of the problem: This version of the code works with the constant-volume approximation, which is valid for many situations where the biological half-life of a trace element is shorter than the volume rise time. Another restriction is related to the central-flux model: the model considered in the code assumes that there is one central compartment (e.g., blood) that connects the flow to all other compartments, and flow between the other compartments is not included.
Typical running time: Depends on the choice of calculations. Using the Derivative method the time is very short (a few minutes) for any number of compartments. When the Gauss-Marquardt iterative method is used, the calculation time can be approximately 5-6 hours when ~15 compartments are considered.
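The least-squares estimation of flow parameters can be illustrated in the simplest possible setting: a single compartment with first-order outflow, C(t) = C0·exp(-k·t). This is an illustrative sketch only (STATFLUX itself fits the full multi-compartment system in Fortran); the function names and synthetic data are assumptions.

```python
import math

def fit_transfer_rate(times, conc):
    """Least-squares estimate of (k, C0) in C(t) = C0 * exp(-k t),
    the single-compartment case, via log-linear regression."""
    ys = [math.log(c) for c in conc]       # linearize: ln C = ln C0 - k t
    n = len(times)
    mx = sum(times) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(times, ys))
             / sum((x - mx) ** 2 for x in times))
    return -slope, math.exp(my - slope * mx)

# Synthetic concentration-vs-time data with k = 0.5, C0 = 10
times = [0.0, 1.0, 2.0, 3.0, 4.0]
conc = [10.0 * math.exp(-0.5 * t) for t in times]
k, c0 = fit_transfer_rate(times, conc)
print(round(k, 3), round(c0, 3))  # → 0.5 10.0
```

With several coupled compartments the concentrations no longer linearize this way, which is why STATFLUX resorts to the Derivative and Gauss-Marquardt procedures and a Monte Carlo search of the parameter space.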
Abstract:
DNA barcoding facilitates the identification of species and the estimation of biodiversity by using nucleotide sequences, usually from the mitochondrial genome. Most studies accomplish this task by using the gene encoding cytochrome oxidase subunit I (COI; Entrez COX1). Within this barcoding framework, many taxonomic initiatives exist, such as those specializing in fishes, birds, mammals, and fungi. Other efforts center on regions, such as the Arctic, or on other topics, such as health. DNA barcoding initiatives exist for all groups of vertebrates except for amphibians and nonavian reptiles. We announce the formation of Cold Code, the international initiative to DNA barcode all species of these 'cold-blooded' vertebrates. The project has a Steering Committee, Coordinators, and a home page. To facilitate Cold Code, the Kunming Institute of Zoology, Chinese Academy of Sciences will sequence COI for the first 10 specimens of a species at no cost to the steward of the tissues. © 2012 Blackwell Publishing Ltd.
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Since a genome is a discrete sequence, the elements of which belong to a set of four letters, the question as to whether or not there is an error-correcting code underlying DNA sequences is unavoidable. The most common approach to answering this question is to propose a methodology to verify the existence of such a code. However, none of the methodologies proposed so far, although quite clever, has achieved that goal. In a recent work, we showed that DNA sequences can be identified as codewords in a class of cyclic error-correcting codes known as Hamming codes. In this paper, we show that a complete intron-exon gene, and even a plasmid genome, can be identified as a Hamming code codeword as well. Although this does not constitute a definitive proof that there is an error-correcting code underlying DNA sequences, it is the first evidence in this direction.
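Checking whether a binary word is a codeword of a Hamming code is a syndrome computation, which can be sketched as follows. The nucleotide-to-bit labelling and the use of the binary (7,4) code here are illustrative assumptions; the paper's actual construction (cyclic Hamming codes identified over DNA sequences) may use a different mapping and code length.

```python
# Parity-check matrix H of the binary Hamming (7,4) code
H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

# Illustrative 2-bit labelling of the nucleotides (an assumption)
NUC_BITS = {"A": (0, 0), "C": (0, 1), "G": (1, 0), "T": (1, 1)}

def dna_to_bits(seq):
    """Map a nucleotide string to a flat list of bits."""
    return [b for nuc in seq for b in NUC_BITS[nuc]]

def is_hamming_codeword(bits):
    """True iff every 7-bit block has zero syndrome under H."""
    if not bits or len(bits) % 7:
        return False
    for i in range(0, len(bits), 7):
        block = bits[i:i + 7]
        if any(sum(h * b for h, b in zip(row, block)) % 2 for row in H):
            return False
    return True

print(is_hamming_codeword([1, 1, 1, 0, 0, 0, 0]))   # → True
print(is_hamming_codeword([1, 0, 0, 0, 0, 0, 0]))   # → False
print(is_hamming_codeword(dna_to_bits("AAAAAAA")))  # → True
```

A nonzero syndrome would, in the error-correcting reading, point to the position of a single "mutated" bit, which is what makes the Hamming-code identification of DNA sequences suggestive.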
Abstract:
Context. Lithium abundances in open clusters are a very effective probe of mixing processes, and their study can help us to understand the large depletion of lithium that occurs in the Sun. Owing to its age and metallicity, the open cluster M 67 is especially interesting in this respect. Many studies of lithium abundances in M 67 have been performed, but a homogeneous global analysis of lithium in stars from subsolar masses up to the most massive members has yet to be accomplished for a large sample based on high-quality spectra. Aims. We test our non-standard models, which were calibrated using the Sun, against observational data. Methods. We collect literature data to analyze, for the first time in a homogeneous way, the non-local thermal equilibrium lithium abundances of all observed single stars in M 67 more massive than ~0.9 M☉. Our grid of evolutionary models is computed assuming non-standard mixing at metallicity [Fe/H] = 0.01, using the Toulouse-Geneva evolution code. Our analysis starts from the entrance onto the zero-age main sequence. Results. Lithium in M 67 is a tight function of mass for stars more massive than the Sun, apart from a few outliers. A plateau in lithium abundances is observed for turn-off stars. Both less massive (M <= 1.10 M☉) and more massive (M >= 1.28 M☉) stars are more depleted than those in the plateau. There is a significant scatter in lithium abundances for any given mass M <= 1.1 M☉. Conclusions. Our models qualitatively reproduce most of the features described above, although the predicted depletion of lithium is 0.45 dex smaller than observed for masses in the plateau region, i.e. between 1.1 and 1.28 solar masses. More work is clearly needed to accurately reproduce the observations. Despite hints that chromospheric activity and rotation play a role in lithium depletion, no firm conclusion can be drawn with the presently available data.