86 results for Copy editing
Abstract:
The goals of the Human Genome Project did not include sequencing of the heterochromatic regions. We describe here an initial sequence of 1.1 Mb of the short arm of human chromosome 21 (HSA21p), estimated to be 10% of 21p. This region contains extensive euchromatic-like sequence and includes on average one transcript every 100 kb. These transcripts show multiple inter- and intrachromosomal copies, and extensive copy number and sequence variability. The sequencing of the "heterochromatic" regions of the human genome is likely to reveal many additional functional elements and provide important evolutionary information.
Abstract:
In a number of programs for gene structure prediction in higher eukaryotic genomic sequences, exon prediction is decoupled from gene assembly: a large pool of candidate exons is predicted and scored from features located in the query DNA sequence, and candidate genes are assembled from such a pool as sequences of nonoverlapping frame-compatible exons. Genes are scored as a function of the scores of the assembled exons, and the highest scoring candidate gene is assumed to be the most likely gene encoded by the query DNA sequence. Considering additive gene scoring functions, currently available algorithms to determine such a highest scoring candidate gene run in time proportional to the square of the number of predicted exons. Here, we present an algorithm whose running time grows only linearly with the size of the set of predicted exons. The quadratic algorithms rely on the fact that, while scanning the set of predicted exons, the highest scoring gene ending in a given exon can be obtained by appending the exon to the highest scoring among the highest scoring genes ending at each compatible preceding exon. The algorithm presented here relies on the simple fact that such a highest scoring gene can be stored and updated. This requires scanning the set of predicted exons simultaneously by increasing acceptor and donor position. On the other hand, the algorithm described here does not assume an underlying gene structure model. Instead, the definition of valid gene structures is given externally in the so-called Gene Model, which simply specifies which gene features are allowed immediately upstream of which other gene features in valid gene structures. This allows for great flexibility in formulating the gene identification problem. In particular, it allows for multiple-gene, two-strand predictions and for considering gene features other than coding exons (such as promoter elements) in valid gene structures.
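To make the linear-time scan concrete, below is a minimal Python sketch under simplifying assumptions of mine (not the paper's): a single strand, no reading-frame compatibility and no external Gene Model, so an exon is just an (acceptor, donor, score) triple and a gene is any chain of mutually non-overlapping exons scored additively.

def assemble(exons):
    """exons: list of (acceptor, donor, score) triples with acceptor <= donor.
    Returns the best additive score of a chain of non-overlapping exons."""
    n = len(exons)
    by_acceptor = sorted(range(n), key=lambda i: exons[i][0])
    by_donor = sorted(range(n), key=lambda i: exons[i][1])

    best_ending = [0.0] * n  # best gene score whose last exon is exon i
    best_closed = 0.0        # best score among genes ending before the scan point
    j = 0                    # scan pointer over exons ordered by donor position

    for i in by_acceptor:    # scan exons by increasing acceptor position
        acc, _, score = exons[i]
        # Every exon whose donor lies before this acceptor is now a usable
        # prefix; only the running maximum of their gene scores is kept.
        while j < n and exons[by_donor[j]][1] < acc:
            best_closed = max(best_closed, best_ending[by_donor[j]])
            j += 1
        best_ending[i] = best_closed + score

    return max(best_ending, default=0.0)

# Example: the best chain is exons 0, 2 and 3, with total score 9.0.
print(assemble([(1, 100, 3.0), (50, 200, 5.0), (150, 400, 4.0), (420, 500, 2.0)]))

After the two sorts (which are unnecessary when, as in the setting described above, the exon pool is already produced in positional order while scanning the sequence), each exon is handled a constant number of times, which is where the linear behaviour comes from.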
Abstract:
This paper describes a pilot study centred on the technology-enhanced self-development of competences in lifelong learning education, carried out in the challenging context of the Association of Participants Àgora. The pilot study shows that the use of the TENCompetence infrastructure, in this case the Personal Development Planner (PDP) tool, provides various kinds of benefits for adult participants with low educational profiles who are traditionally excluded from the use of innovative learning technologies and from the knowledge society. The self-organized training supported by the PDP tool aims to allow learners to create and control their own learning plans based on their interests and educational background, including informal and non-formal experiences. In this sense, the pilot participants had the opportunity to develop and improve their competences in English language (basic and advanced levels) and in ICT competence profiles, which are mostly related to functional and communicative skills. Besides, the use of PDP functionalities such as the self-assessment, planning and self-regulation elements allowed the participants to develop reflective skills. Pilot results also provide indications for future developments in the field of technology support for self-organized learners. The paper introduces the context and the pilot scenario, describes the evaluation methodology applied and discusses the most significant findings derived from the pilot study.
Abstract:
This paper introduces Collage, a high-level IMS-LD compliant authoring tool that is specialized for CSCL (Computer-Supported Collaborative Learning). Nowadays CSCL is a key trend in e-learning, since it highlights the importance of social interactions as an essential element of learning. CSCL is an interdisciplinary domain, which demands participatory design techniques that allow teachers to get directly involved in design activities. Developing CSCL designs using LD is a difficult task for teachers, since LD is a complex technical specification and modelling collaborative characteristics can be tricky. Collage helps teachers in the process of creating their own potentially effective collaborative Learning Designs by reusing and customizing patterns, according to the requirements of a particular learning situation. These patterns, called Collaborative Learning Flow Patterns (CLFPs), represent best practices that are repetitively used by practitioners when structuring the flow of (collaborative) learning activities. An example of an LD that can be created using Collage is illustrated in the paper. Preliminary evaluation results show that teachers with experience in collaborative learning but without LD knowledge can successfully design real collaborative learning experiences using Collage.
Abstract:
The identification and integration of reusable and customizable CSCL (Computer Supported Collaborative Learning) designs may benefit from the capture of best practices in collaborative learning structuring. The authors have proposed CLFPs (Collaborative Learning Flow Patterns) as a way of collecting these best practices. To facilitate the processing of CLFPs by software systems, the paper proposes to specify these patterns using IMS Learning Design (IMS-LD). Thus, teachers without technical knowledge can particularize and integrate CSCL tools. Nevertheless, the support of IMS-LD for describing collaborative learning activities has some deficiencies: the collaborative tools that can be defined in these activities are limited. Therefore, this paper proposes and discusses an extension to IMS-LD that enables the specification of several characteristics of the use of tools that mediate collaboration. In order to obtain a Unit of Learning based on a CLFP, a three-stage process is also proposed. A CLFP-based Unit of Learning example is used to illustrate the process and the need for the proposed extension.
Abstract:
Purpose: To evaluate the suitability of an improved version of an automatic segmentation method based on geodesic active regions (GAR) for segmenting cerebral vasculature with aneurysms from 3D X-ray reconstruction angiography (3DRA) and time-of-flight magnetic resonance angiography (TOF-MRA) images available in the clinical routine. Methods: Three aspects of the GAR method have been improved: execution time, robustness to variability in imaging protocols and robustness to variability in image spatial resolutions. The improved GAR was retrospectively evaluated on images from patients containing intracranial aneurysms in the area of the Circle of Willis and imaged with two modalities: 3DRA and TOF-MRA. Images were obtained from two clinical centers, each using different imaging equipment. Evaluation included qualitative and quantitative analyses of the segmentation results on 20 images from 10 patients. The gold standard was built from 660 cross-sections (33 per image) of vessels and aneurysms, manually measured by interventional neuroradiologists. GAR has also been compared to an interactive segmentation method: iso-intensity surface extraction (ISE). In addition, since patients had been imaged with the two modalities, we performed an inter-modality agreement analysis with respect to both the manual measurements and each of the two segmentation methods. Results: Both GAR and ISE differed from the gold standard within acceptable limits compared to the imaging resolution. GAR (ISE, respectively) had an average accuracy of 0.20 (0.24) mm for 3DRA and 0.27 (0.30) mm for TOF-MRA, and had a repeatability of 0.05 (0.20) mm. Compared to ISE, GAR had a lower qualitative error in the vessel region and a lower quantitative error in the aneurysm region. The repeatability of GAR was superior to manual measurements and ISE. The inter-modality agreement was similar between GAR and the manual measurements. Conclusions: The improved GAR method outperformed ISE qualitatively as well as quantitatively and is suitable for segmenting 3DRA and TOF-MRA images from clinical routine.
Abstract:
Purpose: The objective of this study is to investigate the feasibility of detecting and quantifying 3D cerebrovascular wall motion from a single 3D rotational x-ray angiography (3DRA) acquisition within a clinically acceptable time, and of deriving from the estimated motion field quantities, such as strain, for further biomechanical modeling of the cerebrovascular wall. Methods: The whole motion cycle of the cerebral vasculature is modeled using a 4D B-spline transformation, which is estimated from a 4D to 2D + t image registration framework. The registration is performed by optimizing a single similarity metric between the entire 2D + t measured projection sequence and the corresponding forward projections of the deformed volume at their exact time instants. The joint use of two acceleration strategies, together with their implementation on graphics processing units, is also proposed so as to reach computation times close to clinical requirements. For further characterizing vessel wall properties, an approximation of the wall thickness changes is obtained through a strain calculation. Results: Evaluation on in silico and in vitro pulsating phantom aneurysms demonstrated an accurate estimation of wall motion curves. In general, the error was below 10% of the maximum pulsation, even when a substantially inhomogeneous intensity pattern was present. Experiments on in vivo data provided realistic aneurysm and vessel wall motion estimates, whereas in regions where motion was neither visible nor anatomically possible, no motion was detected. The use of the acceleration strategies enabled completing the estimation process for one entire cycle in 5-10 min without degrading the overall performance. The strain map extracted from our motion estimation provided a realistic deformation measure of the vessel wall. Conclusions: The authors' technique has demonstrated that it can provide accurate and robust 4D estimates of cerebrovascular wall motion within a clinically acceptable time, although it has to be applied to a larger patient population prior to possible wide application to routine endovascular procedures. In particular, for the first time, this feasibility study has shown that in vivo cerebrovascular motion can be obtained intraprocedurally from a 3DRA acquisition. Results have also shown the potential of performing strain analysis using this imaging modality, thus making future modeling of the biomechanical properties of the vascular wall possible.
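To illustrate the structure of such a single-metric 4D-to-2D+t registration, here is a deliberately simplified, self-contained Python sketch. The simplifications are mine: a per-frame in-plane shift stands in for the 4D B-spline transform, and a parallel ray sum stands in for the real forward projector and rotational acquisition geometry. It only shows how one objective accumulates the dissimilarity over all projection frames and is minimized in a single run.

import numpy as np
from scipy.ndimage import shift as nd_shift
from scipy.optimize import minimize

def forward_project(volume):
    """Toy parallel-beam projector: integrate along the first axis."""
    return volume.sum(axis=0)

def deform(volume, dx, dy):
    """Toy motion model: an in-plane shift instead of a 4D B-spline field."""
    return nd_shift(volume, (0.0, dx, dy), order=1, mode="nearest")

def objective(params, volume, measured_frames):
    """Single similarity metric accumulated over the whole 2D+t sequence."""
    params = params.reshape(len(measured_frames), 2)
    cost = 0.0
    for (dx, dy), frame in zip(params, measured_frames):
        cost += np.mean((forward_project(deform(volume, dx, dy)) - frame) ** 2)
    return cost

# Synthetic example: a block that drifts over 4 time frames.
vol = np.zeros((16, 32, 32))
vol[6:10, 12:20, 12:20] = 1.0
true_shifts = [(0.0, 0.0), (1.0, 0.5), (2.0, 1.0), (1.0, 0.5)]
frames = [forward_project(deform(vol, dx, dy)) for dx, dy in true_shifts]

res = minimize(objective, np.zeros(8), args=(vol, frames), method="Powell")
print(res.x.reshape(4, 2))  # should approach the true per-frame shifts

In the actual setting the motion model has far more parameters and the projections follow the rotational acquisition geometry, which is why the acceleration strategies and GPU implementation mentioned above matter.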
Abstract:
Morphological descriptors are practical and essential biomarkers for diagnosis and treatment selection in intracranial aneurysm management according to the current guidelines in use. Nevertheless, relatively little work has been dedicated to improving the three-dimensional quantification of aneurysmal morphology, automating the analysis, and hence reducing the inherent intra- and inter-observer variability of manual analysis. In this paper we propose a methodology for the automated isolation and morphological quantification of saccular intracranial aneurysms based on a 3D representation of the vascular anatomy.
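As an illustration of the kind of descriptor such a quantification produces, the short Python sketch below computes two measures that are standard in the aneurysm literature, neck width and aspect ratio (perpendicular dome height divided by neck width), from point samplings of an already isolated sac and of its neck contour. The descriptor set and the isolation step of the paper itself are not reproduced here, so treat this only as an illustrative example.

import numpy as np

def aspect_ratio(sac_points, neck_points):
    """sac_points, neck_points: (N, 3) arrays of surface coordinates of the
    isolated aneurysm sac and of its neck contour, respectively."""
    neck_centre = neck_points.mean(axis=0)
    # Neck-plane normal: the direction of least spread of the neck contour.
    _, _, vt = np.linalg.svd(neck_points - neck_centre)
    normal = vt[-1]
    # Neck width: largest distance between any two points of the contour.
    diffs = neck_points[:, None, :] - neck_points[None, :, :]
    neck_width = np.linalg.norm(diffs, axis=-1).max()
    # Dome height: largest distance from the neck plane along its normal.
    height = np.abs((sac_points - neck_centre) @ normal).max()
    return neck_width, height / neck_width

# Toy example: a circular neck of diameter 4 and a dome reaching 3 above it.
theta = np.linspace(0.0, 2.0 * np.pi, 50)
neck = np.c_[2.0 * np.cos(theta), 2.0 * np.sin(theta), np.zeros_like(theta)]
sac = np.array([[0.0, 0.0, 3.0], [0.5, 0.5, 2.0]])
print(aspect_ratio(sac, neck))  # roughly (4.0, 0.75)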
Abstract:
Virgin olive oil (VOO) is considered to be one of the main components responsible for the health benefits of the Mediterranean diet, particularly against atherosclerosis, a disease in whose development and progression peripheral blood mononuclear cells (PBMNCs) play a crucial role. The objective of this article was to identify the PBMNC genes that respond to VOO consumption in order to ascertain the molecular mechanisms underlying the beneficial action of VOO in the prevention of atherosclerosis. Gene expression profiles of PBMNCs from healthy individuals were examined in pooled RNA samples by microarrays after 3 weeks of moderate and regular consumption of VOO, as the main fat source in a diet controlled for antioxidant content. Gene expression was verified by qPCR. The response to VOO consumption was confirmed for individual samples (n = 10) by qPCR for 10 upregulated genes (ADAM17, ALDH1A1, BIRC1, ERCC5, LIAS, OGT, PPARBP, TNFSF10, USP48, and XRCC5). Their putative role in the molecular mechanisms involved in atherosclerosis development and progression is discussed, focusing on a possible relation with VOO consumption. Our data support the hypothesis that 3 weeks of nutritional intervention with VOO supplementation, at doses common in the Mediterranean diet, can alter the expression of genes related to atherosclerosis development and progression.
Abstract:
This paper argues that a large technological innovation may lead to a merger wave by inducing entrepreneurs to seek funds from technologically knowledgeable firms (experts). When a large technological innovation occurs, the ability of non-experts (banks) to discriminate between good and bad quality projects is reduced. Experts can continue to charge a low rate of interest for financing because their expertise enables them to identify good quality projects and to avoid unprofitable investments. On the other hand, non-experts now charge a higher rate of interest in order to screen bad projects. More entrepreneurs, therefore, disclose their projects to experts to raise funds from them. Such experts are, however, able to copy the projects, and disclosure to them invites the possibility of competition. Thus the entrepreneur and the expert may merge so as to achieve product market collusion. As well as rationalizing mergers, the model can also explain various forms of venture financing by experts, such as corporate investors and business angels.
Abstract:
Immunity-related GTPases (IRG) play an important role in defense against intracellular pathogens. One member of this gene family in humans, IRGM, has been recently implicated as a risk factor for Crohn's disease. We analyzed the detailed structure of this gene family among primates and showed that most of the IRG gene cluster was deleted early in primate evolution, after the divergence of the anthropoids from prosimians (about 50 million years ago). Comparative sequence analysis of New World and Old World monkey species shows that the single-copy IRGM gene became pseudogenized as a result of an Alu retrotransposition event in the anthropoid common ancestor that disrupted the open reading frame (ORF). We find that the ORF was reestablished as a part of a polymorphic stop codon in the common ancestor of humans and great apes. Expression analysis suggests that this change occurred in conjunction with the insertion of an endogenous retrovirus, which altered the transcription initiation, splicing, and expression profile of IRGM. These data argue that the gene became pseudogenized and was then resurrected through a series of complex structural events and suggest remarkable functional plasticity where alleles experience diverse evolutionary pressures over time. Such dynamism in structure and evolution may be critical for a gene family locked in an arms race with an ever-changing repertoire of intracellular parasites.
Abstract:
It is generally accepted that the extent of phenotypic change between human and great apes is dissonant with the rate of molecular change. Between these two groups, proteins are virtually identical, cytogenetically there are few rearrangements that distinguish ape-human chromosomes, and rates of single-base-pair change and retrotransposon activity have slowed particularly within hominid lineages when compared to rodents or monkeys. Studies of gene family evolution indicate that gene loss and gain are enriched within the primate lineage. Here, we perform a systematic analysis of duplication content of four primate genomes (macaque, orang-utan, chimpanzee and human) in an effort to understand the pattern and rates of genomic duplication during hominid evolution. We find that the ancestral branch leading to human and African great apes shows the most significant increase in duplication activity both in terms of base pairs and in terms of events. This duplication acceleration within the ancestral species is significant when compared to lineage-specific rate estimates even after accounting for copy-number polymorphism and homoplasy. We discover striking examples of recurrent and independent gene-containing duplications within the gorilla and chimpanzee that are absent in the human lineage. Our results suggest that the evolutionary properties of copy-number mutation differ significantly from other forms of genetic mutation and, in contrast to the hominid slowdown of single-base-pair mutations, there has been a genomic burst of duplication activity at this period during human evolution.
Abstract:
Background: Kabuki syndrome (KS) is a multiple congenital anomaly syndrome characterized by specific facial features, mild to moderate mental retardation, postnatal growth delay, skeletal abnormalities, and unusual dermatoglyphic patterns with prominent fingertip pads. A 3.5 Mb duplication at 8p23.1-p22 was once reported as a specific alteration in KS but has not been confirmed in other patients. The molecular basis of KS remains unknown. Methods: We have studied 16 Spanish patients with a clinical diagnosis of KS or KS-like to search for genomic imbalances using genome-wide array technologies. All putative rearrangements were confirmed by FISH, microsatellite markers and/or MLPA assays, which also determined whether the imbalance was de novo or inherited. Results: No duplication at 8p23.1-p22 was observed in our patients. We detected complex rearrangements involving 2q in two patients with Kabuki-like features: 1) a de novo inverted duplication of 11 Mb with a 4.5 Mb terminal deletion, and 2) a de novo 7.2 Mb terminal deletion in a patient with an additional de novo 0.5 Mb interstitial deletion in 16p. Additional copy number variations (CNV), either inherited or reported in normal controls, were identified and interpreted as polymorphic variants. No specific CNV was significantly increased in the KS group. Conclusion: Our results further confirm that genomic duplications of the 8p23 region are not a common cause of KS and failed to detect any other recurrent rearrangement causing this disorder. The detection of two patients with 2q37 deletions suggests that there is a phenotypic overlap between the two conditions, and screening this region in Kabuki-like patients should be considered.
Abstract:
Nowadays most of us know of the existence of many free programs, but we must be clear that free (as in freedom) does not always mean a program free of charge. Although it sometimes can be, free software encompasses much more than that: it is a way of thinking about and understanding software, and over the years it has generated an entire social movement. A free program is one that guarantees users the freedom to run, copy, distribute, study, change and improve the programmed code, as its basic freedoms define. Free software can be found running on personal computers, in schools, in companies of all kinds, in public administrations, etc., since most of the programs currently in use, as we have seen, have a free equivalent. Whether it is feasible for an organization to switch to free software depends largely on its environment, which determines how easy or difficult the migration will be. The purpose of this project is, first of all, to carry out a broad study of the world of free software and its social movement. Different aspects of free software have been researched in order to understand it in depth, and a possible implementation has then been proposed for a home user and for a public administration, taking into account all the aspects covered in the study and assessing whether all the ideas it defends and the benefits it brings are applicable and viable for any person and setting, and why. As the main conclusion, I would highlight that although free software has an appealing ideology and technically excellent programs (without this being its main objective), I think there is still a long way to go with regard to migration in large environments: in a town council, for example, a complete migration is still difficult (although not impossible, since some have migrated). The annexes include a glossary of terminology, with words that not everyone may know and that it was considered appropriate to include in that section. The first time one of these words appears, it is marked with an *.
Abstract:
The article presents the requirements for written text intended to be read on the web, as well as general recommendations for editing text aimed at facilitating its reading. After an overview of the reading of written text and of reading techniques and strategies, the author turns to the factors that condition reading on the web, as well as other specifics. Based on these elements, a series of recommendations for good practice in writing correctly for the web is offered, organised by the three basic components of written text: style or tone, structure, and presentation (orthographic and typographic correctness). Tools that can be useful for improving text editing in a corporate environment are also introduced.