920 results for segmental duplication
Abstract:
Spink, S., Urquhart, C., Cox, A. & Higher Education Academy - Information and Computer Sciences Subject Centre. (2007). Procurement of electronic content across the UK National Health Service and Higher Education sectors. Report to JISC executive and LKDN executive. Sponsorship: JISC/LKDN
Abstract:
Urquhart, C. J., Cox, A. M., & Spink, S. (2007). Collaboration on procurement of e-content between the National Health Service and higher education in the UK. Interlending & Document Supply, 35(3), 164-170. Sponsorship: JISC, LKDN
Abstract:
Riley, M. C., Clare, A., King, R. D. (2007). Locational distribution of gene functional classes in Arabidopsis thaliana. BMC Bioinformatics 8, Article No: 112 Sponsorship: EPSRC / RAEng
Abstract:
Faculty of Biology: Institute of Molecular Biology and Biotechnology (Wydział Biologii: Instytut Biologii Molekularnej i Biotechnologii)
Abstract:
We analyzed the logs of our departmental HTTP server http://cs-www.bu.edu as well as the logs of the more popular Rolling Stones HTTP server http://www.stones.com. These servers have very different purposes; the former caters primarily to local clients, whereas the latter caters exclusively to remote clients all over the world. In both cases, our analysis showed that remote HTTP accesses were confined to a very small subset of documents. Using a validated analytical model of server popularity and file access profiles, we show that by disseminating the most popular documents on servers (proxies) closer to the clients, network traffic could be reduced considerably, while server loads are balanced. We argue that this process could be generalized so as to provide for an automated demand-based duplication of documents. We believe that such server-based information dissemination protocols will be more effective at reducing both network bandwidth and document retrieval times than client-based caching protocols [2].
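The dissemination idea above — replicate only the most popular documents to proxies near clients — can be sketched as a simple popularity cut-off over an access log. This is an illustrative toy, not the paper's validated analytical model; the function name, the example log, and the 90% coverage threshold are all assumptions:

```python
from collections import Counter

def select_for_replication(accesses, coverage=0.9):
    """Pick the smallest set of documents whose combined share of
    requests reaches `coverage`; these are the candidates to
    duplicate on proxies closer to the clients.
    `accesses` is an iterable of requested document names."""
    counts = Counter(accesses)
    total = sum(counts.values())
    chosen, covered = [], 0
    for doc, n in counts.most_common():
        if covered / total >= coverage:
            break
        chosen.append(doc)
        covered += n
    return chosen

# Hypothetical access log: a few documents dominate, as in the
# heavy-tailed popularity profiles the abstract describes.
log = ["index.html"] * 50 + ["tour.html"] * 30 + ["faq.html"] * 15 + ["misc.html"] * 5
print(select_for_replication(log, coverage=0.9))
# → ['index.html', 'tour.html', 'faq.html']
```

Because popularity is heavy-tailed, a small prefix of the ranked list covers most requests, which is what makes demand-based duplication pay off.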
Abstract:
The CIL compiler for core Standard ML compiles whole programs using a novel typed intermediate language (TIL) with intersection and union types and flow labels on both terms and types. The CIL term representation duplicates portions of the program where intersection types are introduced and union types are eliminated. This duplication makes it easier to represent type information and to introduce customized data representations. However, duplication incurs compile-time space costs that are potentially much greater than are incurred in TILs employing type-level abstraction or quantification. In this paper, we present empirical data on the compile-time space costs of using CIL as an intermediate language. The data shows that these costs can be made tractable by using sufficiently fine-grained flow analyses together with standard hash-consing techniques. The data also suggests that non-duplicating formulations of intersection (and union) types would not achieve significantly better space complexity.
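Hash-consing, which the abstract credits with keeping duplication costs tractable, can be illustrated with a minimal interning table: structurally equal terms are built once and shared thereafter. This is a generic sketch, not the CIL implementation; the class and constructor names are invented for illustration:

```python
class HashConsTable:
    """Interning table: structurally equal terms share one object,
    so duplicated portions of a program cost one node, not many."""
    def __init__(self):
        self._table = {}

    def make(self, tag, *children):
        # Children are already interned, so object identity is a
        # sound (and fast) structural key for the parent term.
        key = (tag, *(id(c) for c in children))
        if key not in self._table:
            self._table[key] = (tag, children)
        return self._table[key]

hc = HashConsTable()
t_int = hc.make("int")
t2 = hc.make("arrow", hc.make("int"), hc.make("int"))
t3 = hc.make("arrow", hc.make("int"), hc.make("int"))
print(t2 is t3)  # duplicated type terms collapse to a single shared node
```

The same idea scales to whole-program intermediate representations: duplication at the term level no longer implies duplication in memory.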
Abstract:
We revisit the problem of connection management for reliable transport. At one extreme, a pure soft-state (SS) approach (as in Delta-t [9]) safely removes the state of a connection at the sender and receiver once the state timers expire, without the need for explicit removal messages; new connections are likewise established without an explicit handshaking phase. At the other extreme, a hybrid hard-state/soft-state (HS+SS) approach (as in TCP) uses both explicit handshaking and timer-based management of the connection's state. In this paper, we consider the worst-case scenario of reliable single-message communication and develop a common analytical model that can be instantiated to capture either the SS approach or the HS+SS approach. We compare the two approaches in terms of goodput, message overhead, and state overhead. We also use simulations to compare against other approaches, and evaluate them in terms of correctness (with respect to data loss and duplication) and robustness to bad network conditions (high message loss rates and variable channel delays). Our results show that the SS approach is more robust and has lower message overhead. On the other hand, SS requires more memory to keep connection state, which reduces goodput. Given that memory is becoming larger and cheaper, SS presents the best choice over bandwidth-constrained, error-prone networks.
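The pure soft-state idea — connection state disappears on timer expiry, with no explicit removal messages — can be sketched as follows. This is an illustrative toy, not the Delta-t protocol or the paper's analytical model; the class name, tick-based clock, and TTL value are assumptions:

```python
class SoftStateTable:
    """Per-connection state that expires after `ttl` ticks of
    inactivity; removal is purely timer-driven, as in a pure
    soft-state (Delta-t style) scheme -- no teardown messages."""
    def __init__(self, ttl):
        self.ttl = ttl
        self.expiry = {}   # conn_id -> absolute expiry time
        self.now = 0

    def touch(self, conn_id):
        # Any message on the connection refreshes its timer;
        # a first message implicitly "establishes" the connection.
        self.expiry[conn_id] = self.now + self.ttl

    def tick(self):
        self.now += 1
        expired = [c for c, t in self.expiry.items() if t <= self.now]
        for c in expired:
            del self.expiry[c]   # state removed by timeout alone
        return expired

tbl = SoftStateTable(ttl=3)
tbl.touch("A")
tbl.tick(); tbl.tick()
tbl.touch("A")                   # refresh before expiry
tbl.tick(); tbl.tick(); tbl.tick()
print("A" in tbl.expiry)         # → False: state silently timed out
```

The memory cost the abstract mentions is visible here: every live connection occupies a table entry until its timer runs out, even if the peer is long gone.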
Abstract:
Hidden State Shape Models (HSSMs) [2], a variant of Hidden Markov Models (HMMs) [9], were proposed to detect shape classes of variable structure in cluttered images. In this paper, we formulate a probabilistic framework for HSSMs which provides two major improvements in comparison to the previous method [2]. First, while the method in [2] required the scale of the object to be passed as an input, the method proposed here estimates the scale of the object automatically. This is achieved by introducing a new term for the observation probability that is based on an object-clutter feature model. Second, a segmental HMM [6, 8] is applied to model the "duration probability" of each HMM state, which is learned from the shape statistics in a training set and helps obtain meaningful registration results. Using a segmental HMM provides a principled way to model dependencies between the scales of different parts of the object. In object localization experiments on a dataset of real hand images, the proposed method significantly outperforms the method of [2], reducing the incorrect localization rate from 40% to 15%. The improvement in accuracy becomes more significant if we consider that the method proposed here is scale-independent, whereas the method of [2] takes as input the scale of the object we want to localize.
Abstract:
A method is presented for converting unstructured program schemas to strictly equivalent structured form. The predicates of the original schema are left intact, with structuring achieved by the duplication of the original decision vertices without the introduction of compound predicate expressions, or, where possible, by function duplication alone. It is shown that structured schemas must have at least as many decision vertices as the original unstructured schema, and must have more when the original schema contains branches out of decision constructs. The structuring method allows the complete avoidance of function duplication, but only at the expense of decision vertex duplication. It is shown that structured schemas have greater space-time requirements in general than their equivalent optimal unstructured counterparts, and at best have the same requirements.
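The kind of restructuring described — removing a branch out of a decision construct at the cost of duplicating part of the schema — can be illustrated with a toy mid-loop exit. This is a generic example of the transformation, not the paper's algorithm; the predicate p and functions f and g are placeholders:

```python
def unstructured(p, f, g, x):
    # Loop with a mid-body exit: the branch leaves the decision
    # construct, which is what makes the schema "unstructured".
    while True:
        x = f(x)
        if p(x):
            break
        x = g(x)
    return x

def structured(p, f, g, x):
    # Strictly equivalent while-do form: the exit test p now guards
    # the loop, at the cost of duplicating the function vertex f.
    x = f(x)
    while not p(x):
        x = g(x)
        x = f(x)
    return x

p = lambda v: v >= 10
f = lambda v: v + 3
g = lambda v: v * 2
print(structured(p, f, g, 1) == unstructured(p, f, g, 1))  # → True
```

Here equivalence is bought with function duplication alone (f appears twice); when the exit predicate itself sits mid-construct, the decision vertex must be duplicated instead, which is the trade-off the abstract quantifies.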
Abstract:
Aim: Diabetes is an important barometer of health system performance. This chronic condition is a source of significant morbidity and premature mortality, and a major contributor to health care costs. There is an increasing focus internationally, and more recently nationally, on system-, practice- and professional-level initiatives to promote the quality of care. The aim of this thesis was to investigate the ‘quality chasm’ around the organisation and delivery of diabetes care in general practice, to explore GPs’ attitudes to engaging in quality improvement activities, and to examine efforts to improve the quality of diabetes care in Ireland from practice to policy. Methods: Quantitative and qualitative methods were used. As part of a mixed methods sequential design, a postal survey of 600 GPs was conducted to assess the organisation of care. This was followed by an in-depth qualitative study using semi-structured interviews with a purposive sample of 31 GPs from urban and rural areas. The qualitative methodology was also used to examine GPs’ attitudes to engaging in quality improvement. Data were analysed using a Framework approach. A second observational study was used to assess the quality of care in 63 practices with a special interest in diabetes. Data on 3,010 adults with Type 2 diabetes from three primary care initiatives were analysed, and the results were benchmarked against national guidelines and standards of care in the UK. The final study was an instrumental case study of policy formulation. Semi-structured interviews were conducted with 15 members of the Expert Advisory Group (EAG) for Diabetes. Thematic analysis was applied to the data using three theories of the policy process as analytical tools. Results: The survey response rate was 44% (n=262).
Results suggested care delivery was largely unstructured: 45% of GPs had a diabetes register (n=157), 53% reported using guidelines (n=140), 30% had a formal call-recall system (n=78), and 24% had none of these organisational features (n=62). Only 10% of GPs had a formal shared protocol with the local hospital specialist diabetes team (n=26). The lack of coordination between settings was identified as a major barrier to providing optimal care, leading to waiting times, overburdened hospitals and avoidable duplication. The lack of remuneration for chronic disease management had a ripple effect, creating costs for patients and apathy among GPs. There was also a sense of inertia around quality improvement activities, particularly at a national level. This attitude was strongly influenced by previous experiences of change in the health system. In contrast, GPs spoke positively about change at a local level, which was facilitated by a practice ethos, leadership and a special interest in diabetes. The second quantitative study found that practices with a special interest in diabetes achieved a standard of care comparable to the UK in terms of the recording of clinical processes of care and the achievement of clinical targets; 35% of patients reached the HbA1c target of <6.5%, compared to 26% in England and Wales. With regard to diabetes policy formulation, the evolving process of action and inaction was best described by the Multiple Streams Theory. Within the EAG, the formulation of recommendations was facilitated by overarching agreement on the “obvious” priorities, while the details of proposals were influenced by personal preferences and local capacity. In contrast, the national decision-making process was protracted and ambiguous. The lack of impetus from senior management, coupled with the lack of power conferred on the EAG, impeded progress. Conclusions: The findings highlight the inconsistency of diabetes care in Ireland.
The main barriers to optimal diabetes management centre on the organisation and coordination of care at the systems level, with consequences for practice, providers and patients. Quality improvement initiatives need to stimulate a sense of ownership and interest among frontline service providers to address the local sense of inertia towards national change. To date, quality improvement in diabetes care has been largely dependent on the “special interest” of professionals. The challenge for the Irish health system is to embed this activity as part of routine practice, professional responsibility and the underlying health care culture.
Abstract:
Internal tandem duplication of FMS-like receptor tyrosine kinase (FLT3-ITD) has been associated with an aggressive AML phenotype. FLT3-ITD-expressing cell lines have been shown to generate increased levels of reactive oxygen species (ROS) and DNA double strand breaks (dsbs). However, the molecular basis of how FLT3-ITD-driven ROS leads to the aggressive form of AML is not clearly understood. Herein, we observe that the majority of H2O2 in FLT3-ITD-expressing MV4-11 cells colocalises to the endoplasmic reticulum (ER). Furthermore, ER localisation of ROS in MV4-11 cells corresponds to the localisation of p22phox, a small membrane-bound subunit of the NOX complex. We also show that 32D cells, a myeloblast-like cell line transfected with FLT3-ITD, possess higher steady-state protein levels of p22phox than their wild type FLT3 (FLT3-WT)-expressing counterparts. Moreover, the inhibition of FLT3-ITD, using various FLT3 tyrosine kinase inhibitors, uniformly results in a posttranslational downregulation of p22phox. We also show that depletion of NOX2, NOX4 and p22phox, but not NOX1, causes a reduction in endogenous H2O2 levels. We show that genomic instability induced by FLT3-ITD leads to an increase in nuclear levels of H2O2. The presence of H2O2 in the nucleus is largely reduced by inhibition of FLT3-ITD or NOX. Similar results are also observed following siRNA knockdowns of p22phox or NOX4. We demonstrate that 32D cells transfected with FLT3-ITD have a higher level of DNA damage than 32D cells transfected with FLT3-WT. Additionally, inhibition of FLT3-ITD, as well as p22phox and NOX knockdowns, decreases the number of DNA dsbs. In summary, this study presents a novel mechanism of genomic instability generation in FLT3-ITD-expressing AML cells, whereby FLT3-ITD activates NOX complexes by stabilising p22phox. This in turn leads to elevated generation of ROS and DNA damage in these cells.
Abstract:
The task of nanofabrication can, in principle, be divided into two separate tracks: generation and replication of the patterned features. These two tracks differ in their characteristics, requirements, and aspects of emphasis. In general, generation of patterns is achieved in a serial fashion using techniques that are typically slow, making this process practical only for producing a small number of copies. Only when combined with a rapid duplication technique does fabrication at high throughput and low cost become feasible. Nanoskiving is unique in that it can be used for both generation and duplication of patterned nanostructures.
Abstract:
We present the analysis of twenty human genomes to evaluate the prospects for identifying rare functional variants that contribute to a phenotype of interest. We sequenced at high coverage ten "case" genomes from individuals with severe hemophilia A and ten "control" genomes. We summarize the number of genetic variants emerging from a study of this magnitude, and provide a proof of concept for the identification of rare and highly-penetrant functional variants by confirming that the cause of hemophilia A is easily recognizable in this data set. We also show that the number of novel single nucleotide variants (SNVs) discovered per genome seems to stabilize at about 144,000 new variants per genome, after the first 15 individuals have been sequenced. Finally, we find that, on average, each genome carries 165 homozygous protein-truncating or stop loss variants in genes representing a diverse set of pathways.
Abstract:
Alzheimer's disease is a complex and progressive neurodegenerative disease leading to loss of memory, cognitive impairment, and ultimately death. To date, six large-scale genome-wide association studies have been conducted to identify SNPs that influence disease predisposition. These studies have confirmed the well-known APOE epsilon4 risk allele, identified a novel variant that influences disease risk within the APOE epsilon4 population, found a SNP that modifies the age of disease onset, as well as reported the first sex-linked susceptibility variant. Here we report a genome-wide scan of Alzheimer's disease in a set of 331 cases and 368 controls, extending analyses for the first time to include assessments of copy number variation. In this analysis, no new SNPs show genome-wide significance. We also screened for effects of copy number variation, and while nothing was significant, a duplication in CHRNA7 appears interesting enough to warrant further investigation.
Abstract:
The adrenergic receptors (ARs) (subtypes alpha 1, alpha 2, beta 1, and beta 2) are a prototypic family of guanine nucleotide binding regulatory protein-coupled receptors that mediate the physiological effects of the hormone epinephrine and the neurotransmitter norepinephrine. We have previously assigned the genes for beta 2- and alpha 2-AR to human chromosomes 5 and 10, respectively. By Southern analysis of somatic cell hybrids and in situ chromosomal hybridization, we have now mapped the alpha 1-AR gene to chromosome 5q32-q34, the same position as beta 2-AR, and the beta 1-AR gene to chromosome 10q24-q26, the region where alpha 2-AR is located. In mouse, both alpha 2- and beta 1-AR genes were assigned to chromosome 19, and the alpha 1-AR locus was localized to chromosome 11. Pulsed field gel electrophoresis has shown that the alpha 1- and beta 2-AR genes in humans are within 300 kilobases (kb) and the distance between the alpha 2- and beta 1-AR genes is less than 225 kb. The proximity of these two pairs of AR genes and the sequence similarity that exists among all the ARs strongly suggest that they are evolutionarily related. Moreover, they likely arose from a common ancestral receptor gene and subsequently diverged through gene duplication and chromosomal duplication to perform their distinctive roles in mediating the physiological effects of catecholamines. The AR genes thus provide a paradigm for understanding the evolution of such structurally conserved yet functionally divergent families of receptor molecules.