31 results for Transaction throughput
in CentAUR: Central Archive University of Reading - UK
Abstract:
We have designed and implemented a low-cost digital system using closed-circuit television cameras coupled to a digital acquisition system for the recording of in vivo behavioral data in rodents and for allowing observation and recording of more than 10 animals simultaneously at a reduced cost, as compared with commercially available solutions. This system has been validated using two experimental rodent models: one involving chemically induced seizures and one assessing appetite and feeding. We present observational results showing comparable or improved levels of accuracy and observer consistency between this new system and traditional methods in these experimental models, discuss advantages of the presented system over conventional analog systems and commercially available digital systems, and propose possible extensions to the system and applications to nonrodent studies.
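To make the acquisition architecture described above concrete, here is a minimal multi-camera recording sketch in Python using OpenCV; it assumes the cameras are exposed as standard video capture devices, and the device count, resolution, frame rate and file names are hypothetical rather than taken from the paper.

```python
# Minimal sketch: record several cameras to disk simultaneously.
# Device indices, frame size and frame rate are assumptions for illustration.
import cv2

NUM_CAMERAS = 4                 # the published system observes >10 animals
FPS = 15.0
FRAME_SIZE = (640, 480)

captures = [cv2.VideoCapture(i) for i in range(NUM_CAMERAS)]
fourcc = cv2.VideoWriter_fourcc(*"XVID")
writers = [cv2.VideoWriter(f"cage_{i}.avi", fourcc, FPS, FRAME_SIZE)
           for i in range(NUM_CAMERAS)]

try:
    while True:
        for cap, writer in zip(captures, writers):
            ok, frame = cap.read()
            if ok:
                writer.write(cv2.resize(frame, FRAME_SIZE))
except KeyboardInterrupt:
    pass                        # stop recording with Ctrl+C
finally:
    for cap in captures:
        cap.release()
    for writer in writers:
        writer.release()
```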
Abstract:
The basic premise of transaction-cost theory is that the decision to outsource, rather than to undertake work in-house, is determined by the relative costs incurred in each of these forms of economic organization. In construction, the "make or buy" decision invariably leads to a contract. Reducing the costs of entering into a contractual relationship (transaction costs) raises the value of production and is therefore desirable. Commonly applied methods of contractor selection may not minimise the costs of contracting. Research evidence suggests that although competitive tendering typically results in the lowest bidder winning the contract, this may not represent the lowest project cost after completion. Multi-parameter and quantitative models for contractor selection have been developed to identify the best (or least risky) among bidders. A major area in which research is still needed is the impact of different methods of contractor selection on the costs of entering into a contract and on the decision to outsource.
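The multi-parameter selection models mentioned in the abstract can be illustrated with a simple weighted-score sketch; the criteria, weights and bidder figures below are hypothetical and are not drawn from the cited research.

```python
# Hypothetical weighted-score contractor selection: each bid is scored on
# several normalised criteria rather than on tender price alone.
bids = {
    "Contractor A": {"price": 0.90, "track_record": 0.60, "financial_stability": 0.70},
    "Contractor B": {"price": 0.75, "track_record": 0.85, "financial_stability": 0.90},
    "Contractor C": {"price": 0.95, "track_record": 0.40, "financial_stability": 0.50},
}
weights = {"price": 0.5, "track_record": 0.3, "financial_stability": 0.2}  # assumed weights

def weighted_score(criteria):
    return sum(weights[name] * value for name, value in criteria.items())

for name, criteria in sorted(bids.items(), key=lambda kv: -weighted_score(kv[1])):
    print(f"{name}: {weighted_score(criteria):.3f}")
# The cheapest bid (Contractor C) is not necessarily the highest-scoring choice.
```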
Abstract:
The tagged microarray marker (TAM) method allows high-throughput differentiation between predicted alternative PCR products. Typically, the method is used as a molecular marker approach to determining the allelic states of single nucleotide polymorphisms (SNPs) or insertion-deletion (indel) alleles at genomic loci in multiple individuals. Biotin-labeled PCR products are spotted, unpurified, onto a streptavidin-coated glass slide and the alternative products are differentiated by hybridization to fluorescent detector oligonucleotides that recognize corresponding allele-specific tags on the PCR primers. The main attractions of this method are its high throughput (thousands of PCRs are analyzed per slide), flexibility of scoring (any combination, from a single marker in thousands of samples to thousands of markers in a single sample, can be analyzed) and flexibility of scale (any experimental scale, from a small lab setting up to a large project). This protocol describes an experiment involving 3,072 PCRs scored on a slide. The whole process from the start of PCR setup to receiving the data spreadsheet takes 2 d.
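The allele-scoring step, in which the two fluorescent detector oligonucleotides differentiate the alternative PCR products at each spot, can be sketched as a simple two-channel intensity classifier; the function, thresholds and example intensities below are assumptions for illustration and are not part of the published protocol.

```python
# Hypothetical two-channel allele call for a single spotted PCR product.
def call_genotype(signal_allele1, signal_allele2, min_signal=500.0, het_band=0.3):
    """Classify a spot as homozygous, heterozygous or failed from two detector signals."""
    total = signal_allele1 + signal_allele2
    if total < min_signal:                 # too little product on the spot
        return "no call"
    frac1 = signal_allele1 / total
    if frac1 >= 1.0 - het_band:
        return "allele1/allele1"
    if frac1 <= het_band:
        return "allele2/allele2"
    return "allele1/allele2"

print(call_genotype(4200.0, 180.0))    # -> allele1/allele1
print(call_genotype(2100.0, 1900.0))   # -> allele1/allele2
print(call_genotype(120.0, 90.0))      # -> no call
```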
Abstract:
BACKGROUND: In order to maintain the most comprehensive structural annotation databases we must carry out regular updates for each proteome using the latest profile-profile fold recognition methods. The ability to carry out these updates on demand is necessary to keep pace with the regular updates of sequence and structure databases. Providing the highest quality structural models requires the most intensive profile-profile fold recognition methods running with the very latest available sequence databases and fold libraries. However, running these methods on such a regular basis for every sequenced proteome requires large amounts of processing power. In this paper we describe and benchmark the JYDE (Job Yield Distribution Environment) system, which is a meta-scheduler designed to work above cluster schedulers, such as Sun Grid Engine (SGE) or Condor. We demonstrate the ability of JYDE to distribute the load of genomic-scale fold recognition across multiple independent Grid domains. We use the most recent profile-profile version of our mGenTHREADER software in order to annotate the latest version of the Human proteome against the latest sequence and structure databases in as short a time as possible.
RESULTS: We show that our JYDE system is able to scale to large numbers of intensive fold recognition jobs running across several independent computer clusters. Using our JYDE system we have been able to annotate 99.9% of the protein sequences within the Human proteome in less than 24 hours, by harnessing over 500 CPUs from 3 independent Grid domains.
CONCLUSION: This study clearly demonstrates the feasibility of carrying out on demand high quality structural annotations for the proteomes of major eukaryotic organisms. Specifically, we have shown that it is now possible to provide complete regular updates of profile-profile based fold recognition models for entire eukaryotic proteomes, through the use of Grid middleware such as JYDE.
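The abstract does not describe JYDE's internal interfaces, but the meta-scheduling idea, a layer that farms independent fold-recognition jobs out across several underlying cluster schedulers, can be sketched as below; the classes, capacities and greedy least-loaded policy are illustrative assumptions, not the published design.

```python
# Illustrative meta-scheduler sketch: dispatch one job per protein sequence
# to whichever Grid domain currently has the lowest relative load.
from dataclasses import dataclass, field

@dataclass
class ClusterDomain:
    name: str
    total_cpus: int
    queued: list = field(default_factory=list)

    def load(self):
        return len(self.queued) / self.total_cpus

def dispatch(jobs, domains):
    """Greedy dispatch: send each job to the currently least-loaded domain."""
    for job in jobs:
        target = min(domains, key=lambda d: d.load())
        target.queued.append(job)
    return domains

jobs = [f"fold_recognition:seq_{i:05d}" for i in range(30000)]   # roughly proteome-sized
domains = [ClusterDomain("grid-A", 200), ClusterDomain("grid-B", 200),
           ClusterDomain("grid-C", 100)]                         # ~500 CPUs in total
for d in dispatch(jobs, domains):
    print(d.name, len(d.queued), "jobs queued")
```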
Abstract:
It has become evident that the mystery of life will not be deciphered just by decoding its blueprint, the genetic code. In the life and biomedical sciences, research efforts are now shifting from pure gene analysis to the analysis of all biomolecules involved in the machinery of life. One area of these postgenomic research fields is proteomics. Although proteomics, which basically encompasses the analysis of proteins, is not a new concept, it is far from being a research field that can rely on routine and large-scale analyses. At the time the term proteomics was coined, a gold-rush mentality was created, promising vast and quick riches (i.e., solutions to the immensely complex questions of life and disease). Predictably, the reality has been quite different. The complexity of proteomes and the wide variations in the abundances and chemical properties of their constituents have rendered the use of systematic analytical approaches only partially successful, and biologically meaningful results have been slow to arrive. However, to learn more about how cells and, hence, life works, it is essential to understand the proteins and their complex interactions in their native environment. This is why proteomics will be an important part of the biomedical sciences for the foreseeable future. Therefore, any advances in providing the tools that make protein analysis a more routine and large-scale business, ideally using automated and rapid analytical procedures, are highly sought after. This review will provide some basics, thoughts and ideas on the exploitation of matrix-assisted laser desorption/ionization in biological mass spectrometry - one of the most commonly used analytical tools in proteomics - for high-throughput analyses.
High throughput, high resolution selection of polymorphic microsatellite loci for multiplex analysis
Abstract:
Background: Large-scale genetic profiling, mapping and genetic association studies require access to a series of well-characterised and polymorphic microsatellite markers with distinct and broad allele ranges. Selection of complementary microsatellite markers with non-overlapping allele ranges has historically proved to be a bottleneck in the development of multiplex microsatellite assays. The characterisation process for each microsatellite locus can be laborious and costly given the need for numerous, locus-specific fluorescent primers.
Results: Here, we describe a simple and inexpensive approach to select useful microsatellite markers. The system is based on the pooling of multiple unlabelled PCR amplicons and their subsequent ligation into a standard cloning vector. A second round of amplification, using generic labelled primers targeting the vector and unlabelled locus-specific primers targeting the microsatellite flanking region, yields allelic profiles that are representative of all individuals contained within the pool. The suitability of various DNA pool sizes was then tested for this purpose. DNA template pools containing between 8 and 96 individuals were assessed for the determination of allele ranges of individual microsatellite markers across a broad population. This helped resolve the balance between using pools large enough to detect many alleles and the risk of including so many individuals that rare alleles are over-diluted and do not appear in the pooled microsatellite profile. Pools of DNA from 12 individuals allowed the reliable detection of all alleles present in the pool.
Conclusion: The use of generic vector-specific fluorescent primers and unlabelled locus-specific primers provides a high-resolution, rapid and inexpensive approach for the selection of highly polymorphic microsatellite loci that possess non-overlapping allele ranges for use in large-scale multiplex assays.
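The pool-size trade-off described in the Results can be made concrete with a small calculation: a rare allele carried by a single heterozygous individual contributes 1/(2n) of the template in a pool of n diploid individuals, so larger pools push rare alleles towards the detection limit. The ~4% detection threshold below is an assumption for illustration only.

```python
# Fraction of pooled template contributed by a rare allele present once
# (one heterozygous carrier) in a pool of n diploid individuals.
def rare_allele_fraction(pool_size):
    return 1.0 / (2 * pool_size)

DETECTION_LIMIT = 0.04   # assumed: peaks below ~4% of template may be missed

for n in (8, 12, 24, 48, 96):
    frac = rare_allele_fraction(n)
    status = "detectable" if frac >= DETECTION_LIMIT else "likely missed"
    print(f"pool of {n:2d}: rare allele = {frac:.1%} of template -> {status}")
```

Under this assumed threshold, pools of around 12 individuals keep even a single-copy allele above the detection limit, which is consistent with the reliable detection reported for 12-individual pools.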
Abstract:
The popularity of wireless local area networks (WLANs) has resulted in their dense deployments around the world. While this increases capacity and coverage, the increased interference can severely degrade the performance of WLANs. However, the impact of interference on throughput in dense WLANs with multiple access points (APs) has received very little prior research attention. This is believed to be due to 1) the inaccurate assumption that throughput is always a monotonically decreasing function of interference and 2) the prohibitively high complexity of an accurate analytical model. In this work, we firstly provide a useful classification of commonly found interference scenarios. Secondly, we investigate the impact of interference on throughput for each class based on an approach that determines the possibility of parallel transmissions. Extensive packet-level simulations using OPNET have been performed to support the observations made. Interestingly, the results show that in some topologies increased interference can lead to higher throughput, and vice versa.
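The "possibility of parallel transmissions" used to classify interference scenarios can be illustrated with a simple geometric carrier-sensing check: two AP-client links may transmit concurrently only if neither transmitter is inside the other's carrier-sense range and neither receiver is inside the other transmitter's interference range. The ranges, coordinates and two-condition model below are simplifying assumptions, not the classification used in the paper.

```python
# Simplified parallel-transmission check for two WLAN links.
import math

CS_RANGE = 550.0    # carrier-sensing range in metres (assumed)
INT_RANGE = 400.0   # interference range around a receiver in metres (assumed)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def can_transmit_in_parallel(tx1, rx1, tx2, rx2):
    # Transmitters must not sense each other ...
    if dist(tx1, tx2) <= CS_RANGE:
        return False
    # ... and neither receiver may lie inside the other transmitter's interference range.
    if dist(rx1, tx2) <= INT_RANGE or dist(rx2, tx1) <= INT_RANGE:
        return False
    return True

# Widely separated links: concurrent transmissions are possible.
print(can_transmit_in_parallel((0, 0), (50, 0), (900, 0), (950, 0)))   # True
# The same links moved closer together: carrier sensing serialises them.
print(can_transmit_in_parallel((0, 0), (50, 0), (300, 0), (350, 0)))   # False
```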
Abstract:
Quadrature Phase Shift Keying (QPSK) and Dual Carrier Modulation (DCM) are currently used as the modulation schemes for Multiband Orthogonal Frequency Division Multiplexing (MB-OFDM) in the ECMA-368 defined Ultra-Wideband (UWB) radio platform. ECMA-368 has been chosen as the physical radio platform for many systems including Wireless USB (W-USB), Bluetooth 3.0 and Wireless HDMI; hence ECMA-368 is important to consumer electronics and to the user experience of these products. To enable the transport of high-rate USB, ECMA-368 offers up to 480 Mb/s instantaneous bit rate to the Medium Access Control (MAC) layer, but depending on radio channel conditions, dropped packets unfortunately result in a lower throughput. This paper presents an alternative high data rate modulation scheme that fits within the configuration of the current standard, increasing system throughput by achieving 600 Mb/s (reliable to 3.1 meters) and thus maintaining the high-rate USB throughput even with a moderate level of dropped packets. The modulation system is termed Dual Circular 32-QAM (DC 32-QAM). The system performance for DC 32-QAM modulation is presented and compared with 16-QAM and DCM.
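The trade-off the paper targets, raw PHY rate versus throughput lost to dropped packets, can be sketched with a simple effective-throughput calculation; the PHY rates come from the abstract, but the packet-error-rate figures are hypothetical.

```python
# Effective MAC-layer throughput under packet loss: rate * (1 - PER).
def effective_throughput(phy_rate_mbps, packet_error_rate):
    return phy_rate_mbps * (1.0 - packet_error_rate)

scenarios = [
    ("480 Mb/s PHY, clean channel",        480, 0.02),
    ("480 Mb/s PHY, moderate packet loss", 480, 0.15),
    ("600 Mb/s PHY, moderate packet loss", 600, 0.15),
]
for label, rate, per in scenarios:
    print(f"{label}: {effective_throughput(rate, per):.0f} Mb/s effective")
# A higher-rate constellation can keep effective throughput near the 480 Mb/s
# target even when a moderate fraction of packets is dropped.
```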
Abstract:
A rapid thiolytic degradation and cleanup procedure was developed for analyzing tannins directly in chlorophyll-containing sainfoin (Onobrychis viciifolia) plants. The technique proved suitable for complex tannin mixtures containing catechin, epicatechin, gallocatechin, and epigallocatechin flavan-3-ol units. The reaction time was standardized at 60 min to minimize the loss of structural information as a result of epimerization and degradation of terminal flavan-3-ol units. The results were evaluated by separate analysis of extractable and unextractable tannins, which accounted for 63.6–113.7% of the in situ plant tannins. It is of note that 70% aqueous acetone extracted tannins with a lower mean degree of polymerization (mDP) than was found for tannins analyzed in situ; extractable tannins had mDP values that were between 4 and 29 units lower. The method was validated by comparing results from individual and mixed sample sets. The tannin composition of different sainfoin accessions covered a range of mDP values from 16 to 83, procyanidin/prodelphinidin (PC/PD) ratios from 19.2/80.8 to 45.6/54.4, and cis/trans ratios from 74.1/25.9 to 88.0/12.0. This is the first high-throughput screening method that is suitable for analyzing condensed tannin contents and structural composition directly in green plant tissue.
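For readers unfamiliar with the thiolysis read-out, the mean degree of polymerization is conventionally computed from the molar amounts of terminal units (released as free flavan-3-ols) and extension units (released as benzyl thioether adducts); the formula below follows that convention, while the example quantities are hypothetical and not taken from the study.

```python
# mDP and PC/PD ratio from hypothetical thiolysis quantities (micromoles).
terminal_units  = {"catechin": 0.8, "epicatechin": 1.2,
                   "gallocatechin": 0.5, "epigallocatechin": 1.0}
extension_units = {"catechin": 10.0, "epicatechin": 35.0,
                   "gallocatechin": 18.0, "epigallocatechin": 45.0}

total_terminal = sum(terminal_units.values())
total_units = total_terminal + sum(extension_units.values())

mdp = total_units / total_terminal                      # mean degree of polymerization
pc = sum(terminal_units[k] + extension_units[k] for k in ("catechin", "epicatechin"))
pd = total_units - pc                                   # prodelphinidin-type units

print(f"mDP   = {mdp:.1f}")
print(f"PC/PD = {100 * pc / total_units:.1f}/{100 * pd / total_units:.1f}")
```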