914 results for Saturated throughput
Abstract:
A semi-automated, immunomagnetic capture-reverse transcription PCR (IMC-RT-PCR) assay for the detection of three pineapple-infecting ampeloviruses, Pineapple mealybug wilt-associated virus-1, -2 and -3, is described. The assay was equivalent in sensitivity to, but more rapid than, conventional immunocapture RT-PCR. The assay can be used as either a one- or two-step RT-PCR and allows detection of the viruses separately or together in a triplex assay from fresh, frozen or freeze-dried pineapple leaf tissue. This IMC-RT-PCR assay could be used for high-throughput screening of pineapple planting propagules and could easily be modified for the detection of other RNA viruses in a range of plant species, provided suitable antibodies are available.
Abstract:
Marker ordering during linkage map construction is a critical component of QTL mapping research. In recent years, high-throughput genotyping methods have become widely used, and these methods may generate hundreds of markers for a single mapping population. This poses problems for linkage analysis software because the number of possible marker orders increases exponentially as the number of markers increases. In this paper, we tested the accuracy of linkage analyses on simulated recombinant inbred line data using the commonly used Map Manager QTX (Manly et al. 2001: Mammalian Genome 12, 930-932) software and RECORD (Van Os et al. 2005: Theoretical and Applied Genetics 112, 30-40). Accuracy was measured by calculating two scores: % correct marker positions, and a novel, weighted rank-based score derived from the sum of absolute values of true minus observed marker ranks divided by the total number of markers. The accuracy of maps generated using Map Manager QTX was considerably lower than that of maps generated using RECORD. Differences in linkage maps were often observed when marker ordering was performed several times using the identical dataset. To test the effect of reducing marker numbers on the stability of marker order, we pruned marker datasets, focusing on regions consisting of tightly linked clusters of markers, which included redundant markers. Marker pruning improved the accuracy and stability of linkage maps because a single unambiguous marker order was produced that was consistent across replications of analysis. Marker pruning was also applied to a real barley mapping population, and QTL analysis was performed using the different map versions produced by the different programs. While some QTLs were identified with both map versions, there were large differences in QTL mapping results. Differences included maximum LOD and R² values at QTL peaks and map positions, thus highlighting the importance of marker order for QTL mapping.
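The rank-based accuracy score described in the abstract can be sketched in a few lines. This is a minimal illustration; the function name and the toy marker labels are invented here, not taken from the paper:

```python
def marker_order_scores(true_order, observed_order):
    """Score an observed marker order against the true order.

    Returns (% correct marker positions, weighted rank-based score), where
    the rank-based score is the sum of |true rank - observed rank| over all
    markers, divided by the number of markers (0 = perfect ordering).
    """
    n = len(true_order)
    true_rank = {m: i for i, m in enumerate(true_order)}
    obs_rank = {m: i for i, m in enumerate(observed_order)}
    pct_correct = 100.0 * sum(true_rank[m] == obs_rank[m] for m in true_order) / n
    rank_score = sum(abs(true_rank[m] - obs_rank[m]) for m in true_order) / n
    return pct_correct, rank_score

# Example: one pair of adjacent markers swapped
pct, score = marker_order_scores(["m1", "m2", "m3", "m4"],
                                 ["m1", "m3", "m2", "m4"])
# pct = 50.0 (two of four markers in the correct position), score = 0.5
```

Unlike the plain % correct positions score, the rank-based score penalises a marker placed far from its true position more heavily than a local swap, which is why the two scores can disagree on which map is better.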
Abstract:
Analytical solutions of partial differential equation (PDE) models describing reactive transport phenomena in saturated porous media are often used as screening tools to provide insight into contaminant fate and transport processes. While many practical modelling scenarios involve spatially variable coefficients, such as spatially variable flow velocity, v(x), or spatially variable decay rate, k(x), most analytical models deal with constant coefficients. Here we present a framework for constructing exact solutions of PDE models of reactive transport. Our approach is relevant for advection-dominant problems, and is based on a regular perturbation technique. We present a description of the solution technique for a range of one-dimensional scenarios involving constant and variable coefficients, and we show that the solutions compare well with numerical approximations. Our general approach applies to a range of initial conditions and various forms of v(x) and k(x). Instead of simply documenting specific solutions for particular cases, we present a symbolic worksheet, as supplementary material, which enables the solution to be evaluated for different choices of the initial condition, v(x) and k(x). We also discuss how the technique generalizes to apply to models of coupled multispecies reactive transport as well as higher dimensional problems.
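To make the perturbation idea concrete (generic notation, assumed here for illustration rather than taken from the paper): write the dispersive term with a small parameter ε, expand the concentration as c = c₀ + εc₁ + …, and collect powers of ε. Each order is then a first-order advection-reaction problem:

```latex
% Illustrative regular perturbation expansion for 1D reactive transport
% with variable velocity v(x) and decay rate k(x); \varepsilon is a small
% dimensionless dispersion (inverse Peclet) parameter.
\begin{align*}
  \frac{\partial c}{\partial t} + v(x)\frac{\partial c}{\partial x}
    &= \varepsilon\,\frac{\partial^{2} c}{\partial x^{2}} - k(x)\,c,
    \qquad c = c_{0} + \varepsilon\,c_{1} + \mathcal{O}(\varepsilon^{2}),\\
  \mathcal{O}(1):\qquad
  \frac{\partial c_{0}}{\partial t} + v(x)\frac{\partial c_{0}}{\partial x} + k(x)\,c_{0} &= 0,\\
  \mathcal{O}(\varepsilon):\qquad
  \frac{\partial c_{1}}{\partial t} + v(x)\frac{\partial c_{1}}{\partial x} + k(x)\,c_{1}
    &= \frac{\partial^{2} c_{0}}{\partial x^{2}}.
\end{align*}
```

The leading-order problem is solved along the characteristics dx/dt = v(x); the first-order correction sees the dispersive flux of c₀ as a source term, which is why the approach suits advection-dominant problems.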
Abstract:
In the proposed scheme, instead of waiting for the acknowledgments for all copies of a single data block, as in the optimum generalised stop-and-wait ARQ scheme, the transmitter starts sending an optimum number of copies of the next block in the queue as soon as it receives a positive acknowledgment from the receiver, thereby further improving the throughput efficiency.
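A rough way to see where the "optimum number of copies" comes from, using a simplified model that is not the paper's exact analysis (the parameters p, per-copy loss probability, and a, round-trip wait in slot units, are illustrative):

```python
def gsw_throughput(m, p, a):
    """Throughput efficiency of generalised stop-and-wait ARQ sending m
    copies per block: the block fails only if all m copies are lost
    (probability p**m), and each attempt occupies m transmission slots
    plus a slots of round-trip waiting, so the efficiency per slot is
    (1 - p**m) / (m + a).
    """
    return (1.0 - p**m) / (m + a)

def optimal_copies(p, a, m_max=50):
    """Copy count maximising the throughput efficiency above."""
    return max(range(1, m_max + 1), key=lambda m: gsw_throughput(m, p, a))

# With a 30% per-copy loss rate and a 10-slot round trip,
# sending two copies per block is optimal in this toy model:
m_star = optimal_copies(p=0.3, a=10)  # m_star == 2
```

The scheme described above improves on this baseline by launching the next block's copies as soon as the first positive acknowledgment arrives, rather than waiting out the full round trip, effectively shrinking the idle term a.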
Critical Evaluation of Determining Swelling Pressure by Swell-Load Method and Constant Volume Method
Abstract:
For any construction activity in expansive soils, determination of swelling pressure/heave is an essential step. Though many attempts have been made to develop laboratory procedures using the one-dimensional oedometer to determine the swelling pressure of expansive soils, they are reported to yield varying results. The main reason for these variations could be heterogeneous moisture distribution over the thickness of the sample. To overcome this variation, the experimental procedure should be such that the soil becomes fully saturated. Attempts were made to introduce vertical sand drains in addition to the top and bottom drains. In this study, five and nine vertical sand drains were introduced to experimentally determine the variations in swell and swelling pressure. The variations in moisture content at the middle, top, and bottom of the sample in the oedometer test are also reported. It is found that the swell-load method performs better than the zero-swell method. Further, five vertical sand drains are found to be sufficient to obtain a uniform moisture content distribution.
Abstract:
The basic goal of a proteomic microchip is to achieve efficient and sensitive high-throughput protein analyses, automatically carrying out several measurements in parallel. A protein microchip would either detect a single protein or a large set of proteins for diagnostic purposes, basic proteome analysis or functional analysis. Such analyses include, for example, interactomics, general protein expression studies, and the detection of structural alterations or secondary modifications. Visualization of the results may occur by simple immunoreactions, general or specific labelling, or mass spectrometry. For this purpose we have manufactured chip-based proteome analysis devices that utilize classical polymer gel electrophoresis technology to run one- and two-dimensional gel electrophoresis separations of proteins at a smaller scale. In total, we manufactured three functional prototypes, of which one performed a miniaturized one-dimensional gel electrophoresis (1-DE) separation, while the second and third performed two-dimensional gel electrophoresis (2-DE) separations. These microchips were successfully used to separate and characterize a set of predefined standard proteins, as well as cell and tissue samples. The miniaturized 2-DE (ComPress-2DE) chip also presents a novel way of combining the 1st- and 2nd-dimensional separations, thus avoiding manual handling of the gels, eliminating cross-contamination, making analyses faster, and improving repeatability. All three showed the advantages of miniaturization over commercial devices, such as fast analysis, low sample and reagent consumption, high sensitivity, high repeatability and low cost. All these instruments have the potential to be fully automated owing to their easy-to-use set-up.
Abstract:
The central nervous system (CNS) is the most cholesterol-rich organ in the body. Cholesterol is essential to CNS functions such as synaptogenesis and formation of myelin. Significant differences exist in cholesterol metabolism between the CNS and the peripheral organs. However, the regulation of cholesterol metabolism in the CNS is poorly understood compared to our knowledge of the regulation of cholesterol homeostasis in organs reached by cholesterol-carrying lipoprotein particles in the circulation. Defects in CNS cholesterol homeostasis have been linked to a variety of neurodegenerative diseases, including common diseases with complex pathogenetic mechanisms such as Alzheimer's disease. In spite of intense effort, the mechanisms which link disturbed cholesterol homeostasis to these diseases remain elusive. We used three inherited recessive neurodegenerative disorders as models in the studies included in this thesis: Niemann-Pick type C (NPC), infantile neuronal ceroid lipofuscinosis and cathepsin D deficiency. Of these three, NPC has previously been linked to disturbed intracellular cholesterol metabolism. Elucidating the mechanisms by which disturbances of cholesterol homeostasis link to neurodegeneration in recessive inherited disorders with known genetic lesions should shed light on how cholesterol is handled in the healthy CNS and help to understand how these and more complex diseases develop. In the first study we analyzed the synthesis of sterols and the assembly and secretion of lipoprotein particles in Npc1 deficient primary astrocytes. We found that both wild type and Npc1 deficient astrocytes retain significant amounts of desmosterol and other cholesterol precursor sterols as membrane constituents. No difference was observed in the synthesis of sterols and the secretion of newly synthesized sterols between Npc1 wild type, heterozygote or knockout astrocytes.
We found that the incorporation of newly synthesized sterols into secreted lipoprotein particles was not inhibited by Npc1 mutation, and the lipoprotein particles were similar to those excreted by wild type astrocytes in shape and size. The bulk of cholesterol was found to be secreted independently of secreted NPC2. These observations demonstrate the ability of Npc1 deficient astrocytes to handle de novo sterols, and highlight the unique sterol composition in the developing brain. Infantile neuronal ceroid lipofuscinosis is caused by the deficiency of a functional Ppt1 enzyme in the cells. In the second study, global gene expression studies of approximately 14000 mouse genes showed significant changes in the expression of 135 genes in Ppt1 deficient neurons compared to wild type. Several genes encoding enzymes of the mevalonate pathway of cholesterol biosynthesis showed increased expression. As predicted by the expression data, sterol biosynthesis was found to be upregulated in the knockout neurons. These data link Ppt1 deficiency to disturbed cholesterol metabolism in CNS neurons. In the third study we investigated the effect of cathepsin D deficiency on the structure of myelin and lipid homeostasis in the brain. Our proteomics data, immunohistochemistry and western blotting data showed altered levels of the myelin protein components myelin basic protein, proteolipid protein and 2',3'-cyclic nucleotide 3'-phosphodiesterase in the brains of cathepsin D deficient mice. Electron microscopy revealed altered myelin structure in cathepsin D deficient brains. Additionally, plasmalogen-derived alkenyl chains and 20- and 24-carbon saturated and monounsaturated fatty acids typical of glycosphingolipids were found to be significantly reduced, but polyunsaturated species were significantly increased in the knockout brains, pointing to a decrease in white matter.
The levels of ApoE and ABCA1 proteins linked to cholesterol efflux in the CNS were found to be altered in the brains of cathepsin D deficient mice, along with an accumulation of cholesteryl esters and a decrease in triacylglycerols. Together these data demonstrate altered myelin architecture in cathepsin D deficient mice and link cathepsin D deficiency to aberrant cholesterol metabolism and trafficking. Basic research into rare monogenic diseases sheds light on the underlying biological processes which are perturbed in these conditions and contributes to our understanding of the physiological function of healthy cells. Eventually, understanding gained from the study of disease models may contribute towards establishing treatment for these disorders and further our understanding of the pathogenesis of other, more complex and common diseases.
Abstract:
We review here classical Bogomolnyi bounds, and their generalisation to supersymmetric quantum field theories by Witten and Olive. We also summarise some recent work by several people on whether such bounds are saturated in the quantised theory.
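For concreteness, the classical Bogomolnyi completion in its simplest setting, a static scalar kink in 1+1 dimensions (standard textbook material, with notation assumed here for illustration): defining a superpotential W(φ) with W'(φ) = √(2V(φ)), the energy can be rewritten as a perfect square plus a boundary term,

```latex
\begin{align*}
  E &= \int_{-\infty}^{\infty} dx\,
        \left[ \tfrac{1}{2}\,\phi'^{\,2} + V(\phi) \right]
     = \int_{-\infty}^{\infty} dx\,
        \tfrac{1}{2}\left( \phi' \mp \sqrt{2V(\phi)} \right)^{2}
        \pm \Bigl[ W(\phi) \Bigr]_{x=-\infty}^{x=+\infty} \\
    &\ge \left| W(\phi(+\infty)) - W(\phi(-\infty)) \right|.
\end{align*}
```

The bound is saturated exactly when the first-order BPS equation φ' = ±√(2V(φ)) holds. In the supersymmetric extension of Witten and Olive the right-hand side becomes the central charge, M ≥ |Z|, and the question reviewed here is whether saturation survives in the quantised theory.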
Abstract:
This project aims to reduce production costs for high-quality pork through understanding how commercial processing conditions affect mill throughput, processing energy efficiency, product durability and the nutritional value of pig feed.
Abstract:
Background: The hot dog fold has been found in more than sixty proteins since the first report of its existence about a decade ago. The fold appears to have a strong association with fatty acid biosynthesis, its regulation and metabolism, as the proteins with this fold are predominantly coenzyme A-binding enzymes with a variety of substrates located at their active sites. Results: We have analyzed the structural features and sequences of proteins having the hot dog fold. This study reveals that though the basic architecture of the fold is well conserved in these proteins, significant differences exist in their sequence, nature of substrate and oligomerization. Segments with certain conserved sequence motifs seem to play crucial structural and functional roles in various classes of these proteins. Conclusion: The analysis led to predictions regarding the functional classification and identification of possible catalytic residues of a number of hot dog fold-containing hypothetical proteins whose structures were determined in high throughput structural genomics projects.
Abstract:
Analytical models of IEEE 802.11-based WLANs are invariably based on approximations, such as the well-known mean-field approximations proposed by Bianchi for saturated nodes. In this paper, we provide a new approach for modeling the situation when the nodes are not saturated. We study a State Dependent Attempt Rate (SDAR) approximation to model M queues (one queue per node) served by the CSMA/CA protocol as standardized in the IEEE 802.11 DCF. The approximation is that, when n of the M queues are non-empty, the attempt probability of the n non-empty nodes is given by the long-term attempt probability of n saturated nodes as provided by Bianchi's model. This yields a coupled queue system. When packets arrive to the M queues according to independent Poisson processes, we provide an exact model for the coupled queue system with SDAR service. The main contribution of this paper is to provide an analysis of the coupled queue process by studying a lower dimensional process and by introducing a certain conditional independence approximation. We show that the numerical results obtained from our finite buffer analysis are in excellent agreement with the corresponding results obtained from ns-2 simulations. We replace the CSMA/CA protocol as implemented in the ns-2 simulator with the SDAR service model to show that the SDAR approximation provides an accurate model for the CSMA/CA protocol. We also report the simulation speed-ups thus obtained by our model-based simulation.
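The SDAR idea can be sketched as follows: Bianchi's fixed point gives the long-term attempt probability of n saturated nodes, and SDAR applies that probability to whichever n of the M nodes currently have non-empty queues. This is a minimal sketch; the default backoff parameters W and m are illustrative, and damped iteration is just one way to solve the fixed point:

```python
def bianchi_attempt_prob(n, W=32, m=5, iters=10000, tol=1e-12):
    """Long-run attempt probability of n saturated 802.11 DCF nodes under
    Bianchi's mean-field model (W = minimum contention window, m = maximum
    backoff stage), found by damped fixed-point iteration of

        tau = 2(1-2p) / ((1-2p)(W+1) + p*W*(1-(2p)**m))
        p   = 1 - (1-tau)**(n-1)
    """
    tau = 0.1
    for _ in range(iters):
        p = 1.0 - (1.0 - tau) ** (n - 1)
        new_tau = (2.0 * (1.0 - 2.0 * p)
                   / ((1.0 - 2.0 * p) * (W + 1) + p * W * (1.0 - (2.0 * p) ** m)))
        if abs(new_tau - tau) < tol:
            break
        tau = 0.5 * tau + 0.5 * new_tau  # damped update for stable convergence
    return tau

# SDAR: when n of the M queues are non-empty, each non-empty node attempts
# with the saturated probability for n nodes, e.g. precomputed as a table:
attempt = {n: bianchi_attempt_prob(n) for n in range(1, 11)}
```

The attempt probability decreases with n (more contenders means more collisions and longer backoffs), which is what makes the state-dependent service rate in the coupled queue model nontrivial.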
Abstract:
In our earlier work ([1]) we proposed WLAN Manager (WM), a centralised controller for QoS management of infrastructure WLANs based on the IEEE 802.11 DCF standards. The WM approach is based on queueing and scheduling packets in a device that sits in the path of all traffic flowing between the APs and the wireline LAN; it requires no changes to the APs or the STAs, and can be viewed as implementing a "Split-MAC" architecture. The objectives of WM were to manage various TCP performance related issues (such as the throughput "anomaly" when STAs associate with an AP at mixed PHY rates, and the upload-download unfairness induced by finite AP buffers), and also to serve as the controller for VoIP admission control and handovers, and for other QoS management measures. In this paper we report our experiences in implementing the proposals in [1]: the insights gained, the new control techniques developed, and the effectiveness of the WM approach in managing TCP performance in an infrastructure WLAN. We report results from a hybrid experiment in which a physical WM manages actual TCP-controlled packet flows between a server and clients, with the WLAN being simulated, and also from a small physical testbed with an actual AP.
Abstract:
The effect of the lime:silica ratio on the kinetics of the reaction of silica with saturated lime has been investigated. Below C/S = 0.65 the reaction does not proceed to completion, and even in the presence of a large excess of silica only 90% of the lime is consumed. A parameter, the lime reactivity index, has been defined to quantify the reactive silica present in rice husk ash. The product of the reaction between rice husk ash and saturated lime is a calcium hydrosilicate, C-S-H(I). The fibrillar structure and the hollow tubular morphology of the C-S-H fibres have been explained by a growth mechanism in which the driving force is osmotic pressure.
Abstract:
Type 2 diabetes is an increasing, serious, and costly public health problem. The increase in the prevalence of the disease can mainly be attributed to changing lifestyles leading to physical inactivity, overweight, and obesity. These lifestyle-related risk factors also offer a possibility for preventive interventions. Until recently, proper evidence regarding the prevention of type 2 diabetes has been virtually missing. To be cost-effective, intensive interventions to prevent type 2 diabetes should be directed to people at an increased risk of the disease. The aim of this series of studies was to investigate whether type 2 diabetes can be prevented by lifestyle intervention in high-risk individuals, and to develop a practical method to identify individuals who are at high risk of type 2 diabetes and would benefit from such an intervention. To study the effect of lifestyle intervention on diabetes risk, we recruited 522 volunteer, middle-aged (aged 40-64 at baseline), overweight (body mass index > 25 kg/m²) men (n = 172) and women (n = 350) with impaired glucose tolerance to the Diabetes Prevention Study (DPS). The participants were randomly allocated either to the intensive lifestyle intervention group or the control group. The control group received general dietary and exercise advice at baseline, and had an annual physician's examination. The participants in the intervention group received, in addition, individualised dietary counselling by a nutritionist. They were also offered circuit-type resistance training sessions and were advised to increase overall physical activity. The intervention goals were to reduce body weight (5% or more reduction from baseline weight), limit dietary fat (< 30% of total energy consumed) and saturated fat (< 10% of total energy consumed), and to increase dietary fibre intake (15 g / 1000 kcal or more) and physical activity (≥ 30 minutes/day). Diabetes status was assessed annually by repeated 75 g oral glucose tolerance testing.
The first analysis of end-points was completed after a mean follow-up of 3.2 years, and the intervention phase was terminated after a mean duration of 3.9 years. After that, the study participants continued to visit the study clinics for the annual examinations, for a mean of 3 years. The intervention group showed significantly greater improvement in each intervention goal. After 1 and 3 years, mean weight reductions were 4.5 and 3.5 kg in the intervention group and 1.0 kg and 0.9 kg in the control group. Cardiovascular risk factors improved more in the intervention group. After a mean follow-up of 3.2 years, the risk of diabetes was reduced by 58% in the intervention group compared with the control group. The reduction in the incidence of diabetes was directly associated with achieved lifestyle goals. Furthermore, those who consumed a moderate-fat, high-fibre diet achieved the largest weight reduction and, even after adjustment for weight reduction, the lowest diabetes risk during the intervention period. After discontinuation of the counselling, the differences in lifestyle variables between the groups still remained favourable for the intervention group. During the post-intervention follow-up period of 3 years, the risk of diabetes was still 36% lower among the former intervention group participants, compared with the former control group participants. To develop a simple screening tool to identify individuals who are at high risk of type 2 diabetes, follow-up data from two population-based cohorts of 35- to 64-year-old men and women were used. The National FINRISK Study 1987 cohort (model development data) included 4435 subjects, with 182 new drug-treated cases of diabetes identified during ten years, and the FINRISK Study 1992 cohort (model validation data) included 4615 subjects, with 67 new cases of drug-treated diabetes during five years, ascertained using the Social Insurance Institution's Drug register.
Baseline age, body mass index, waist circumference, history of antihypertensive drug treatment and high blood glucose, physical activity and daily consumption of fruits, berries or vegetables were selected into the risk score as categorical variables. In the 1987 cohort the optimal cut-off point of the risk score identified 78% of those who developed diabetes during the follow-up (= sensitivity of the test) and 77% of those who remained free of diabetes (= specificity of the test). In the 1992 cohort the risk score performed equally well. The final Finnish Diabetes Risk Score (FINDRISC) form includes, in addition to the predictors of the model, a question about family history of diabetes and the age category of over 64 years. When applied to the DPS population, the baseline FINDRISC value was associated with diabetes risk among the control group participants only, indicating that the intensive lifestyle intervention given to the intervention group participants abolished the diabetes risk associated with baseline risk factors. In conclusion, the intensive lifestyle intervention produced long-term beneficial changes in diet, physical activity, body weight, and cardiovascular risk factors, and reduced diabetes risk. Furthermore, the effects of the intervention were sustained after the intervention was discontinued. The FINDRISC proved to be a simple, fast, inexpensive, non-invasive, and reliable tool to identify individuals at high risk of type 2 diabetes. The use of FINDRISC to identify high-risk subjects, followed by lifestyle intervention, provides a feasible scheme in preventing type 2 diabetes, which could be implemented in the primary health care system.
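The sensitivity and specificity figures quoted for the risk-score cut-off can be illustrated directly. The scores and outcomes below are invented toy data, not the FINRISK cohorts:

```python
def sensitivity_specificity(scores, has_diabetes, cutoff):
    """Sensitivity and specificity of classifying score >= cutoff as
    'high risk', against observed outcomes (True = developed diabetes).

    sensitivity = fraction of cases correctly flagged as high risk
    specificity = fraction of non-cases correctly flagged as low risk
    """
    cases = [s for s, d in zip(scores, has_diabetes) if d]
    noncases = [s for s, d in zip(scores, has_diabetes) if not d]
    sens = sum(s >= cutoff for s in cases) / len(cases)
    spec = sum(s < cutoff for s in noncases) / len(noncases)
    return sens, spec

# Toy data: risk scores and whether each subject developed diabetes
scores = [3, 12, 9, 15, 5, 11, 2, 14]
outcome = [False, True, False, True, False, True, False, False]
sens, spec = sensitivity_specificity(scores, outcome, cutoff=10)
# sens = 1.0 (all 3 cases scored >= 10), spec = 0.8 (4 of 5 non-cases below 10)
```

Choosing the cut-off trades these two quantities off against each other, which is why the abstract reports an "optimal" cut-off point rather than a universal one.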
Abstract:
We study sensor networks with energy harvesting nodes. The energy generated at a node can be stored in a buffer. A sensor node periodically senses a random field and generates a packet. These packets are stored in a queue and transmitted using the energy available at the node at that time. For such networks we develop efficient energy management policies. First, for a single node, we obtain policies that are throughput optimal, i.e., under which the data queue stays stable for the largest possible data rate. Next, we obtain energy management policies which minimize the mean delay in the queue. We also compare the performance of several easily implementable suboptimal policies. A greedy policy is identified which, in the low SNR regime, is throughput optimal and also minimizes mean delay. Finally, using the results for a single node, we develop efficient MAC policies.
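A minimal simulation sketch of the greedy policy for a single node in the low-SNR regime. All names, the slot structure, the rates, and the linear energy-to-bits assumption are illustrative, not the paper's exact model:

```python
import random

def simulate_greedy(T=50000, harvest_rate=1.0, arrival_rate=0.8, seed=0):
    """Simulate one energy-harvesting node under the greedy policy: in each
    slot the node spends as much of its stored energy as the data queue can
    use. In the low-SNR regime the bits transmitted are roughly proportional
    to the energy spent (proportionality constant set to 1 here).

    Returns the time-average data queue length; when the mean data arrival
    rate is below the mean energy harvest rate, the queue stays stable.
    """
    rng = random.Random(seed)
    energy, queue, total_q = 0.0, 0.0, 0.0
    for _ in range(T):
        energy += rng.expovariate(1.0 / harvest_rate)  # harvested this slot
        queue += rng.expovariate(1.0 / arrival_rate)   # sensed data (bits)
        served = min(queue, energy)  # greedy: spend all the energy the queue can use
        queue -= served
        energy -= served             # unused energy stays in the buffer
        total_q += queue
    return total_q / T
```

With harvest_rate above arrival_rate, surplus energy accumulates in the buffer, so after a transient the queue drains almost every slot; this is the sense in which the greedy policy is throughput optimal in this regime.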