A highly efficient and consistent method for harvesting large volumes of high-titre lentiviral vectors
Abstract:
Lentiviral vectors pseudotyped with vesicular stomatitis virus glycoprotein (VSV-G) are emerging as the vectors of choice for in vitro and in vivo gene therapy studies. However, the current method for harvesting lentivectors relies upon ultracentrifugation at 50 000 g for 2 h. At this ultra-high speed, rotors currently in use generally have a small volume capacity, so preparing large volumes of high-titre vectors is time-consuming and laborious. In the present study, viral vector supernatant harvests from vector-producing cells (VPCs) were pre-treated with various amounts of poly-L-lysine (PLL) and concentrated by low-speed centrifugation. Optimal conditions were established when 0.005% PLL (w/v) was added to vector supernatant harvests, followed by incubation for 30 min and centrifugation at 10 000 g for 2 h at 4°C. Direct comparison with ultracentrifugation demonstrated that the new method consistently produced larger volumes (6 ml) of high-titre viral vector at 1 × 10^8 transduction units (TU)/ml (from about 3000 ml of supernatant) in one round of concentration. Electron microscopic analysis showed that PLL and viral vectors formed complexes, which probably facilitated precipitation at the low-speed concentration step (10 000 g), a speed that does not usually precipitate viral particles efficiently. Transfection of several cell lines in vitro and transduction in vivo in the liver with the lentivector/PLL complexes demonstrated efficient gene transfer without any significant signs of toxicity. These results suggest that the new method provides a convenient means for harvesting large volumes of high-titre lentivectors, facilitating gene therapy experiments in large animals or human gene therapy trials, in which large amounts of lentiviral vectors are a prerequisite.
Abstract:
A new strategy has been developed for the rapid synthesis of peptide para-nitroanilides (pNA). The method involves derivatization of commercially available tritylchloride resin (TCP-resin) with 1,4-phenylenediamine, subsequent coupling with the desired amino acids by the standard Fmoc protocol, and oxidation of the intermediate para-aminoanilides (pAA) with Oxone®. This procedure allows easy assembly of the desired para-aminoanilides (pAA) on standard resin and efficient oxidation and purification of the corresponding para-nitroanilides (pNA). The method allows easy access to any desired peptide para-nitroanilide, which are useful substrates for the characterization and study of proteolytic enzymes.
Abstract:
The two-node tandem Jackson network serves as a convenient reference model for the analysis and testing of different methodologies and techniques in rare event simulation. In this paper we consider a new approach to efficiently estimate the probability that the content of the second buffer exceeds some high level L before it becomes empty, starting from a given state. The approach is based on a Markov additive process representation of the buffer processes, leading to an exponential change of measure to be used in an importance sampling procedure. Unlike changes of measure proposed and studied in the recent literature, the one derived here is a function of the content of the first buffer. We prove that when the first buffer is finite, this method yields asymptotically efficient simulation for any set of arrival and service rates. In fact, the relative error is bounded independently of the level L, a new result that has not been established for any other known method. When the first buffer is infinite, we propose a natural extension of the exponential change of measure for the finite buffer case. In this case, the relative error is shown to be bounded (independently of L) only when the second server is the bottleneck, a result which is known to hold for some other methods derived through large deviations analysis. When the first server is the bottleneck, experimental results using our method suggest that the relative error grows at most linearly in L.
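The quantity being estimated can be sketched as plain Monte Carlo on the two-node tandem: simulate the embedded jump chain and count runs in which the second buffer reaches level L before emptying. The rates and starting state below are illustrative, and the sketch deliberately omits the paper's actual contribution, namely the importance-sampling change of measure that depends on the content of the first buffer:

```python
import random

def overflow_prob(lam, mu1, mu2, L, start=(0, 1), n_runs=20000, seed=0):
    """Crude Monte Carlo estimate of P(second buffer reaches L before
    emptying) in a two-node tandem Jackson network, starting from
    (q1, q2) = start. The paper replaces the raw rates below with an
    exponentially tilted measure; this is only the baseline estimator."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_runs):
        q1, q2 = start
        while 0 < q2 < L:
            rates = [lam, mu1 if q1 > 0 else 0.0, mu2]
            u = rng.random() * sum(rates)
            if u < rates[0]:
                q1 += 1                  # external arrival at node 1
            elif u < rates[0] + rates[1]:
                q1 -= 1; q2 += 1         # service at node 1 feeds node 2
            else:
                q2 -= 1                  # departure from node 2
        hits += (q2 >= L)
    return hits / n_runs
```

For rare levels L this naive estimator needs an impractical number of runs, which is exactly what the state-dependent change of measure addresses.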
Abstract:
We prove that the simple group L3(5), which has order 372,000, is efficient by providing an efficient presentation for it. This leaves one simple group with order less than one million, S4(4), which has order 979,200, whose efficiency or otherwise remains to be determined.
Abstract:
An efficient Lanczos subspace method has been devised for calculating state-to-state reaction probabilities. The method recasts the time-independent wave packet Lippmann-Schwinger equation [Kouri, Chem. Phys. Lett. 203, 166 (1993)] inside a tridiagonal (Lanczos) representation in which the action of the causal Green's operator is effected easily with a QR algorithm. The method is designed to yield all state-to-state reaction probabilities from a given reactant-channel wave packet using a single Lanczos subspace; the spectral properties of the tridiagonal Hamiltonian allow calculations to be undertaken at arbitrary energies within the spectral range of the initial wave packet. The method is applied to the H + O2 system (J = 0), and the results indicate the approach is accurate and stable. (C) 2002 American Institute of Physics.
Abstract:
The Lanczos algorithm is appreciated in many situations due to its speed and economy of storage. However, the advantage that the Lanczos basis vectors need not be kept is lost when the algorithm is used to compute the action of a matrix function on a vector. Either the basis vectors must be kept, or the Lanczos process must be applied twice. In this study we describe an augmented Lanczos algorithm to compute a dot product relative to a function of a large sparse symmetric matrix, without keeping the basis vectors.
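For the special case of the quadratic form v^T f(A) v, the standard short-recurrence Lanczos process already avoids storing the basis: only the small tridiagonal matrix is needed. A minimal sketch of that case (the paper's augmented algorithm extends the idea to a general dot product u^T f(A) v):

```python
import numpy as np

def lanczos_quadratic_form(A, v, f, m=30):
    """Approximate v^T f(A) v for symmetric A using m Lanczos steps,
    keeping only three vectors at a time (no basis storage).
    Uses the identity v^T f(A) v ~ ||v||^2 * e1^T f(T_m) e1."""
    beta0 = np.linalg.norm(v)
    q_prev = np.zeros_like(v)
    q = v / beta0
    alphas, betas = [], []
    beta = 0.0
    for _ in range(m):
        w = A @ q - beta * q_prev        # three-term recurrence
        alpha = q @ w
        w -= alpha * q
        beta = np.linalg.norm(w)
        alphas.append(alpha)
        betas.append(beta)
        if beta < 1e-14:                 # invariant subspace found
            break
        q_prev, q = q, w / beta
    k = len(alphas)
    T = np.diag(alphas) + np.diag(betas[:k - 1], 1) + np.diag(betas[:k - 1], -1)
    evals, evecs = np.linalg.eigh(T)
    e1 = evecs[0, :]                     # first row = e1^T projected
    return beta0 ** 2 * float(e1 @ (f(evals) * e1))
```

With m equal to the matrix dimension this reproduces the exact value up to rounding; in practice m is kept far smaller than the dimension of A.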
Abstract:
A more efficient classifying cyclone (CC) for fine particle classification has been developed in recent years at the JKMRC. The novel CC, known as the JKCC, has modified profiles of the cyclone body, vortex finder, and spigot compared with conventional hydrocyclones. The novel design increases the centrifugal force inside the cyclone and mitigates the short-circuiting flow that exists in all current cyclones. It also decreases the probability of particle contamination near the cyclone spigot. Consequently the cyclone efficiency is improved while the unit maintains a simple structure. An international patent has been granted for this novel cyclone design. In the first development stage (a feasibility study), a 100 mm JKCC was tested and compared with two 100 mm commercial units. Very encouraging results were achieved, indicating good potential for the novel design. In the second development stage (scale-up), the JKCC was scaled up to 200 mm in diameter, and its geometry was optimized through numerous tests. The performance of the JKCC was compared with a 150 mm commercial unit and exhibited sharper separation, finer separation size, and lower flow ratios. The JKCC is now being scaled up into a full-size (480 mm) hydrocyclone in the third development stage (an industrial study). The 480 mm diameter unit will be tested in an Australian coal preparation plant and directly compared with a commercial CC operating under the same conditions. Classifying cyclone performance for fine coal could be further improved if the unit is installed in an inclined position. The study using the 200 mm JKCC revealed that the sharpness of separation improved and the flow ratio to underflow decreased by 43% as the cyclone inclination was varied from the vertical position (0°) to the horizontal position (90°). The separation size was not affected, although the feed rate was slightly decreased.
To ensure self-emptying upon shutdown, it is recommended that the JKCC be installed at an inclination of 75-80°. At this angle the cyclone performance is very similar to that at the horizontal position. Similar findings have been derived from the testing of a conventional hydrocyclone. This may be of benefit to operations that require improved performance from their classifying cyclones in terms of sharpness of separation and flow ratio, while tolerating a slightly reduced feed rate.
Abstract:
Background: In the presence of dNTPs, intact HIV-1 virions are capable of reverse transcribing at least part of their genome, a process known as natural endogenous reverse transcription (NERT). PCR analysis of virion DNA produced by NERT revealed that the first strand transfer reaction (1stST) was inefficient in intact virions, with minus strand (-) strong stop DNA (ssDNA) copy numbers up to 200 times higher than post-1stST products measured using primers in U3 and U5. This was in marked contrast to the efficiency of 1stST observed in single-round cell infection assays, in which (-) ssDNA and U3-U5 copy numbers were indistinguishable. Objectives: To investigate the reasons for the discrepancy in first strand transfer efficiency between intact cell-free virus and the infection process. Study design: Alterations of both NERT reactions and the conditions of cell infection were used to test whether uncoating and/or entry play a role in the discrepancy in first strand transfer efficiency. Results and Conclusions: The difference in 1stST efficiency could not be attributed simply to viral uncoating, since addition of very low concentrations of detergent to NERT reactions removed the viral envelope without disrupting the reverse transcription complex, and these conditions resulted in no improvement in 1stST efficiency. Virus pseudotyped with surface glycoproteins from either vesicular stomatitis virus or amphotropic murine leukaemia virus also showed low levels of 1stST in low detergent NERT assays and equivalent levels of (-) ssDNA and 1stST in single-round infections of cells, demonstrating that the gp120-mediated infection process did not select for virions capable of carrying out 1stST. These data indicate that a post-entry event or factor may be involved in efficient HIV-1 reverse transcription in vivo. (C) 2002 Elsevier Science B.V. All rights reserved.
Abstract:
Under certain conditions, cross-sectional analysis of cross-twin intertrait correlations can provide important information about the direction of causation (DOC) between two variables. A community-based sample of Australian female twins aged 18 to 45 years was mailed an extensive Health and Lifestyle Questionnaire (HLQ) that covered a wide range of personality and behavioral measures. Included were self-report measures of recent psychological distress and perceived childhood environment (PBI). Factor analysis of the PBI yielded three interpretable dimensions: Coldness, Overprotection, and Autonomy. Univariate analysis revealed that parental Overprotection and Autonomy were best explained by additive genetic, shared, and nonshared environmental effects (ACE), whereas the best-fitting model for PBI Coldness and the three measures of psychological distress (Depression, Phobic Anxiety, and Somatic Distress) included only additive genetic and nonshared environmental effects (AE). A common pathway model best explained the covariation between (1) the three PBI dimensions and (2) the three measures of psychological distress. DOC modeling between latent constructs of parenting and psychological distress revealed that a model which specified recollected parental behavior as the cause of psychological distress provided a better fit than a model which specified psychological distress as the cause of recollected parental behavior. Power analyses and limitations of the findings are discussed.
Abstract:
This paper presents experimental results of the communication performance evaluation of a prototype ZigBee-based patient monitoring system commissioned in an in-patient floor of a Portuguese hospital (HPG – Hospital Privado de Guimarães). In addition, it revisits relevant problems that affect the performance of nonbeacon-enabled ZigBee networks. Initially, the presence of hidden nodes and the impact of sensor node mobility are discussed. It was observed, for instance, that the message delivery ratio in a star network consisting of six wireless electrocardiogram sensor devices may decrease from 100% when no hidden nodes are present to 83.96% when half of the sensor devices are unable to detect the transmissions made by the other half. An additional aspect that affects communication reliability is a deadlock condition that can occur if routers are unable to process incoming packets during the backoff part of the CSMA-CA mechanism. A simple approach to increase the message delivery ratio in this case is proposed and its effectiveness is verified. The discussion and results presented in this paper aim to contribute to the design of efficient networks, and are applicable to scenarios and environments other than hospitals.
Abstract:
Motion compensated frame interpolation (MCFI) is one of the most efficient solutions to generate side information (SI) in the context of distributed video coding. However, it creates SI with rather significant motion compensation errors in some frame regions and rather small errors in others, depending on the video content. In this paper, a low-complexity Intra mode selection algorithm is proposed to select the most 'critical' blocks in the WZ frame and help the decoder with some reliable data for those blocks. For each block, the novel coding mode selection algorithm estimates the encoding rate for the Intra-based and WZ coding modes and determines the best coding mode while maintaining a low encoder complexity. The proposed solution is evaluated in terms of rate-distortion performance, with improvements of up to 1.2 dB over a WZ-only coding solution.
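A toy version of the per-block mode decision can be sketched as follows. The rate estimates and the cap on the fraction of Intra-coded blocks are illustrative assumptions, not the paper's exact criterion:

```python
def select_block_modes(intra_rate_est, wz_rate_est, budget_fraction=0.2):
    """Toy per-block coding mode decision: mark as 'critical' (Intra)
    the blocks where the estimated WZ rate most exceeds the estimated
    Intra rate, subject to a cap on the fraction of Intra blocks.
    Rate estimates and the cap are illustrative assumptions."""
    n = len(intra_rate_est)
    # expected rate saving of coding each block Intra instead of WZ
    gain = [(wz - intra, i) for i, (intra, wz) in
            enumerate(zip(intra_rate_est, wz_rate_est))]
    gain.sort(reverse=True)
    max_intra = int(budget_fraction * n)
    intra_blocks = {i for g, i in gain[:max_intra] if g > 0}
    return ['Intra' if i in intra_blocks else 'WZ' for i in range(n)]
```

Blocks with poor side information (large estimated WZ rate) are the ones flagged for Intra coding, which is the intuition behind targeting the most 'critical' regions.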
Abstract:
Object-oriented programming languages are presently the dominant paradigm of application development (e.g., Java, .NET). Lately, increasingly more Java applications have long (or very long) execution times and manipulate large amounts of data/information, gaining relevance in fields related to e-Science (with Grid and Cloud computing). Significant examples include Chemistry, Computational Biology and Bioinformatics, with many available Java-based APIs (e.g., Neobio). Often, when the execution of such an application is terminated abruptly because of a failure (regardless of whether the cause is a hardware or software fault, lack of available resources, etc.), all of the work it has already performed is simply lost; when the application is later re-initiated, it has to restart all its work from scratch, wasting resources and time, while remaining prone to another failure that may delay its completion with no deadline guarantees. Our proposed solution to address these issues is to incorporate mechanisms for checkpointing and migration in a JVM. These make applications more robust and flexible by being able to move to other nodes, without any intervention from the programmer. This article provides a solution for Java applications with long execution times, by extending a JVM (Jikes research virtual machine) with such mechanisms. Copyright (C) 2011 John Wiley & Sons, Ltd.
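For contrast with the transparent, JVM-level mechanism described above, here is what application-level checkpointing looks like when the programmer has to write it by hand; the state layout and checkpoint interval are illustrative assumptions:

```python
import os
import pickle

def run_with_checkpoints(state, step, done, path, every=100):
    """Application-level checkpointing sketch: periodically serialize
    the computation state so a restarted run resumes from the last
    checkpoint instead of starting from scratch. The paper's point is
    that a checkpointing JVM does this transparently, without any such
    code in the application."""
    if os.path.exists(path):                 # resume after a failure
        with open(path, 'rb') as f:
            state = pickle.load(f)
    i = 0
    while not done(state):
        state = step(state)
        i += 1
        if i % every == 0:                   # persist a checkpoint
            tmp = path + '.tmp'
            with open(tmp, 'wb') as f:
                pickle.dump(state, f)
            os.replace(tmp, path)            # atomic swap avoids torn files
    if os.path.exists(path):
        os.remove(path)                      # clean up on completion
    return state
```

The write-to-temp-then-rename pattern ensures a crash during checkpointing never corrupts the last good checkpoint.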
Abstract:
The attached document is the post-print version (the version corrected by the editor).
Abstract:
This paper proposes a computationally efficient methodology for the optimal location and sizing of static and switched shunt capacitors in large distribution systems. The problem is formulated as the maximization of the savings produced by the reduction in energy losses and the avoided costs due to investment deferral in the expansion of the network. The proposed method selects the nodes to be compensated, as well as the optimal capacitor ratings and their operational characteristics, i.e. fixed or switched. After an appropriate linearization, the optimization problem was formulated as a large-scale mixed-integer linear problem, suitable for solution by a widespread commercial package. Results of the proposed optimization method are compared with another recent methodology reported in the literature using two test cases: a 15-bus and a 33-bus distribution network. For both test cases, the proposed methodology delivers better solutions, indicated by higher loss savings achieved with lower amounts of capacitive compensation. The proposed method has also been applied to compensate an actual large distribution network served by AES-Venezuela in the metropolitan area of Caracas. A convergence time of about 4 seconds after 22298 iterations demonstrates the ability of the proposed methodology to efficiently handle large-scale compensation problems.
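For intuition, the underlying trade-off can be sketched on a tiny radial feeder using a classical loss approximation and exhaustive search. This is not the paper's linearized mixed-integer model, which is precisely what makes the realistic, large-scale version tractable; brute force only works at toy scale:

```python
from itertools import product

def branch_losses(r, p_load, q_load, caps):
    """Loss estimate on a radial feeder where node i is fed through
    branch i: branch current^2 ~ (downstream P)^2 + (downstream Q -
    downstream capacitor injection)^2 at nominal voltage. A classical
    approximation, not the paper's linearized model."""
    loss = 0.0
    for i in range(len(r)):
        P = sum(p_load[i:])
        Q = sum(q_load[i:]) - sum(caps[i:])
        loss += r[i] * (P * P + Q * Q)
    return loss

def best_capacitor_plan(r, p_load, q_load, sizes):
    """Exhaustively try every capacitor size at every node; feasible
    only for tiny feeders, illustrating what the MILP formulation
    must solve at scale."""
    best = (float('inf'), None)
    for caps in product(sizes, repeat=len(r)):
        loss = branch_losses(r, p_load, q_load, list(caps))
        if loss < best[0]:
            best = (loss, caps)
    return best
```

Even this 3-node example has |sizes|^n candidate plans, which is why the paper's linearization into a mixed-integer linear program matters for networks with thousands of nodes.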