939 results for sparse coding
Abstract:
T cell activation is a complex process involving many steps, and the role played by non-protein-coding RNAs (ncRNAs) in this phenomenon is still unclear. The non-coding T cells transcript (NTT) is differentially expressed during human T cell activation, but its function is unknown. Here, we detected a 426 nt NTT transcript by RT-PCR using RNA of human lymphocytes activated with a synthetic peptide of HIV-1. After cloning, the sense and antisense 426 nt NTT transcripts were obtained by in vitro transcription and were sequenced. We found that both transcripts are highly structured and are able to activate PKR. A striking observation was that the antisense 426 nt NTT transcript is significantly more effective in activating PKR than the corresponding sense transcript. The transcription factor NF-kappa B is activated by PKR through phosphorylation and subsequent degradation of its inhibitor I-kappa B beta. We also found that the antisense 426 nt NTT transcript induces the degradation of I-kappa B beta more efficiently than the sense transcript. Thus, this study suggests that the role played by NTT in the activation of lymphocytes may be mediated by PKR through NF-kappa B activation. However, the physiological significance of the activity of the antisense 426 nt NTT transcript remains unknown. (c) 2007 Elsevier Inc. All rights reserved.
Abstract:
Using NONMEM, the population pharmacokinetics of perhexiline were studied in 88 patients (34 F, 54 M) who were being treated for refractory angina. Their mean +/- SD (range) age was 75 +/- 9.9 years (46-92), and the length of perhexiline treatment was 56 +/- 77 weeks (0.3-416). The sampling time after a dose was 14.1 +/- 21.4 hours (0.5-200), and the perhexiline plasma concentrations were 0.39 +/- 0.32 mg/L (0.03-1.56). A one-compartment model with first-order absorption was fitted to the data using the first-order (FO) approximation. The best model contained 2 subpopulations (obtained via the $MIXTURE subroutine) of 77 subjects (subgroup A) and 11 subjects (subgroup B) that had typical values for clearance (CL/F) of 21.8 L/h and 2.06 L/h, respectively. The volumes of distribution (V/F) were 1470 L and 260 L, respectively, which suggested a reduction in presystemic metabolism in subgroup B. The interindividual variability (CV%) was modeled logarithmically and for CL/F ranged from 69.1% (subgroup A) to 86.3% (subgroup B). The interindividual variability in V/F was 111%. The residual variability unexplained by the population model was 28.2%. These results confirm and extend the existing pharmacokinetic data on perhexiline, especially the bimodal distribution of CL/F manifested via an inherited deficiency in hepatic and extrahepatic CYP2D6 activity.
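The one-compartment model with first-order absorption fitted here has a closed-form concentration profile (the Bateman equation). A minimal sketch using the typical clearance and volume values reported for the two subpopulations; the dose and absorption rate constant `ka` are illustrative, since the abstract does not report them:

```python
import numpy as np

def conc_one_compartment(t, dose, ka, cl_f, v_f):
    """Plasma concentration for a one-compartment model with
    first-order absorption and first-order elimination."""
    ke = cl_f / v_f  # elimination rate constant (1/h)
    return (dose * ka) / (v_f * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

# Typical values from the two subpopulations above;
# dose (mg) and ka (1/h) are illustrative assumptions.
t = np.linspace(0.0, 48.0, 7)  # hours post-dose
c_a = conc_one_compartment(t, dose=100.0, ka=0.5, cl_f=21.8, v_f=1470.0)  # subgroup A
c_b = conc_one_compartment(t, dose=100.0, ka=0.5, cl_f=2.06, v_f=260.0)  # subgroup B
```

With its much lower CL/F and V/F, subgroup B reaches considerably higher plasma concentrations from the same dose, consistent with the reduced presystemic metabolism suggested above.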
Abstract:
Around 98% of all transcriptional output in humans is noncoding RNA. RNA-mediated gene regulation is widespread in higher eukaryotes and complex genetic phenomena like RNA interference, co-suppression, transgene silencing, imprinting, methylation, and possibly position-effect variegation and transvection, all involve intersecting pathways based on or connected to RNA signaling. I suggest that the central dogma is incomplete, and that intronic and other non-coding RNAs have evolved to comprise a second tier of gene expression in eukaryotes, which enables the integration and networking of complex suites of gene activity. Although proteins are the fundamental effectors of cellular function, the basis of eukaryotic complexity and phenotypic variation may lie primarily in a control architecture composed of a highly parallel system of trans-acting RNAs that relay state information required for the coordination and modulation of gene expression, via chromatin remodeling, RNA-DNA, RNA-RNA and RNA-protein interactions. This system has interesting and perhaps informative analogies with small world networks and dataflow computing.
Abstract:
The male hypermethylated (MHM) region, located near the middle of the short arm of the Z chromosome of chickens, consists of approximately 210 tandem repeats of a BamHI 2.2-kb sequence unit. Cytosines of the CpG dinucleotides of this region are extensively methylated on the two Z chromosomes in the male but much less methylated on the single Z chromosome in the female. The state of methylation of the MHM region is established after fertilization by about the 1-day embryonic stage. The MHM region is transcribed only in the female from the particular strand into heterogeneous, high molecular-mass, non-coding RNA, which is accumulated at the site of transcription, adjacent to the DMRT1 locus, in the nucleus. The transcriptional silence of the MHM region in the male is most likely caused by the CpG methylation, since treatment of the male embryonic fibroblasts with 5-azacytidine results in hypo-methylation and active transcription of this region. In ZZW triploid chickens, MHM regions are hypomethylated and transcribed on the two Z chromosomes, whereas MHM regions are hypermethylated and transcriptionally inactive on the three Z chromosomes in ZZZ triploid chickens, suggesting a possible role of the W chromosome on the state of the MHM region.
Abstract:
A major limitation in any high-performance digital communication system is the linearity region of the transmitting amplifier. Nonlinearities typically lead to signal clipping. Efficient communication in such conditions requires maintaining a low peak-to-average power ratio (PAR) in the transmitted signal while achieving a high throughput of data. Excessive PAR leads either to frequent clipping or to inadequate resolution in the analog-to-digital or digital-to-analog converters. Currently proposed signaling schemes for future generation wireless communications suffer from a high PAR. This paper presents a new signaling scheme for channels with clipping which achieves a PAR as low as 3. For a given linear range in the transmitter's digital-to-analog converter, this scheme achieves a lower bit-error rate than existing multicarrier schemes, owing to increased separation between constellation points. We present the theoretical basis for this new scheme, approximations for the expected bit-error rate, and simulation results. (C) 2002 Elsevier Science (USA).
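The peak-to-average power ratio at the center of this discussion can be computed directly from a sampled baseband signal. A minimal sketch for a conventional multicarrier (OFDM-like) symbol with random QPSK subcarriers, which typically exhibits the high PAR the abstract criticizes; the subcarrier count and constellation are illustrative:

```python
import numpy as np

def par(x):
    """Peak-to-average power ratio of a complex baseband signal."""
    p = np.abs(x) ** 2
    return p.max() / p.mean()

rng = np.random.default_rng(0)
n = 64  # number of subcarriers (illustrative)
qpsk = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=n)
ofdm_symbol = np.fft.ifft(qpsk)           # multicarrier time-domain signal
par_db = 10 * np.log10(par(ofdm_symbol))  # OFDM PAR is typically far above the ~3 cited above
```

A constant-envelope signal has a PAR of exactly 1; the gap between that floor and a multicarrier symbol's PAR is what forces either clipping or wasted converter resolution.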
Abstract:
The Lanczos algorithm is appreciated in many situations due to its speed and economy of storage. However, the advantage that the Lanczos basis vectors need not be kept is lost when the algorithm is used to compute the action of a matrix function on a vector. Either the basis vectors need to be kept, or the Lanczos process needs to be applied twice. In this study we describe an augmented Lanczos algorithm to compute a dot product relative to a function of a large sparse symmetric matrix, without keeping the basis vectors.
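The standard approach that the augmented algorithm improves upon can be sketched as follows: run k Lanczos steps on a symmetric matrix A, keep the basis Q and tridiagonal T, and approximate f(A)v ≈ ||v|| Q f(T) e1, after which dot products against f(A)v require the stored basis. A minimal dense-matrix sketch (not the paper's augmented method, which avoids storing Q):

```python
import numpy as np
from scipy.linalg import expm

def lanczos_fAv(A, v, k, f=expm):
    """Approximate f(A) @ v for symmetric A using k Lanczos steps,
    keeping the basis vectors Q (the storage cost the augmented
    algorithm in the paper avoids)."""
    n = len(v)
    Q = np.zeros((n, k))
    alpha = np.zeros(k)
    beta = np.zeros(k)
    q = v / np.linalg.norm(v)
    q_prev = np.zeros(n)
    for j in range(k):
        Q[:, j] = q
        w = A @ q
        alpha[j] = q @ w
        w = w - alpha[j] * q
        if j > 0:
            w = w - beta[j - 1] * q_prev
        beta[j] = np.linalg.norm(w)
        if beta[j] < 1e-12:  # invariant subspace found: truncate
            Q, alpha, beta = Q[:, : j + 1], alpha[: j + 1], beta[: j + 1]
            break
        q_prev, q = q, w / beta[j]
    m = len(alpha)
    T = np.diag(alpha) + np.diag(beta[: m - 1], 1) + np.diag(beta[: m - 1], -1)
    e1 = np.zeros(m)
    e1[0] = 1.0
    return np.linalg.norm(v) * (Q @ (f(T) @ e1))
```

For k equal to the matrix dimension this reproduces f(A)v up to roundoff; in practice k is far smaller than n, which is exactly when keeping Q (n × k) becomes the dominant storage cost.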
Abstract:
A plasmid DNA directing transcription of the infectious full-length RNA genome of Kunjin (KUN) virus in vivo from a mammalian expression promoter was used to vaccinate mice intramuscularly. The KUN viral cDNA encoded in the plasmid contained the mutation in the NS1 protein (Pro-250 to Leu) previously shown to attenuate KUN virus in weanling mice. KUN virus was isolated from the blood of immunized mice 3-4 days after DNA inoculation, demonstrating that infectious RNA was being transcribed in vivo; however, no symptoms of virus-induced disease were observed. By 19 days postimmunization, neutralizing antibody was detected in the serum of immunized animals. On challenge with lethal doses of the virulent New York strain of West Nile (WN) or wild-type KUN virus intracerebrally or intraperitoneally, mice immunized with as little as 0.1-1 µg of KUN plasmid DNA were solidly protected against disease. This finding correlated with neutralization data in vitro showing that serum from KUN DNA-immunized mice neutralized KUN and WN viruses with similar efficiencies. The results demonstrate that delivery of an attenuated but replicating KUN virus via a plasmid DNA vector may provide an effective vaccination strategy against virulent strains of WN virus.
Abstract:
Objective: To develop a 'quality use of medicines' coding system for the assessment of pharmacists' medication reviews and to apply it to an appropriate cohort. Method: A 'quality use of medicines' coding system was developed based on findings in the literature. These codes were then applied to 216 (111 intervention, 105 control) veterans' medication profiles by an independent clinical pharmacist, supported by a clinical pharmacologist, with the aim of assessing the appropriateness of pharmacy interventions. The profiles were provided for veterans participating in a randomised, controlled trial in private hospitals evaluating the effect of medication review and discharge counselling. The reliability of the coding was tested by two independent clinical pharmacists in a random sample of 23 veterans from the study population. Main outcome measure: Interrater reliability was assessed by applying Cohen's kappa score to aggregated codes. Results: The coding system based on the literature consisted of 19 codes. The results from the three clinical pharmacists suggested that the original coding system had two major problems: (a) a lack of discrimination between certain recommendations (e.g. adverse drug reactions, toxicity and mortality may be seen as variations in degree of a single effect) and (b) certain codes (e.g. essential therapy) were of low prevalence. The interrater reliability for an aggregation of all codes into positive, negative and clinically non-significant codes ranged from 0.49 to 0.58 (good to fair). The interrater reliability increased to 0.72-0.79 (excellent) when all negative codes were excluded. Analysis of the sample of 216 profiles showed that the most prevalent recommendations from the clinical pharmacists were a positive impact in reducing adverse responses (31.9%), an improvement in good clinical pharmacy practice (25.5%) and a positive impact in reducing drug toxicity (11.1%). Most medications were assigned the clinically non-significant code (96.6%).
In fact, the interventions led to a statistically significant difference in pharmacist recommendations in the categories adverse response, toxicity and good clinical pharmacy practice, as measured by the quality use of medicines coding system. Conclusion: It was possible to use the quality use of medicines coding system to rate the quality and potential health impact of pharmacists' medication reviews, and the system did pick up differences between intervention and control patients. The interrater reliability for the summarised coding system was fair, but a larger sample of medication regimens is needed to assess the non-summarised quality use of medicines coding system.
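Cohen's kappa, used above for interrater reliability, corrects observed agreement for the agreement expected by chance: kappa = (p_o - p_e) / (1 - p_e). A minimal sketch for two raters assigning aggregated codes; the code labels and ratings are illustrative, not data from the study:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: (p_o - p_e) / (1 - p_e), where p_o is the observed
    agreement and p_e the agreement expected by chance from each rater's
    marginal code frequencies."""
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    p_e = sum(ca[c] * cb[c] for c in set(ca) | set(cb)) / n**2
    return (p_o - p_e) / (1 - p_e)

# Illustrative aggregated codes for a handful of medication profiles
a = ["positive", "negative", "non-significant", "positive", "positive"]
b = ["positive", "non-significant", "non-significant", "positive", "negative"]
kappa = cohens_kappa(a, b)
```

A kappa of 1 indicates perfect agreement, 0 indicates chance-level agreement; the 0.49-0.58 range reported above sits in the commonly cited "fair to good" band.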
Abstract:
In this work, we consider the numerical solution of a large eigenvalue problem resulting from a finite rank discretization of an integral operator. We are interested in computing a few eigenpairs, with an iterative method, so a matrix representation that allows for fast matrix-vector products is required. Hierarchical matrices are appropriate for this setting, and also provide cheap LU decompositions required in the spectral transformation technique. We illustrate the use of freely available software tools to address the problem, in particular SLEPc for the eigensolvers and HLib for the construction of H-matrices. The numerical tests are performed using an astrophysics application. Results show the benefits of the data-sparse representation compared to standard storage schemes, in terms of computational cost as well as memory requirements.
Abstract:
The Wyner-Ziv video coding (WZVC) rate-distortion performance is highly dependent on the quality of the side information, an estimation of the original frame created at the decoder. This paper characterizes the WZVC efficiency when motion-compensated frame interpolation (MCFI) techniques are used to generate the side information, a difficult problem in WZVC especially because the decoder only has some reference decoded frames available. The proposed WZVC compression efficiency rate model relates the power spectral density of the estimation error to the accuracy of the MCFI motion field. From this model, conclusions may be drawn about the impact of the motion field smoothness, and of its correlation with the true motion trajectories, on the compression performance.
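The quantities the rate model relates can be illustrated in a zero-motion special case of MCFI, where the side information is simply the average of the two reference frames and the model's input is the power spectral density of the resulting estimation error. A minimal sketch on synthetic frames (all data illustrative; this is not the paper's model itself):

```python
import numpy as np

rng = np.random.default_rng(0)
frame_prev = rng.random((16, 16))   # decoded reference frame t-1
frame_next = rng.random((16, 16))   # decoded reference frame t+1
# "True" intermediate frame: close to the temporal average plus small noise
frame_true = (frame_prev + frame_next) / 2 + 0.01 * rng.standard_normal((16, 16))

# Side information: simplest MCFI, a zero-motion temporal average
side_info = (frame_prev + frame_next) / 2
err = frame_true - side_info

# Power spectral density of the estimation error: the quantity the
# rate model above ties to the accuracy of the MCFI motion field
psd = np.abs(np.fft.fft2(err)) ** 2 / err.size
```

A more accurate motion field shrinks `err`, and hence the PSD, which is what drives the compression-efficiency conclusions above.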
Abstract:
The principle of the Sparse Point Representation (SPR) method is to retain the function data indicated by significant interpolatory wavelet coefficients, which are defined as interpolation errors by means of an interpolating subdivision scheme. Typically, an SPR grid is coarse in smooth regions and refined close to irregularities. Furthermore, the computation of partial derivatives of a function from the information of its SPR content is performed in two steps. The first is a refinement procedure to extend the SPR by the inclusion of new interpolated point values in a security zone. Then, for points in the refined grid, such derivatives are approximated by uniform finite differences, using a step size proportional to each point's local scale. If required neighboring stencils are not present in the grid, the corresponding missing point values are approximated from coarser scales using the interpolating subdivision scheme. Using the cubic interpolating subdivision scheme, we demonstrate that such adaptive finite differences can be formulated in terms of a collocation scheme based on the wavelet expansion associated with the SPR. For this purpose, we prove some results concerning the local behavior of such wavelet reconstruction operators, which hold for SPR grids having appropriate structure. This implies that the adaptive finite difference scheme and the one using the step size of the finest level produce the same result at SPR grid points. Consequently, in addition to the refinement strategy, our analysis indicates that some care must be taken concerning the grid structure in order to keep the truncation error under a certain accuracy limit. Illustrative results are presented for numerical solutions of the 2D Maxwell equations.
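The interpolating subdivision underlying the SPR can be sketched with the cubic four-point (Deslauriers-Dubuc) rule: each midpoint value is predicted from its four nearest coarse-grid neighbors with weights (-1, 9, 9, -1)/16, and the interpolatory wavelet coefficient at a fine-grid point is the prediction error. A minimal sketch for interior points on a uniform grid (boundary stencils, which the full method needs, are ignored here):

```python
import numpy as np

def midpoint_predict(f):
    """Cubic (four-point Deslauriers-Dubuc) interpolating subdivision:
    predict values at the midpoints of a uniform coarse grid. Only the
    interior midpoints (those with a full four-point stencil) are returned."""
    return (-f[:-3] + 9 * f[1:-2] + 9 * f[2:-1] - f[3:]) / 16.0

# Wavelet coefficients = interpolation errors at the fine-grid midpoints.
x_coarse = np.linspace(0.0, 1.0, 17)
f_coarse = x_coarse ** 3                 # a cubic is reproduced exactly by this rule
x_mid = (x_coarse[1:-2] + x_coarse[2:-1]) / 2
detail = x_mid ** 3 - midpoint_predict(f_coarse)  # vanishes for cubics
```

Because the rule is exact for polynomials up to degree three, the detail coefficients are negligible in smooth regions and large near irregularities, which is precisely the thresholding signal the SPR grid adaptation uses.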