978 results for non-coding RNAs (ncRNAs)
Abstract:
Recently, long noncoding RNAs have emerged as pivotal molecules for the regulation of coding gene expression. These molecules may result from antisense transcription of functional genes, giving rise to natural antisense transcripts (NATs), or from transcriptionally active pseudogenes. TBCA interacts with β-tubulin and is involved in the folding and dimerization of new tubulin heterodimers, the building blocks of microtubules. Methodology/Principal findings: We found that the mouse genome contains two structurally distinct Tbca genes located on chromosomes 13 (Tbca13) and 16 (Tbca16). Interestingly, the two Tbca genes, albeit ubiquitously expressed, present differential expression during mouse testis maturation. In fact, as testis maturation progresses, Tbca13 mRNA levels increase progressively, while Tbca16 mRNA levels decrease. This suggests a regulatory mechanism between the two genes and prompted us to investigate the presence of the two proteins. However, using tandem mass spectrometry we were unable to identify the TBCA16 protein in testis extracts, even in those corresponding to the maturation step with the highest levels of Tbca16 transcripts. These puzzling results led us to re-analyze the expression of Tbca16. We then detected that Tbca16 transcription produces sense and natural antisense transcripts. Strikingly, the specific depletion of these transcripts by RNAi leads to an increase in Tbca13 transcript levels in a mouse spermatocyte cell line. Conclusions/Significance: Our results demonstrate that Tbca13 mRNA levels are post-transcriptionally regulated by the levels of the sense and natural antisense Tbca16 transcripts. We propose that this regulatory mechanism operates during spermatogenesis, a process that involves microtubule rearrangements and the assembly of specific microtubule structures, and requires critical TBCA levels.
Abstract:
OBJECTIVE: To investigate the quality of life, life satisfaction, happiness and demands of work in workers with different work schedules. METHODS: The survey was carried out on professional workers in social care. Some were shiftworkers whose schedule included night shifts (N=311), some were shiftworkers without night shifts (N=207) and some were non-shiftworkers (N=1,210). Surveys were mailed and the response rate was 86%. For the purposes of this study, several variables were selected from the survey: the Quality of Life Profile, which measures importance, satisfaction, control and opportunities in nine domains of life, plus measures of happiness, life satisfaction and demands of work. RESULTS: While both groups of shiftworkers, compared to non-shiftworkers, reported needing more physical effort to complete their work and reported being more physically tired, no differences were found in reports of overall happiness, life satisfaction or total quality of life. However, night-shiftworkers reported a greater percentage of time unhappy than the other two groups of workers. In the quality-of-life analyses, night-shiftworkers were less satisfied with the domains of spiritual 'being' and physical and community 'belonging' than day-shiftworkers and non-shiftworkers. They also reported having fewer opportunities to improve their physical 'being', leisure, and personal growth than the other two groups. CONCLUSIONS: Quality of life in specific domains was rated worse by night-shiftworkers than by the other groups of workers. Domain-based quality of life assessment gives more information about the particular needs of workers than overall or global measures of well-being.
Abstract:
The purpose of this paper is to introduce a symbolic formalism based on kneading theory that allows us to study the renormalization of non-autonomous periodic dynamical systems.
Abstract:
Small firms are major players in development. Thus, entrepreneurship is frequently associated with these firms and must be present in the daily management of factors such as planning and cooperation. We intend to analyze these factors, comparing family and non-family businesses. This study was conducted in the Vale do Sousa region, in the north of Portugal. The results allow us to conclude that, despite some managerial differences, it was not possible to identify distinct patterns between the two groups. The main goal of this paper is to open lines of research on the issues that distinguish family from non-family businesses.
Abstract:
The goal of the present paper is to analyse the classic entrepreneurship strategies (innovation, risk and proactivity) in small and medium-sized businesses. As the title indicates, the study goes further by comparing the results of those strategies in family and non-family businesses. The study was carried out in the construction and industry sectors in the Vale do Sousa region, in the north of Portugal. To classify businesses as family or non-family, two criteria were adopted: (1) management control and (2) family employability. Contrary to some studies that report a larger percentage of family businesses in the national and European entrepreneurial fabric, the criteria used led to a larger number of non-family businesses (53%). The results showed that, in general, SMEs in this region are not following entrepreneurship strategies. Analysing the entire sample without separating businesses by nature (family/non-family), only proactivity was notably present in managerial decisions. There is a lack of innovation and risk culture. Comparing the two groups, differences emerged only in the proactivity tests: non-family businesses are more proactive than family ones. Between the groups there are no statistically significant differences in the means of the innovation and risk variables. Tests were also conducted on the entrepreneurship variable, with results similar to those for the innovation and risk strategies: there are no significant differences in entrepreneurship between these groups of businesses.
Abstract:
This paper presents a novel high-throughput, scalable, unified architecture for computing the transform operations of video codecs for advanced standards. This structure can be used as a hardware accelerator in modern embedded systems to efficiently compute all the two-dimensional 4 x 4 and 2 x 2 transforms of the H.264/AVC standard. Moreover, its highly flexible design and hardware efficiency allow it to be easily scaled in terms of performance and hardware cost to meet the specific requirements of any given video coding application. Experimental results obtained using a Xilinx Virtex-5 FPGA demonstrated the superior performance and hardware efficiency of the proposed structure, which achieves higher throughput per unit of area than other similar recently published designs targeting the H.264/AVC standard. The results also showed that, when integrated in a multi-core embedded system, this architecture provides speedup factors of about 120x over pure software implementations of the transform algorithms, therefore allowing the real-time computation of all the above-mentioned transforms for Ultra High Definition Video (UHDV) sequences (7,680 x 4,320 @ 30 fps).
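As a point of reference for one of the transform kernels the accelerator targets, the sketch below computes the two-dimensional 4 x 4 forward integer core transform defined by H.264/AVC (Y = Cf·X·Cf^T, with scaling folded into quantization). It is a minimal software illustration of the arithmetic only, not the hardware architecture proposed in the paper.

```python
# Minimal sketch: the H.264/AVC 4x4 forward integer core transform.
# This illustrates only the arithmetic the accelerator computes; the
# paper's hardware design is not reproduced here.
import numpy as np

# Forward transform matrix Cf defined by the H.264/AVC standard
# (normalization is folded into the quantization stage).
CF = np.array([[1,  1,  1,  1],
               [2,  1, -1, -2],
               [1, -1, -1,  1],
               [1, -2,  2, -1]])

def forward_transform_4x4(block):
    """Two-dimensional core transform: Y = Cf . X . Cf^T."""
    return CF @ block @ CF.T

# Example: transform a 4x4 block of residual samples.
residual = np.arange(16).reshape(4, 4)
print(forward_transform_4x4(residual))
```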
Abstract:
Video coding technologies have played a major role in the explosion of large-market digital video applications and services. In this context, the very popular MPEG-x and H.26x video coding standards adopted a predictive coding paradigm, where complex encoders exploit data redundancy and irrelevancy to 'control' much simpler decoders. This codec paradigm fits applications and services such as digital television and video storage, where decoder complexity is critical, but does not match the requirements of emerging applications such as visual sensor networks, where encoder complexity is more critical. The Slepian-Wolf and Wyner-Ziv theorems opened the possibility of developing so-called Wyner-Ziv video codecs, which follow a different coding paradigm where it is the task of the decoder, and no longer the encoder, to (fully or partly) exploit the video redundancy. Theoretically, Wyner-Ziv video coding does not incur any compression performance penalty relative to the more traditional predictive coding paradigm (at least under certain conditions). In Wyner-Ziv video codecs, the so-called side information, a decoder estimate of the original frame to be coded, plays a critical role in the overall compression performance. For this reason, much research effort has been invested in the past decade to develop increasingly efficient side information creation methods. The main objective of this paper is to review and evaluate the available side information methods after proposing a classification taxonomy to guide the review, allowing more solid conclusions to be drawn and the next relevant research challenges to be better identified. After classifying the side information creation methods into four classes, notably guess, try, hint and learn, the review of the most important techniques in each class, and the evaluation of some of them, leads to the important conclusion that which side information creation method provides better rate-distortion (RD) performance depends on the amount of temporal correlation in each video sequence. It also became clear that the best available Wyner-Ziv video coding solutions are almost systematically based on the learn approach. The best solutions are already able to systematically outperform H.264/AVC Intra, and also the H.264/AVC zero-motion standard solution for specific types of content. (C) 2013 Elsevier B.V. All rights reserved.
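As a concrete illustration of the 'guess' class mentioned above, the following sketch builds side information by block matching between two decoded key frames and averaging along an assumed linear motion trajectory. It is a simplified stand-in for the motion-compensated temporal interpolation techniques surveyed in the paper, not any specific method from it.

```python
# Simplified "guess"-class side information creation: full-search block
# matching between two key frames, then averaging along the (assumed
# linear) motion trajectory. A didactic stand-in, not a method from the
# paper.
import numpy as np

def side_info_by_interpolation(prev, nxt, block=8, search=4):
    h, w = prev.shape
    side_info = np.zeros((h, w))
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            cur = nxt[y:y + block, x:x + block].astype(float)
            best_sad, best_dy, best_dx = np.inf, 0, 0
            # Full-search block matching in the previous key frame.
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    py, px = y + dy, x + dx
                    if 0 <= py <= h - block and 0 <= px <= w - block:
                        ref = prev[py:py + block, px:px + block].astype(float)
                        sad = np.abs(cur - ref).sum()
                        if sad < best_sad:
                            best_sad, best_dy, best_dx = sad, dy, dx
            ref = prev[y + best_dy:y + best_dy + block,
                       x + best_dx:x + best_dx + block].astype(float)
            # Linear motion: estimate the missing frame as the average
            # of the matched reference and the co-located block.
            side_info[y:y + block, x:x + block] = 0.5 * (ref + cur)
    return side_info
```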
Abstract:
European Cetacean Society Conference Workshop, Galway, Ireland, 25th March 2012.
Abstract:
In distributed video coding, motion estimation is typically performed at the decoder to generate the side information, increasing the decoder complexity while providing low complexity encoding in comparison with predictive video coding. Motion estimation can be performed once to create the side information or several times to refine the side information quality along the decoding process. In this paper, motion estimation is performed at the decoder side to generate multiple side information hypotheses which are adaptively and dynamically combined, whenever additional decoded information is available. The proposed iterative side information creation algorithm is inspired in video denoising filters and requires some statistics of the virtual channel between each side information hypothesis and the original data. With the proposed denoising algorithm for side information creation, a RD performance gain up to 1.2 dB is obtained for the same bitrate.
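To make the combination step concrete, the sketch below fuses several side information hypotheses pixel-wise, each weighted by the estimated reliability of its virtual channel. Plain inverse-variance weighting is used here as a hedged stand-in for the adaptive, denoising-inspired combination described in the paper.

```python
# Hedged sketch: fuse side information hypotheses with weights derived
# from estimated virtual-channel noise variances. Inverse-variance
# weighting stands in for the paper's adaptive combination.
import numpy as np

def fuse_hypotheses(hypotheses, noise_variances):
    """Combine H x W hypothesis arrays with inverse-variance weights.

    noise_variances -- estimated variance of the virtual channel
                       between each hypothesis and the original frame
    """
    weights = np.array([1.0 / max(v, 1e-9) for v in noise_variances])
    weights /= weights.sum()
    # Weighted pixel-wise average over the hypothesis axis.
    return np.tensordot(weights, np.stack(hypotheses).astype(float), axes=1)

# Example: three hypotheses; the second is judged most reliable and
# therefore dominates the fused estimate.
hyps = [np.full((4, 4), v) for v in (100.0, 110.0, 120.0)]
print(fuse_hypotheses(hyps, noise_variances=[25.0, 4.0, 16.0]))
```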
Abstract:
Low-density parity-check (LDPC) codes are nowadays one of the hottest topics in coding theory, notably due to their advantages in terms of bit error rate performance and low complexity. In order to exploit the potential of the Wyner-Ziv coding paradigm, practical distributed video coding (DVC) schemes should use powerful error-correcting codes with near-capacity performance. In this paper, new ways to design LDPC codes for the DVC paradigm are proposed and studied. The new LDPC solutions rely on merging parity-check nodes, which corresponds to reducing the number of rows in the parity-check matrix. This makes it possible to gracefully change the compression ratio of the source (DCT coefficient bitplane) according to the correlation between the original and the side information. The proposed LDPC codes perform well over a wide range of source correlations and achieve better RD performance than the popular turbo codes.
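The row-merging operation itself is simple to state: summing two rows of a binary parity-check matrix over GF(2) removes one check node, producing one fewer syndrome bit and hence a higher compression ratio. The sketch below shows that operation in isolation; the code-design constraints discussed in the paper (e.g. preserving good graph structure) are omitted.

```python
# Minimal sketch of check-node merging: XOR two rows of a binary
# parity-check matrix H over GF(2). One fewer check node means one
# fewer syndrome bit, i.e. a higher compression ratio for the coded
# bitplane. Graph-conditioning constraints are omitted.
import numpy as np

def merge_check_nodes(H, i, j):
    """Return H with parity-check rows i and j merged over GF(2)."""
    merged = (H[i] + H[j]) % 2
    keep = [r for r in range(H.shape[0]) if r not in (i, j)]
    return np.vstack([H[keep], merged])

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1]])
print(merge_check_nodes(H, 0, 1))  # 3 checks -> 2: compression rises
```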
The use of non-standard CT conversion ramps for Monte Carlo verification of 6 MV prostate IMRT plans
Abstract:
Monte Carlo (MC) dose calculation algorithms have been widely used to verify the accuracy of intensity-modulated radiotherapy (IMRT) dose distributions computed by conventional algorithms, owing to their ability to precisely account for the effects of tissue inhomogeneities and multileaf collimator characteristics. The two approaches differ, however, in how dose is calculated and reported: whereas conventional methods traditionally compute and report the water-equivalent dose (Dw), MC algorithms calculate and report dose to medium (Dm). To compare the two methods consistently, MC Dm must therefore be converted into Dw. This study aims to assess the effect of converting MC-based Dm distributions to Dw for prostate IMRT plans generated for 6 MV photon beams. MC phantoms were created from the patient CT images using three different ramps to convert CT numbers into material and mass density: a conventional four-material ramp (CTCREATE) and two simplified CT conversion ramps: (1) air and water with variable densities and (2) air and water with unit density. MC simulations were performed using the BEAMnrc code for the treatment head simulation and the DOSXYZnrc code for the patient dose calculation. The conversion of Dm to Dw by scaling with the water-to-medium stopping power ratios was also performed as a post-MC calculation step. The comparison of MC dose distributions calculated in conventional and simplified (water with variable densities) phantoms showed that the effect of material composition on dose-volume histograms (DVH) was less than 1% for soft tissue and about 2.5% near and inside bone structures. Comparison of the MC distributions calculated in the two simplified water phantoms showed that the effect of material density on the DVH was less than 1% for all tissues. Additionally, MC dose distributions were compared with the predictions of an Eclipse treatment planning system (TPS) employing a pencil beam convolution (PBC) algorithm with Modified Batho Power Law heterogeneity correction. Eclipse PBC and MC calculations (conventional and simplified phantoms) agreed well (<1%) for soft tissues. For the femoral heads, differences of up to 3% were observed between the DVHs for Eclipse PBC and MC calculated in conventional phantoms. MC simulations using the CT conversion ramp of water with variable densities showed no dose discrepancies (within 0.5%) with the PBC algorithm. Moreover, converting Dm to Dw using mass stopping power ratios resulted in a significant shift (up to 6%) in the DVH for the femoral heads compared with the Eclipse PBC DVH. Our results show that, for prostate IMRT plans delivered with 6 MV photon beams, no conversion of MC dose from medium to water using stopping power ratios is needed. Instead, MC dose calculation using water with variable densities may be a simple way to avoid the problems found with the dose conversion method based on the stopping power ratio.
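For illustration, the Dm-to-Dw conversion evaluated above is a voxel-wise scaling: Dw = Dm multiplied by the water-to-medium mass stopping power ratio of the material assigned to each voxel. The sketch below shows that post-simulation step; the ratio values are illustrative placeholders, not data from the study.

```python
# Hedged sketch of the post-MC Dm -> Dw conversion: scale each voxel's
# dose to medium by the water-to-medium mass stopping power ratio of
# its assigned material. Ratio values are placeholders, not the
# study's data.
import numpy as np

# Hypothetical water-to-medium stopping power ratios per material ID.
STOPPING_POWER_RATIO = {0: 1.00,   # soft tissue: ratio close to unity
                        1: 1.11}   # bone: illustrative value only

def dm_to_dw(dose_medium, material_ids):
    """Convert a dose-to-medium grid to dose-to-water, voxel by voxel."""
    ratios = np.vectorize(STOPPING_POWER_RATIO.get)(material_ids)
    return dose_medium * ratios

dose_m = np.array([[2.0, 2.0], [1.8, 1.9]])   # Gy, dose to medium
mats = np.array([[0, 0], [1, 1]])             # 0 = tissue, 1 = bone
print(dm_to_dw(dose_m, mats))                 # Gy, dose to water
```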
Abstract:
A 9.9 kb DNA fragment from the right arm of chromosome VII of Saccharomyces cerevisiae has been sequenced and analysed. The sequence contains four open reading frames (ORFs) longer than 100 amino acids. One gene, PFK1, has already been cloned and sequenced; another is probably the yeast gene coding for the beta-subunit of succinyl-CoA synthetase. The two remaining ORFs share homology with the deduced amino acid sequences of the YHR161c and YHR162w ORFs from chromosome VIII, and their physical arrangement is similar.
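For readers unfamiliar with the ORF criterion used above ("longer than 100 amino acids"), the following sketch scans a single forward reading frame for qualifying ORFs. It is a didactic illustration under simplified assumptions (forward strand only, ATG starts, standard stop codons), not the analysis pipeline used in the paper.

```python
# Illustrative ORF scan: find ATG...stop stretches of more than
# 100 codons in one forward reading frame. Reverse-strand frames and
# alternative start codons are deliberately omitted.
STOP = {"TAA", "TAG", "TGA"}

def orfs_longer_than(seq, min_aa=100, frame=0):
    """Yield (start, end) positions of ORFs in one forward frame."""
    start = None
    for i in range(frame, len(seq) - 2, 3):
        codon = seq[i:i + 3]
        if start is None and codon == "ATG":
            start = i
        elif start is not None and codon in STOP:
            if (i - start) // 3 > min_aa:
                yield (start, i + 3)
            start = None

# Example on a toy sequence (real use: the 9.9 kb fragment).
print(list(orfs_longer_than("ATG" + "GCT" * 120 + "TAA", min_aa=100)))
```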
Abstract:
Non-suicidal self-injury (NSSI) is the deliberate, self-inflicted destruction of body tissue without suicidal intent, and an important clinical phenomenon. Rates of NSSI appear to be disproportionately high in adolescents and young adults, and NSSI is a risk factor for suicidal ideation and behavior. The present study reports the psychometric properties of the Impulse, Self-harm and Suicide Ideation Questionnaire for Adolescents (ISSIQ-A), a measure designed to comprehensively assess impulsivity, NSSI behaviors and suicide ideation. An additional module of the questionnaire assesses the functions of NSSI. Confirmatory Factor Analysis (CFA) of the scale on 1,722 youths showed the items' suitability and confirmed a four-dimension model (Impulse, Self-harm, Risk-behavior and Suicide ideation) with good fit and validity. Further analysis showed that youths' engagement in self-harm may serve two different functions: to create or alleviate emotional states, and to influence social relationships. Our findings contribute to research and assessment on non-suicidal self-injury, suggesting that the ISSIQ-A is a valid and reliable measure of impulse, self-harm and suicidal thoughts in adolescence.
Abstract:
Conference: 39th Annual Conference of the IEEE Industrial Electronics Society (IECON), Nov 10-14, 2013