972 results for Non-optimal Codon


Relevance:

100.00%

Publisher:

Abstract:

Translational pausing may occur due to a number of mechanisms, including the presence of non-optimal codons, and it is thought to play a role in the folding of specific polypeptide domains during translation and in the facilitation of signal peptide recognition during sec-dependent protein targeting. In this whole-genome analysis of Escherichia coli we have found that non-optimal codons in the signal peptide-encoding sequences of secretory genes are overrepresented relative to the mature portions of these genes; this is in addition to their overrepresentation in the 5'-regions of genes encoding non-secretory proteins. We also find increased non-optimal codon usage at the 3' ends of most E. coli genes, in both non-secretory and secretory sequences. Whereas presumptive translational pausing at the 5' and 3' ends of E. coli messenger RNAs may clearly have a general role in translation, we suggest that it also has a specific role in sec-dependent protein export, possibly in facilitating signal peptide recognition. This finding may have important implications for our understanding of how the majority of non-cytoplasmic proteins are targeted, a process that is essential to all biological cells. (C) 2004 Elsevier Inc. All rights reserved.
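The codon-level comparison described above can be sketched in a few lines: count the fraction of non-optimal codons in the 5' signal-peptide-encoding region of a gene and compare it with the mature portion. The rare-codon set, window length and example sequence below are illustrative placeholders, not the values or data used in the study.

```python
# Illustrative sketch: fraction of non-optimal (rare) codons in the
# signal-peptide-encoding region of a gene vs. its mature portion.
# RARE_CODONS is a hypothetical example set, not the study's set.
RARE_CODONS = {"CTA", "ATA", "CGA", "CGG", "AGG", "AGA", "CCC", "TCG"}

def rare_codon_fraction(seq, start=0, end=None):
    """Fraction of codons in seq[start:end] belonging to RARE_CODONS."""
    region = seq[start:end]
    codons = [region[i:i + 3] for i in range(0, len(region) - 2, 3)]
    if not codons:
        return 0.0
    return sum(c in RARE_CODONS for c in codons) / len(codons)

def compare_regions(seq, signal_len_codons=22):
    """Rare-codon fraction in the signal-peptide region vs. the mature region."""
    cut = signal_len_codons * 3
    return rare_codon_fraction(seq, 0, cut), rare_codon_fraction(seq, cut)
```

On a real genome one would run `compare_regions` over every secretory gene and compare the two distributions; here the 22-codon default signal-peptide length is only a typical order of magnitude.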

Relevance:

90.00%

Publisher:

Abstract:

In the paediatric intensive care setting, pain is a major, daily concern for the care team. Patients are vulnerable because of their condition and the many painful stimuli to which they are exposed. Despite evidence demonstrating the importance of pain assessment, pain management remains non-optimal in this vulnerable population. Inadequately treated pain can cause numerous medium- and long-term complications. Assessing pain in ventilated, non-communicating patients is a major challenge, made more complex by the concomitant use of sedative and analgesic drugs that can mask or alter pain behaviours. Assessment tools adapted to this vulnerable population exist, but interpreting the scores remains difficult. Pain assessment requires nurses to mobilise advanced knowledge and skills within a complex clinical reasoning process. The aim of this descriptive, observational study is to determine the indicators used by expert paediatric intensive care nurses when assessing pain in ventilated, non-communicating patients under analgesia and sedation. A convenience sample of ten expert nurses working in a paediatric intensive care unit of a university hospital in French-speaking Switzerland took part in this study. Data were collected by recording the nurses' verbalised clinical reasoning at the bedside, complemented by non-participant observation and a semi-structured interview. A theoretical framework combining a model of expert clinical reasoning with a model of pain decoding was developed to support a content analysis of the recordings.
The results show extensive use of physiological indicators, linked to the patient's clinical stability, which is an essential criterion for decision-making in pain management. The difficulty of discriminating agitation caused by pain from agitation with other causes is also omnipresent. The experts anticipate and prevent the occurrence of pain, drawing on their knowledge and on patient situations encountered previously. The clinical context strongly influences the clinical reasoning and the indicators used when assessing and managing pain, which must be evaluated in combination with the patient's clinical stability and level of sedation. Further research is needed to confirm these results, which were obtained with a small sample and an observational design.

Relevance:

90.00%

Publisher:

Abstract:

Optimal challenge occurs when an individual perceives the challenge of the task to be equaled or matched by his or her own skill level (Csikszentmihalyi, 1990). The purpose of this study was to test the impact of the OPTIMAL model on physical education students' motivation and perceptions of optimal challenge across four games categories (i.e., target, batting/fielding, net/wall, invasion). Enjoyment, competence, student goal orientation and activity level were examined in relation to the OPTIMAL model. A total of 22 students (17 M; 5 F) and their parents provided informed consent to take part in the study; the students were taught four OPTIMAL lessons and four non-OPTIMAL lessons, ranging across the four different games categories, by their own teacher. All students completed the Task and Ego Orientation in Sport Questionnaire (TEOSQ; Duda & Whitehead, 1998), the Intrinsic Motivation Inventory (IMI; McAuley, Duncan, & Tammen, 1987) and the Children's Perception of Optimal Challenge Instrument (CPOCI; Mandigo, 2001). Sixteen students (two per lesson) were observed using the System for Observing Fitness Instruction Time (SOFIT; McKenzie, 2002), and they also participated in a structured interview after each lesson was completed. Quantitative results showed no overall significant difference in motivational outcomes when comparing OPTIMAL and non-OPTIMAL lessons. However, when the lessons were broken down into games categories, significant differences emerged. Levels of perceived competence were higher in non-OPTIMAL batting/fielding lessons than in OPTIMAL lessons, whereas levels of enjoyment and perceived competence were higher in OPTIMAL invasion lessons than in non-OPTIMAL invasion lessons. Qualitative results revealed significant feelings of skill/challenge balance, enjoyment and competence in the OPTIMAL lessons.
Moreover, the percentage of active movement time was nearly twice as high in OPTIMAL lessons as in non-OPTIMAL lessons.

Relevance:

90.00%

Publisher:

Abstract:

Connecting different electrical, network and data devices at minimum cost along the shortest path is a complex job. In large buildings, where the devices are placed at different locations on different floors and only certain routes are available for cables and buses, the shortest-path search becomes even more complex. The aim of this thesis project is to develop an application which identifies the best path to connect all objects or devices by following the specified routes. To address this issue we adopted three algorithms, Greedy, Simulated Annealing and Exhaustive Search, and analysed their results. The given problem is similar to the Travelling Salesman Problem. Exhaustive search is the most accurate algorithm, as it checks every possibility and gives the exact result, but it is an impractical solution because of its huge time consumption: if the number of objects exceeds 12, it takes hours to find the shortest path. Simulated annealing emerged with promising results at a lower time cost. Because of its probabilistic nature, simulated annealing may return a non-optimal answer, but it gives a near-optimal solution in a reasonable time. The greedy algorithm is not a good choice for this problem. Simulated annealing thus proved to be the best algorithm for this problem. The project was implemented in C, taking its input from and storing its output in an Excel workbook.
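The simulated-annealing approach the thesis favours can be sketched as follows: a 2-opt neighbourhood over a closed tour, with worse moves accepted with Boltzmann probability under a geometric cooling schedule. This Python sketch only illustrates the technique; the coordinates, temperature and cooling parameters are made up, and the actual project was written in C against Excel data.

```python
import math
import random

def tour_length(order, points):
    """Total length of the closed tour visiting points in the given order."""
    return sum(
        math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
        for i in range(len(order))
    )

def anneal(points, temp=10.0, cooling=0.995, steps=20000, seed=0):
    """Simulated annealing with 2-opt moves and geometric cooling."""
    rng = random.Random(seed)
    order = list(range(len(points)))
    best = order[:]
    cur_len = best_len = tour_length(order, points)
    for _ in range(steps):
        i, j = sorted(rng.sample(range(len(points)), 2))
        cand = order[:i] + order[i:j + 1][::-1] + order[j + 1:]  # 2-opt reversal
        cand_len = tour_length(cand, points)
        # Always accept improvements; accept worse moves with probability
        # exp(-increase / temperature), which shrinks as the system cools.
        if cand_len < cur_len or rng.random() < math.exp((cur_len - cand_len) / temp):
            order, cur_len = cand, cand_len
            if cur_len < best_len:
                best, best_len = order[:], cur_len
        temp *= cooling
    return best, best_len
```

On a unit square whose points are listed in a crossing order, the 2-opt moves quickly uncross the tour and recover the perimeter of length 4.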

Relevance:

90.00%

Publisher:

Abstract:

This article is motivated by the prominence of one-sided S,s rules in the literature and by the unrealistically strict conditions necessary for their optimality. It aims to assess whether one-sided pricing rules could be an adequate individual rule for macroeconomic models, despite their suboptimality, and to answer two questions. First, since agents are not fully rational, is it plausible that they use such a non-optimal rule? Second, even if agents adopt optimal rules, is the economist committing a serious mistake by assuming that agents use one-sided S,s rules? Using parameters based on real-economy data, we found that since the additional cost involved in adopting the simpler rule is relatively small, it is plausible that one-sided rules are used in practice. We also found that suboptimal one-sided rules and optimal two-sided rules are similar in practice, since one of the bounds is not reached very often. We concluded that the macroeconomic effects when one-sided rules are suboptimal are similar to the results obtained under optimal two-sided rules, when the two rules are close to each other. However, this is true only when one-sided rules are used in a context where they are not optimal.
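The observation that "one of the bounds is not reached very often" can be illustrated with a small simulation, assuming a real-price deviation that drifts down with inflation, receives idiosyncratic shocks, and is reset to zero when a trigger bound is hit. All parameters here are illustrative, not the calibrated values from the paper: when the inflation drift dominates the shocks, the upper trigger of a two-sided rule almost never binds.

```python
import random

def simulate(lower, upper, inflation=0.02, sigma=0.01, periods=10000, seed=1):
    """Count resets at each bound of an S,s-style pricing rule.

    The real price deviation x drifts down by `inflation` each period
    plus a Gaussian shock; the firm resets x to 0 at a trigger bound.
    With upper=None the rule is one-sided.
    """
    rng = random.Random(seed)
    x, lower_hits, upper_hits = 0.0, 0, 0
    for _ in range(periods):
        x += -inflation + rng.gauss(0.0, sigma)
        if x <= lower:                            # lower trigger (always present)
            x, lower_hits = 0.0, lower_hits + 1
        elif upper is not None and x >= upper:    # upper trigger, two-sided only
            x, upper_hits = 0.0, upper_hits + 1
    return lower_hits, upper_hits

one_sided = simulate(lower=-0.10, upper=None)
two_sided = simulate(lower=-0.10, upper=0.10)
```

Under this parameterisation the two rules generate essentially identical behaviour, which is the mechanism behind the paper's finding that the suboptimal one-sided rule is, in practice, close to the optimal two-sided one.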

Relevance:

90.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-06

Relevance:

90.00%

Publisher:

Abstract:

This study examines the relationship between morningness-eveningness orientation and time-of-day on attitude change, and tests the hypothesis that people will be more persuaded when tested at their optimal time-of-day (i.e., morning for M-types and evening for E-types) than at their non-optimal time-of-day (i.e., evening for M-types and morning for E-types). Two hundred and twenty participants read a message that contained either strong- or weak-quality counter-attitudinal arguments (anti-voluntary euthanasia) in the morning (9.00 a.m.) or in the evening (7.00 p.m.). When tested at their respective optimal time-of-day (for both M- and E-types) there was a reliable difference in attitude change between the strong and weak messages (indicating that message processing had occurred), while there was no difference between strong and weak messages when participants were tested at their non-optimal time-of-day. In addition, the amount of message-congruent thinking mediated the attitude change. The results show that M- and E-types pay greater attention to, and elaborate more on, a persuasive message at their optimal time-of-day, and this leads to increased attitude change compared to those tested at their non-optimal time-of-day. © 2012.

Relevance:

80.00%

Publisher:

Abstract:

A Work Project, presented as part of the requirements for the Award of a Master's Degree in Economics from the NOVA – School of Business and Economics.

Relevance:

80.00%

Publisher:

Abstract:

Project management involves one-time endeavors that demand getting it right the first time. On the other hand, project scheduling, one of the most modeled project management process stages, still faces a wide gap between theory and practice. Demanding computational models, and the consequent call for their simplification, divert the implementation of such models in project management tools from the actual day-to-day project management process. Special focus is given to the robustness of the generated project schedules in the face of the omnipresence of uncertainty. An "easy" way out is to add time buffers, more or less cleverly calculated, which always increase project duration and, correspondingly, cost. A better approach to dealing with uncertainty seems to be to explore the slack that may be present in a given project schedule, a fortiori when a non-optimal schedule is used. Combining this approach with recent advances in modeling resource allocation and scheduling techniques that cope with increasing resource flexibility, as expressed in "Flexible Resource Constraint Project Scheduling Problem" (FRCPSP) formulations, should be a promising line of research towards more adequate project management tools. In reality, this approach has frequently been used by project managers in an ad hoc way.
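The slack the abstract proposes to exploit is, in its simplest form, the classic total float obtained from a critical-path forward/backward pass. This is a minimal sketch of that computation, ignoring resource constraints entirely; the four-task network used in the usage example is made up for illustration.

```python
def cpm_slack(durations, predecessors):
    """Total slack (float) per task via the critical-path method.

    durations: {task: duration}; predecessors: {task: [predecessor tasks]}.
    Assumes the dict lists tasks in topological order (predecessors first).
    """
    # Forward pass: earliest start/finish times.
    es, ef = {}, {}
    for t in durations:
        es[t] = max((ef[p] for p in predecessors[t]), default=0)
        ef[t] = es[t] + durations[t]
    project_end = max(ef.values())
    # Invert the precedence relation for the backward pass.
    successors = {t: [] for t in durations}
    for t, preds in predecessors.items():
        for p in preds:
            successors[p].append(t)
    # Backward pass: latest start/finish times.
    ls, lf = {}, {}
    for t in reversed(list(durations)):
        lf[t] = min((ls[s] for s in successors[t]), default=project_end)
        ls[t] = lf[t] - durations[t]
    return {t: ls[t] - es[t] for t in durations}

# Usage on a tiny illustrative network: A precedes B and C, which precede D.
example_durations = {"A": 3, "B": 2, "C": 4, "D": 1}
example_preds = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
slack = cpm_slack(example_durations, example_preds)
```

Tasks with zero slack form the critical path (A, C, D here); the two units of float on B are exactly the kind of schedule slack a robust, buffer-free approach could exploit.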

Relevance:

80.00%

Publisher:

Abstract:

Purpose: Morphometric measurements of the ascending aorta have recently been performed with ECG-gated MDCT to support the development of future endovascular therapies (TCT) [1]. However, the variability of these measurements remains unknown. It would be interesting to know the impact of CAD (computer-aided diagnosis), with automated segmentation of the vessel and automatic diameter measurements, on the management of ascending aorta aneurysms. Methods and Materials: Thirty patients referred for ECG-gated CT thoracic angiography (64-row CT scanner) were evaluated. Measurements of the maximum and minimum ascending aorta diameters were obtained automatically with a commercially available CAD and semi-manually by two observers separately. The CAD algorithms segment the IV-enhanced lumen of the ascending aorta into perpendicular planes along the centreline; the CAD then determines the largest and smallest diameters. Both observers repeated the automatic and semi-manual measurements in a different session at least one month after the first measurements. The Bland and Altman method was used to study the inter/intraobserver variability, and a Wilcoxon signed-rank test was used to analyse differences between observers. Results: Interobserver variability for semi-manual measurements between the first and second observers was 1.2 and 1.0 mm for the maximal and minimal diameter, respectively. Intraobserver variability of each observer ranged from 0.8 to 1.2 mm, the lowest variability being produced by the more experienced observer. CAD variability could be as low as 0.3 mm, showing that it can perform better than human observers. However, when used in non-optimal conditions (streak artefacts from contrast in the superior vena cava or weak lumen enhancement), CAD variability can be as high as 0.9 mm, reaching the variability of semi-manual measurements.
Furthermore, there were significant differences between the two observers for maximal and minimal diameter measurements (p<0.001). There was also a significant difference between the first observer and CAD for maximal diameter measurements, with the former underestimating the diameter compared to the latter (p<0.001). As for minimal diameters, they were higher when measured by the second observer than when measured by CAD (p<0.001). Neither the difference in mean minimal diameter between the first observer and CAD nor the difference in mean maximal diameter between the second observer and CAD was significant (p=0.20 and 0.06, respectively). Conclusion: CAD algorithms can lessen the variability of diameter measurements in the follow-up of ascending aorta aneurysms. Nevertheless, in non-optimal conditions, it may be necessary to correct the measurements manually. Improvements to the algorithms will help to avoid such situations.
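The Bland and Altman method used for the variability analysis reduces, in sketch form, to a bias (mean of the paired differences) and 95% limits of agreement at bias ± 1.96 standard deviations. The measurement values in the test are illustrative, not the study data.

```python
import statistics

def bland_altman(a, b):
    """Bias and 95% limits of agreement for paired measurements a, b.

    Returns (bias, lower_limit, upper_limit); a and b are equal-length
    sequences of the same quantity measured by two observers or methods.
    """
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)           # sample standard deviation
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

In a full analysis one would also plot each difference against the pair mean to check that the disagreement does not grow with diameter.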

Relevance:

80.00%

Publisher:

Abstract:

Canine distemper virus (CDV) produces a glycosylated type I fusion protein (F) with an internal hydrophobic signal sequence beginning around 115 residues downstream of the first AUG used for translation initiation. Cleavage of the signal sequence yields the F0 molecule, which is cleaved into the F1 and F2 subunits. Surprisingly, when all in-frame AUGs located in the first third of the F gene were mutated, a protein of the same molecular size as the F0 molecule was still expressed from both the Onderstepoort (OP) and A75/17-CDV F genes. We designated this protein, which is initiated from a non-AUG codon, protein Fx. Site-directed mutagenesis allowed us to identify codon 85, a GCC codon coding for alanine, as the most likely position from which translation initiation of Fx occurs in OP-CDV. Deletion analysis demonstrated that at least 60 nucleotides upstream of the GCC codon are required for efficient Fx translation. This sequence is GC-rich, suggesting extensive folding; secondary structure may therefore be important for translation initiation at codon 85.

Relevance:

80.00%

Publisher:

Abstract:

The aims of this study were to investigate the usefulness of serum C-reactive protein, procalcitonin, tumor necrosis factor alpha, interleukin-6, and interleukin-8 as postmortem markers of sepsis, and to compare C-reactive protein and procalcitonin values in serum, vitreous humor, and cerebrospinal fluid in a series of sepsis cases and control subjects, in order to determine whether these measurements may be employed for the postmortem diagnosis of sepsis. Two study groups were formed: a sepsis group (eight subjects from the intensive care units of two university hospitals, with a clinical diagnosis of sepsis in vivo) and a control group (ten autopsy cases admitted to two university medicolegal centers, deceased from natural and unnatural causes, with no elements suggesting an underlying sepsis as the cause of death). Serum C-reactive protein and procalcitonin concentrations were significantly different between sepsis and control cases, whereas serum tumor necrosis factor alpha, interleukin-6, and interleukin-8 values were not significantly different between the two groups, suggesting that measurement of interleukin-6, interleukin-8, and tumor necrosis factor alpha is non-optimal for postmortem discrimination of cases with sepsis. In the sepsis group, vitreous procalcitonin was detectable in seven out of eight cases. In the control group, vitreous procalcitonin was clearly detectable in only one case, which also showed an increase in all serum markers and in which the cause of death was myocardial infarction associated with multi-organ failure. According to the results of this study, the determination of vitreous procalcitonin may be an alternative to serum procalcitonin for the postmortem diagnosis of sepsis.

Relevance:

80.00%

Publisher:

Abstract:

In this paper we propose a simple and general model for computing the Ramsey optimal inflation tax, which includes several models from the previous literature as special cases. We show that it cannot be claimed that the Friedman rule is always optimal (or always non-optimal) on theoretical grounds: the Friedman rule is optimal or not depending on conditions related to the shape of various relevant functions. One contribution of this paper is to relate these conditions to measurable variables such as the interest rate or the consumption elasticity of money demand. We find that it tends to be optimal to tax money when there are economies of scale in the demand for money (the scale elasticity is smaller than one) and/or when money is required for the payment of consumption or wage taxes. We also find that it tends to be optimal to tax money more heavily when the interest elasticity of money demand is small. We present empirical evidence on the parameters that determine the optimal inflation tax. Calibrating the model to a variety of empirical studies yields an optimal nominal interest rate of less than 1% per year, although that finding is sensitive to the calibration.

Relevance:

80.00%

Publisher:

Abstract:

In many industrial applications, accurate and fast surface reconstruction is essential for quality control. Variation in surface finishing parameters, such as surface roughness, can reflect defects in a manufacturing process, non-optimal product operational efficiency, and reduced life expectancy of the product. This thesis considers the reconstruction and analysis of high-frequency variation, that is, roughness, on planar surfaces. Standard roughness measures in industry are calculated from surface topography. A fast and non-contact way to obtain surface topography is to apply photometric stereo to estimate surface gradients and to reconstruct the surface by integrating the gradient fields. Alternatively, visual methods, such as statistical measures, fractal dimension and distance transforms, can be used to characterize surface roughness directly from gray-scale images. In this thesis, the accuracy of distance transforms, statistical measures, and fractal dimension is evaluated in the estimation of surface roughness from gray-scale images and topographies, and the results are contrasted to standard industry roughness measures. In distance transforms, the key idea is that distance values calculated along a highly varying surface are greater than distances calculated along a smoother surface. Statistical measures and fractal dimension are common surface roughness measures. In the experiments, skewness and variance of the brightness distribution, fractal dimension, and distance transforms exhibited strong linear correlations with standard industry roughness measures. One of the key strengths of the photometric stereo method is the acquisition of higher-frequency variation of surfaces. In this thesis, the reconstruction of planar high-frequency varying surfaces is studied in the presence of imaging noise and blur.
Two Wiener-filter-based methods are proposed, of which one is optimal in the sense of surface power spectral density given the spectral properties of the imaging noise and blur. Experiments show that the proposed methods preserve the inherent high-frequency variation in the reconstructed surfaces, whereas traditional reconstruction methods typically handle incorrect measurements by smoothing, which dampens the high-frequency variation.
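The key idea behind the distance-transform measure, that distances accumulated along a highly varying surface exceed those along a smoother one, can be sketched for a 1-D height profile: the arc length travelled along the profile, relative to its projected length, grows with roughness. The two example profiles are illustrative, not data from the thesis.

```python
import math

def profile_path_length(heights, dx=1.0):
    """Arc length along a height profile sampled at horizontal spacing dx.

    Each segment contributes hypot(dx, dh): a flat profile accumulates
    exactly the projected length, a varying one accumulates more.
    """
    return sum(
        math.hypot(dx, heights[i + 1] - heights[i])
        for i in range(len(heights) - 1)
    )

# Illustrative profiles: perfectly flat vs. alternating unit steps.
smooth = [0.0] * 10
rough = [0.0, 1.0] * 5
```

The ratio of path length to projected length (here 9 units) then acts as a simple roughness index, in the spirit of the distance-based measures evaluated in the thesis.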

Relevance:

80.00%

Publisher:

Abstract:

Increased emissions of greenhouse gases into the atmosphere are causing anthropogenic climate change. The resulting global warming challenges the ability of organisms to adapt to the new temperature conditions. However, warming is not the only major threat: in marine environments, dissolution of carbon dioxide from the atmosphere causes a decrease in surface water pH, the so-called ocean acidification. Temperature and acidification effects can interact and create even larger problems for marine flora and fauna than either effect would cause alone. I have used Baltic calanoid copepods (crustacean zooplankton) as my research object and studied their growth and stress responses using climate predictions projected for the next century. I have studied both direct temperature and pH effects on copepods and indirect effects via their food: the changing phytoplankton spring bloom composition and a toxic cyanobacterium. The main aims of my thesis were: 1) to find out how warming and acidification combined with a toxic cyanobacterium affect copepod reproductive success (egg production, egg viability, egg hatching success, offspring development) and oxidative balance (antioxidant capacity, oxidative damage), and 2) to reveal the possible food-quality effects of a spring phytoplankton bloom dominated by diatoms or dinoflagellates on reproducing copepods (egg production, egg hatching, RNA:DNA ratio). The two copepod genera used, Acartia sp. and Eurytemora affinis, are the dominant mesozooplankton taxa (0.2–2 mm) in my study area, the Gulf of Finland. A temperature of 20°C seems to be within the tolerance limits of Acartia spp., because the copepods can adapt to it phenotypically by adjusting their body size. Copepods are also able to tolerate a pH decrease of 0.4 from present values, but the combination of warm water and decreased pH causes problems for them.
In my studies, copepod oxidative balance was negatively influenced by the interaction of these two environmental factors, and egg and nauplii production were lower at 20°C and lowered pH than at 20°C and ambient pH. However, the presence of the toxic cyanobacterium Nodularia spumigena improved copepod oxidative balance and helped them resist the environmental stress in question. In addition, adaptive maternal effects seem to be an important adaptation mechanism in a changing environment, but how much a female copepod can invest in her offspring depends on her condition and her diet. I did not find a systematic food-quality difference between diatoms and dinoflagellates; there are both good and bad diatom and dinoflagellate species. Instead, the dominant species in the phytoplankton bloom has a central role in determining food quality, although copepods aim to obtain as balanced a diet as possible by foraging on several species. If the dominant species is of poor quality, it can cause stress when ingested or lead to non-optimal foraging if rejected. My thesis demonstrates that climate-change-induced water temperature and pH changes can cause problems for Baltic Sea copepod communities. However, their resilience depends substantially on their diet, and therefore on the response of phytoplankton to the environmental changes. As copepods are an important link in pelagic food webs, their future success can have far-reaching consequences, for example on fish stocks.