947 results for Level Independent Quasi-Birth-Death (LIQBD) Process
Abstract:
During O antigen lipopolysaccharide (LPS) synthesis in bacteria, transmembrane migration of undecaprenylpyrophosphate (Und-P-P)-bound O antigen subunits occurs before their polymerization and ligation to the rest of the LPS molecule. Despite the general nature of the translocation process, putative O-antigen translocases display a low level of amino acid sequence similarity. In this work, we investigated whether complete O antigen subunits are required for translocation. We demonstrate that a single sugar, GlcNAc, can be incorporated into the LPS of Escherichia coli K-12. This incorporation required the functions of two O antigen synthesis genes, wecA (UDP-GlcNAc:Und-P GlcNAc-1-P transferase) and wzx (O-antigen translocase). Complementation experiments with putative O-antigen translocases from E. coli O7 and Salmonella enterica indicated that translocation of O antigen subunits is independent of the chemical structure of the saccharide moiety. Furthermore, complementation with putative translocases involved in synthesis of exopolysaccharides demonstrated that these proteins could not participate in O antigen assembly. Our data indicate that recognition of a complete Und-P-P-bound O antigen subunit is not required for translocation and suggest a model for O antigen synthesis involving recognition of Und-P-P-linked sugars by a putative complex made of the Wzx translocase and other proteins involved in the processing of O antigen.
Abstract:
We study the exact entanglement dynamics of two qubits in a common structured reservoir. We demonstrate that for certain classes of entangled states, entanglement sudden death occurs, while for certain initially factorized states, entanglement sudden birth takes place. The backaction of the non-Markovian reservoir is responsible for revivals of entanglement after sudden death has occurred, and also for periods of disentanglement following entanglement sudden birth.
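The abstract does not name the entanglement measure used; for two qubits the standard choice in this literature is Wootters' concurrence, sketched here for reference (notation is assumed, not taken from the paper):

```latex
% Wootters' concurrence for a two-qubit state \rho (assumed measure):
\tilde{\rho} = (\sigma_y \otimes \sigma_y)\, \rho^{*} \,(\sigma_y \otimes \sigma_y),
\qquad
C(\rho) = \max\bigl\{0,\ \sqrt{\lambda_1} - \sqrt{\lambda_2} - \sqrt{\lambda_3} - \sqrt{\lambda_4}\bigr\},
```

where the \(\lambda_i\) are the eigenvalues of \(\rho\tilde{\rho}\) in decreasing order. In these terms, entanglement sudden death corresponds to \(C(\rho(t))\) reaching zero at a finite time, and sudden birth to \(C\) becoming nonzero at a finite time from an initially factorized state.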
Abstract:
To compare aerobic capacity, strength, flexibility, and activity level in extremely low birth weight (ELBW) adolescents at 17 years of age with term-born control subjects.
Abstract:
Recent trends towards increasingly parallel computers mean that there needs to be a seismic shift in programming practice. The time is rapidly approaching when most programming will be for parallel systems. However, most programming techniques in use today are geared towards sequential, or occasionally small-scale parallel, programming. While refactoring has so far mainly been applied to sequential programs, it is our contention that refactoring can play a key role in significantly improving the programmability of parallel systems, by allowing the programmer to apply a set of well-defined transformations in order to parallelise their programs. In this paper, we describe a new language-independent refactoring approach that helps introduce and tune parallelism through high-level design patterns targeting a set of well-specified parallel skeletons. We believe this new refactoring process is the key to allowing programmers to truly start thinking in parallel. © 2012 ACM.
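The paper itself gives no code; as a minimal illustration of the kind of transformation a skeleton-directed refactoring performs, the sketch below rewrites an ordinary sequential loop against a farm (parallel map) skeleton. It uses Python's standard-library thread pool purely for concreteness; the function names are invented for this example and do not come from the paper.

```python
from concurrent.futures import ThreadPoolExecutor

def process(item):
    # Placeholder worker: an independent, side-effect-free computation.
    return item * item

def seq_map(items):
    # Original sequential formulation: an ordinary loop over the data.
    return [process(x) for x in items]

def farm_map(items, workers=4):
    # Refactored formulation: the loop is replaced by a 'farm' (parallel map)
    # skeleton. Only the coordination code changes; the worker is untouched,
    # which is the essence of the pattern-directed refactoring described above.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(process, items))
```

Because the worker is free of side effects, both formulations return identical results; the refactoring changes only how the iterations are scheduled.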
Abstract:
Background: Large-scale randomised controlled trials are relatively rare in education. The present study approximates to, but is not exactly, a randomised controlled trial. It was an attempt to scale up previous small peer tutoring projects, while investing only modestly in continuing professional development for teachers. Purpose: A two-year study of peer tutoring in reading was undertaken in one local education authority in Scotland. The relative effectiveness of cross-age versus same-age tutoring, light versus intensive intervention, and reading versus reading and mathematics tutoring was investigated. Programme description: The intervention was Paired Reading, a freely available cross-ability tutoring method applied to books of the pupils' choice but above the tutee's independent readability level. It involves Reading Together and Reading Alone, switching from one to the other according to need. Sample: Eighty-seven primary schools of overall average socio-economic status, ability and gender composition in one council in Scotland. There were few ethnic minority students, and proportions of students with special needs were low. Children were eight and 10 years old as the intervention started. Macro-evaluation: n = 3520. Micro-evaluation: Year 1, 15 schools, n = 592; Year 2, a different 15 schools, n = 591; compared with a comparison group of five schools, n = 240. Design and methods: Almost all the primary schools in the local authority participated and were randomly allocated to condition. A macro-evaluation tested and retested over a two-year period using Performance Indicators in Primary Schools. A micro-evaluation tested and retested within each year using norm-referenced tests of reading comprehension.
Macro-evaluation was with multi-level modelling; micro-evaluation with descriptive statistics and effect sizes, analysis of variance (ANOVA) and multivariate analysis of variance (MANOVA). Results: Macro-evaluation yielded significant pre-post gains in reading attainment for cross-age tutoring over both years. No other differences were significant. Micro-evaluation yielded pre-post changes in Year 1 (selected) and Year 2 (random) greater than controls, with no difference between same-age and cross-age tutoring. Light and intensive tutoring were equally effective. Tutoring reading and mathematics together was more effective than tutoring reading alone. Lower socio-economic and lower reading ability students did better. Girls did better than boys. Regarding observed implementation quality, some factors were high and others low. Few implementation variables correlated with attainment gain. Conclusions: Paired Reading tutoring does lead to better reading attainment compared with students not participating. This is true in the long term (macro-evaluation) for cross-age tutoring, and in the short term (micro-evaluation) for both cross-age and same-age tutoring. Tutors and tutees benefited. Intensity had no effect, but dual tutoring did. Low socio-economic status, low-ability and female students did better. The results of the different forms of evaluation were indeed different. There are implications for practice and for future research. © 2012 Copyright Taylor and Francis Group, LLC.
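The abstract reports effect sizes alongside ANOVA but does not state how they were computed; a standard choice for comparing gains between a tutored and a comparison group is Cohen's d with a pooled standard deviation, sketched here purely for illustration:

```python
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    # Cohen's d: the standardized difference between two group means
    # (e.g. attainment gains of tutored vs comparison pupils), divided
    # by the pooled sample standard deviation.
    na, nb = len(group_a), len(group_b)
    pooled_var = ((na - 1) * stdev(group_a) ** 2 +
                  (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2)
    return (mean(group_a) - mean(group_b)) / pooled_var ** 0.5
```

By convention, values around 0.2, 0.5 and 0.8 are read as small, medium and large effects respectively.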
Risk Acceptance in the Furniture Sector: Analysis of Acceptance Level and Relevant Influence Factors
Abstract:
Risk acceptance has been broadly discussed in relation to hazardous risk activities and/or technologies. A better understanding of risk acceptance in occupational settings is also important; however, studies on this topic are scarce. It seems important to understand the level of risk that stakeholders consider sufficiently low, how stakeholders form their opinion about risk, and why they adopt a certain attitude toward risk. Accordingly, the aim of this study is to examine risk acceptance in regard to occupational accidents in furniture industries. The safety climate analysis was conducted through the application of the Safety Climate in Wood Industries questionnaire. Judgments about risk acceptance, trust, risk perception, benefit perception, emotions, and moral values were measured. Several models were tested to explain occupational risk acceptance. The results showed that the level of risk acceptance decreased as the risk level increased. High-risk and death scenarios were assessed as unacceptable. Risk perception, emotions, and trust had an important influence on risk acceptance. Safety climate was correlated with risk acceptance and other variables that influence risk acceptance. These results are important for the risk assessment process in terms of defining risk acceptance criteria and strategies to reduce risks.
Abstract:
Enhanced biological phosphorus removal (EBPR) is the most economic and sustainable option used in wastewater treatment plants (WWTPs) for phosphorus removal. In this process it is important to control the competition between polyphosphate accumulating organisms (PAOs) and glycogen accumulating organisms (GAOs), since EBPR deterioration or failure can be related to the proliferation of GAOs over PAOs. This thesis is focused on the effect of operational conditions (volatile fatty acid (VFA) composition, dissolved oxygen (DO) concentration and organic carbon loading) on PAO and GAO metabolism. Knowledge about the effect of these operational conditions on EBPR metabolism is very important, since they represent key factors that impact WWTP performance and sustainability. Substrate competition between the anaerobic uptake of acetate and propionate (the main VFAs present in WWTPs) was shown in this work to be a relevant factor affecting PAO metabolism, and a metabolic model was developed that successfully describes this effect. Interestingly, the aerobic metabolism of PAOs was not affected by different VFA compositions, since the aerobic kinetic parameters for phosphorus uptake, polyhydroxyalkanoate (PHA) degradation and glycogen production were relatively independent of acetate or propionate concentration. This is very relevant for WWTPs, since it will simplify the calibration procedure for metabolic models, facilitating their use for full-scale systems. The DO concentration and aerobic hydraulic retention time (HRT) affected the PAO-GAO competition, where low DO levels or a lower aerobic HRT were more favourable for PAOs than GAOs. Indeed, the oxygen affinity coefficient was significantly higher for GAOs than PAOs, showing that PAOs were far superior at scavenging the often limited oxygen levels in WWTPs.
The operation of WWTPs with low aeration is of high importance for full-scale systems, since it decreases energy costs and can potentially improve WWTP sustainability. Extended periods of low organic carbon load, which are the most common conditions in full-scale WWTPs, also had an impact on PAO and GAO activity. GAOs exhibited a substantially higher biomass decay rate than PAOs under these conditions, which revealed a higher survival capacity for PAOs, representing an advantage for PAOs in EBPR processes. This superior survival capacity of PAOs under conditions more closely resembling a full-scale environment was linked with their ability to maintain a residual level of PHA reserves for longer than GAOs, providing them with an effective energy source for aerobic maintenance processes. Overall, this work shows that each of these key operational conditions plays an important role in the PAO-GAO competition and should be considered in WWTP models in order to improve EBPR processes.
Abstract:
The paper is motivated by the valuation problem of guaranteed minimum death benefits in various equity-linked products. At the time of death, a benefit payment is due. It may depend not only on the price of a stock or stock fund at that time, but also on prior prices. The problem is to calculate the expected discounted value of the benefit payment. Because the distribution of the time of death can be approximated by a combination of exponential distributions, it suffices to solve the problem for an exponentially distributed time of death. The stock price process is assumed to be the exponential of a Brownian motion plus an independent compound Poisson process whose upward and downward jumps are modeled by combinations (or mixtures) of exponential distributions. Results for exponential stopping of a Lévy process are used to derive a series of closed-form formulas for call, put, lookback, and barrier options, dynamic fund protection, and dynamic withdrawal benefit with guarantee. We also discuss how barrier options can be used to model lapses and surrenders.
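The reduction described above can be sketched as follows (notation assumed here, not taken from the paper): if the time of death \(\tau\) is exponentially distributed with rate \(\lambda\) and independent of the stock price process \(S_t\), the expected discounted benefit at force of interest \(\delta\) factorizes into a weighted time integral of ordinary expectations,

```latex
% Exponential time of death \tau with rate \lambda, force of interest \delta,
% benefit b(S_\tau) payable at death:
E\!\left[e^{-\delta\tau}\, b(S_\tau)\right]
  = \int_0^\infty \lambda e^{-\lambda t}\, e^{-\delta t}\,
    E\!\left[b(S_t)\right] dt .
```

Approximating the death-time density by a combination of exponentials, \(f(t) = \sum_k A_k \lambda_k e^{-\lambda_k t}\), then yields the general value as the corresponding combination of these single-exponential solutions, which is why solving the exponentially stopped problem suffices.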
Abstract:
This study investigated loss, death and dying, reminiscing, coping and the process of adaptation from the subjective perspective. A number of theories and models of death and dying were reviewed in the background literature search, with the focus on reminiscing as a coping phenomenon. The format of the study was audio-taped interviews with ten subjects and the recording of their memories and reminiscing of life stories. The subjects were required to complete an initial questionnaire in a demographic data collection process. Two separate interviews consisted of a primary data collecting interview and a verification interview four to eight weeks later. An independent chart review completed the data collecting process. Data analysis was by examination of the emerging themes in the subjects' personal narratives, which revealed the sub-categories of reminiscing, loss (including death and dying), acceptance, hope, love, despair and belief. Belief was shown to be the foundation and the base for living and reminiscing. Reminiscing was found to be a coping phenomenon within the foundation of a belief system. Both living and reminiscing revealed the existence of a central belief or value with a great deal of importance attached to it. Whether the belief was of a spiritual nature, a value of marriage, tradition, a work ethic or belief in an abstract value such as fate, it gave support and control to the individuals' living and reminiscing process. That which caused despair or allowed acceptance indicated the subjects' basic belief and was identified in the story narrations. The findings were significant to health care in terms of education, increased dignity for the elderly and better understanding by society. The profiles represented an average age of 86.3 years, with age showing no bearing on the life experiences associated with the emerging themes. Overwhelmingly, belief was shown to be the foundation in reminiscing.
A Judeo-Christian cultural value base supported the belief in 90% of the subjects; however, different beliefs were clearly shown, indicating that belief is central to all thinking beings, in everyday life and in reminiscing. Belief was not necessarily spiritual or a practised or verbalized religion. It was shown to be a way of understanding, a fundamental and single thread tying the individual's life and stories together. The benefits were the outcomes, in that knowledge of an individual's belief can optimize care planning for any age group and/or setting. The strength of the study was the open question format and the feedback process of data verification. The unrestricted outcomes and non-specificity were significant in a world where dying is everybody's business.
Abstract:
This thesis examines the independent alternative music scene in the city of Hamilton, Ontario, also known, with reference to its industrial heritage, as "Steeltown." Drawing on the growing literature on the relationship between place and popular music, on my own experience as a local musician, direct observation of performances and of venues and other sites of interaction, as well as ethnographic interviews with scene participants, I focus on the role of space, genre and performance within the scene, and their contribution to a sense of local identity. In particular, I argue that the live performance event is essential to the success of the local music scene, as it represents an immediate process, a connection between performers and audience, one which is temporally rooted in the present. My research suggests that the Hamilton alternative music scene has become postmodern, embracing forms of "indie" music that lie outside of mainstream taste, and particularly those which engage in the exploration and deconstruction of pre-existing genres. Eventually, however, the creative successes of an "indie" scene permeate mass culture and often become co-opted into the popular music mainstream, a process which, in turn, promotes new experimentation and innovation at the local level.
Abstract:
The initial timing of face-specific effects in event-related potentials (ERPs) is a point of contention in face processing research. Although effects during the time of the N170 are robust in the literature, inconsistent effects during the time of the P100 challenge the interpretation of the N170 as the initial face-specific ERP effect. Early P100 effects are often attributed to low-level differences between face stimuli and a host of other image categories. Research using sophisticated controls for low-level stimulus characteristics (Rousselet, Husk, Bennett, & Sekuler, 2008) reports robust face effects starting at around 130 ms following stimulus onset. The present study examines the independent components (ICs) of the P100 and N170 complex in the context of a minimally controlled low-level stimulus set and a clear P100 effect for faces versus houses at the scalp. Results indicate that four ICs account for the ERPs to faces and houses in the first 200 ms following stimulus onset. The IC that accounts for the majority of the scalp N170 (icN1a) begins dissociating stimulus conditions at approximately 130 ms, closely replicating the scalp results of Rousselet et al. (2008). The scalp effects at the time of the P100 are accounted for by two constituent ICs (icP1a and icP1b). The IC that projects the greatest voltage at the scalp during the P100 (icP1a) shows a face-minus-house effect over the period of the P100 that is less robust than the N170 effect of icN1a when measured as the average of single-subject differential activation robustness. The second constituent process of the P100 (icP1b), although projecting a smaller voltage to the scalp than icP1a, shows a more robust effect for the face-minus-house contrast starting prior to 100 ms following stimulus onset.
Further, the effect expressed by icP1b takes the form of a larger negative projection to medial occipital sites for houses over faces, partially canceling the larger projection of icP1a and thereby enhancing the face positivity at this time. These findings have three main implications for ERP research on face processing. First, the ICs that constitute the face-minus-house P100 effect are independent from the ICs that constitute the N170 effect, suggesting that the P100 effect and the N170 effect are anatomically independent. Second, the timing of the N170 effect can be recovered from scalp ERPs that have spatio-temporally overlapping effects possibly associated with low-level stimulus characteristics. This unmixing of the EEG signals may reduce the need for highly constrained stimulus sets, a characteristic that is not always desirable for a topic that is highly coupled to ecological validity. Third, by unmixing the constituent processes of the EEG signals, new analysis strategies are made available. In particular, exploring the relationship between cortical processes over the period of the P100 and N170 ERP complex (and beyond) may provide previously inaccessible answers to questions such as: is the face effect a special relationship between low-level and high-level processes along the visual stream?
Abstract:
Sir George Yonge (1732-1812) was a baronet, politician and colonial governor. He was a Member of Parliament for Honiton from 1754-1761 and 1763-1796, British Secretary at War from 1782-1783 and 1783-1794, and Governor of the Cape colony from 1799-1801.
Abstract:
In the era of New France, it was not uncommon for children to die before the age of one. Parents accepted their children's deaths with wisdom and resignation: such was the will of the Almighty. Thanks to the Registre de la Population du Québec Ancien (R.P.Q.A.) developed by the Programme de Recherche en Démographie Historique (P.R.D.H.), the extent of infant mortality could be measured according to several criteria, some determining factors examined, and an intergenerational component identified. Covering for the first time the entire existence of the colony, our results confirm the magnitude of child mortality in the seventeenth and eighteenth centuries (between 140 and 260‰ before correction for under-registration of deaths). Tangible disparities were observed between the sexes, by place of birth, and by the occupational category of the child's father. The inequality of infants' survival probabilities reflects both the physiological inequity between the sexes, with male excess mortality of around 20%, and the influence of the environment in which the family lived: infants in Quebec City died on average 1.2 to 1.5 times more often than infants in the countryside. Montréal, a veritable hecatomb that so far remains unexplained, lost 50% of its children before the age of one, representing 1.9 times more infant deaths than among children of the countryside, who despite everything enjoyed the benefits of their environment. The deleterious effects of wet-nursing, which affected more than half of the children of well-to-do urban families, ravaged their offspring ever more deeply. Examination of infant mortality in its endogenous and exogenous components reveals that mortality from exogenous causes accounts for at least 70% of all infant deaths.
Recurrent infectious diseases, the absence of personal hygiene, and the insalubrity of the towns all posed dangers to children. From a more familial and intergenerational perspective, in which the child is an integral part of a sibling group, significant risks were obtained for several determining characteristics. Mothers under 20 or over 30 years of age, birth orders above 8, a birth interval of less than 21 months, or the death of the preceding sibling increased the risk of dying before the first birthday by roughly 10 to 70%, because a child's fate is not independent of the characteristics of its mother or siblings. We also found a positive relationship between the infant mortality experienced by a mother and that of her daughters. The observed distribution of daughters who, like their mothers, lost at least 40% of their children is 1.3 to 1.9 times greater than expected, both for daughters who had 9 children or fewer and for those who had 10 or more. An intergenerational transmission of infant mortality would thus appear to exist even when controlling for period and family size.