957 results for chains with unbounded variable length memory
Abstract:
Dyskeratosis congenita (DC) is an inherited bone marrow failure syndrome in which the known susceptibility genes (DKC1, TERC, and TERT) belong to the telomere maintenance pathway; patients with DC have very short telomeres. We used multicolor flow fluorescence in situ hybridization analysis of median telomere length in total blood leukocytes, granulocytes, lymphocytes, and several lymphocyte subsets to confirm the diagnosis of DC, distinguish patients with DC from unaffected family members, identify clinically silent DC carriers, and discriminate between patients with DC and those with other bone marrow failure disorders. We defined "very short" telomeres as below the first percentile measured among 400 healthy control subjects over the entire age range. Diagnostic sensitivity and specificity of very short telomeres for DC were more than 90% for total lymphocytes, CD45RA+/CD20- naive T cells, and CD20+ B cells. Granulocyte and total leukocyte assays were not specific; CD45RA- memory T cells and CD57+ NK/NKT were not sensitive. We observed very short telomeres in a clinically normal family member who subsequently developed DC. We propose adding leukocyte subset flow fluorescence in situ hybridization telomere length measurement to the evaluation of patients and families suspected to have DC, because the correct diagnosis will substantially affect patient management.
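The decision rule described above (flag a sample as "very short" when it falls below the first percentile of healthy controls, then score sensitivity and specificity against that cutoff) can be sketched numerically. All numbers below are invented for illustration and are not the study's data.

```python
import numpy as np

# Illustrative sketch of a percentile-cutoff diagnostic rule;
# all values are synthetic, not the study's measurements.
rng = np.random.default_rng(1)

controls = rng.normal(7.0, 1.0, size=400)    # telomere length (kb) in 400 healthy controls
threshold = np.percentile(controls, 1)       # "very short" = below the 1st percentile

dc_patients = rng.normal(3.0, 0.8, size=50)  # hypothetical DC patients (much shorter)
other_bmf = rng.normal(6.8, 1.0, size=50)    # hypothetical other marrow-failure patients

sensitivity = np.mean(dc_patients < threshold)  # DC samples correctly flagged
specificity = np.mean(other_bmf >= threshold)   # non-DC samples correctly cleared
```

With a well-separated patient distribution, both quantities land above 90%, mirroring the performance reported for the sensitive, specific subsets.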
Abstract:
The problem of optimally designing multi-gravity-assist space trajectories with a free number of deep space maneuvers (MGADSM) poses multi-modal cost functions. In the general form of the problem, the number of design variables is solution dependent. To handle global optimization problems in which the number of design variables varies from one solution to another, two novel genetic-based techniques are introduced: the hidden genes genetic algorithm (HGGA) and the dynamic-size multiple population genetic algorithm (DSMPGA). In HGGA, a fixed length for the design variables is assigned to all solutions. The independent variables of each solution are divided into effective and ineffective (hidden) genes. Hidden genes are excluded from cost function evaluations. Full-length solutions undergo standard genetic operations. In DSMPGA, sub-populations of fixed-size design spaces are randomly initialized. Standard genetic operations are carried out for a stage of generations. A new population is then created by reproduction from all members based on their relative fitness. The resulting sub-populations differ in size from their initial sizes, and the process repeats, increasing the size of the sub-populations of more fit solutions. Both techniques are applied to several MGADSM problems. They can determine the number of swing-bys, the planets to swing by, launch and arrival dates, and the number of deep space maneuvers, as well as their locations, magnitudes, and directions, in an optimal sense. The results show that solutions obtained with the developed tools match known solutions for complex case studies. The HGGA is also used to obtain the asteroid sequence and the mission structure in the Global Trajectory Optimization Competition (GTOC) problem. As an application of GA optimization to Earth orbits, the problem of visiting a set of ground sites within a constrained time frame is solved.
The J2 perturbation and zonal coverage are considered to design repeated Sun-synchronous orbits. Finally, a new set of orbits, the repeated shadow track orbits (RSTO), is introduced. The orbit parameters are optimized such that the shadow of a spacecraft on the Earth visits the same locations periodically every desired number of days.
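The hidden-genes idea can be sketched in a few lines: every chromosome has the same fixed length, a leading "structure" gene decides how many of the trailing genes are effective, and hidden genes are simply ignored by the cost function while still taking part in crossover and mutation. The toy cost function, population sizes, and operators below are our own illustrative choices, not those of the HGGA work described above.

```python
import random

MAX_GENES = 8  # fixed chromosome length: 1 structure gene + MAX_GENES design genes

def fitness(chrom):
    # Toy cost: only the effective (non-hidden) genes enter the evaluation.
    n_active = chrom[0]
    active = chrom[1:1 + n_active]
    return sum((g - 0.5) ** 2 for g in active)  # minimized

def random_chrom(rng):
    return [rng.randrange(1, MAX_GENES + 1)] + [rng.random() for _ in range(MAX_GENES)]

def crossover(a, b, rng):
    # Standard one-point crossover on the full-length chromosome,
    # hidden genes included.
    cut = rng.randrange(1, MAX_GENES + 1)
    return a[:cut] + b[cut:]

def mutate(chrom, rng, rate=0.1):
    c = chrom[:]
    if rng.random() < rate:
        c[0] = rng.randrange(1, MAX_GENES + 1)  # structure gene can change
    for i in range(1, len(c)):
        if rng.random() < rate:
            c[i] = rng.random()
    return c

def hgga(pop_size=40, generations=60, seed=0):
    rng = random.Random(seed)
    pop = [random_chrom(rng) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 2]        # elitist truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            children.append(mutate(crossover(a, b, rng), rng))
        pop = elite + children
    return min(pop, key=fitness)

best = hgga()
```

Note that the chromosome length never changes; varying the structure gene varies how many design variables the cost function "sees", which is the mechanism that lets a fixed-length GA search a variable-size design space.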
Abstract:
Accurately simulating the aerodynamic and structural properties of the blades is crucial in wind-turbine technology, so the models used to capture these features need to be precise and highly detailed. With the variety of blade designs being developed, the models should also be versatile enough to adapt to the changes required by each design. We implement a combination of numerical models covering the structural and aerodynamic parts of the simulation, using the computational power of a parallel HPC cluster. The structural part models the heterogeneous internal structure of the beam based on a novel implementation of the Generalized Timoshenko Beam Model technique. Using this technique, the 3-D structure of the blade is reduced to an asymptotically equivalent 1-D beam, which lowers the computational cost of the model without compromising its accuracy. This structural model interacts with the flow model, a modified version of Blade Element Momentum (BEM) theory. The modified BEM accounts for large deflections of the blade and considers the pre-defined structure of the blade. The coning and sweeping of the blade, the tilt of the nacelle, and the twist of the sections along the blade length are all computed by the model but are not considered in classical BEM theory. Each of the two models provides feedback to the other, and the interactive computations lead to more accurate outputs. We successfully implemented the computational models to analyze and simulate the structural and aerodynamic aspects of the blades. The interactive nature of these models and their ability to recompute data using feedback from each other makes this code more efficient than the commercial codes available.
In this thesis we start off with the verification of these models by testing them on the well-known benchmark blade of the NREL-5MW Reference Wind Turbine, on an alternative fixed-speed stall-controlled blade design proposed by Delft University, and on a novel alternative design that we propose for a variable-speed stall-controlled turbine, which offers the potential for more uniform power control and improved annual energy production. To optimize the power output of the stall-controlled blade, we modify the existing designs and study their behavior using the aforementioned aeroelastic model.
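The classical BEM momentum balance that the thesis modifies can be illustrated for a single blade element. The airfoil model (thin-airfoil lift slope, zero drag), the under-relaxed fixed-point iteration, and all numerical values below are illustrative placeholders; none of the thesis's deflection, coning, or sweep corrections are included.

```python
import math

# Single-element sketch of the classical BEM induction-factor iteration;
# airfoil model and all numbers are illustrative placeholders.
B = 3                      # number of blades
r, R = 30.0, 60.0          # local and tip radius (m)
c = 3.0                    # chord (m)
twist = math.radians(5.0)  # local twist angle
U = 10.0                   # wind speed (m/s)
omega = 7.0 * U / R        # rotor speed for a tip-speed ratio of 7

sigma = B * c / (2 * math.pi * r)   # local solidity
a, ap = 0.3, 0.0                    # axial and tangential induction factors
for _ in range(200):
    phi = math.atan2((1 - a) * U, (1 + ap) * omega * r)  # inflow angle
    alpha = phi - twist
    Cl = 2 * math.pi * alpha        # thin-airfoil lift, no drag
    Cn = Cl * math.cos(phi)         # normal force coefficient
    Ct = Cl * math.sin(phi)         # tangential force coefficient
    a_new = 1.0 / (4 * math.sin(phi) ** 2 / (sigma * Cn) + 1)
    ap_new = 1.0 / (4 * math.sin(phi) * math.cos(phi) / (sigma * Ct) - 1)
    a, ap = 0.5 * (a + a_new), 0.5 * (ap + ap_new)  # under-relaxation
```

The iteration converges to the usual sub-unity induction factors; the thesis's contribution is to replace this rigid-geometry balance with one that tracks the deflected blade shape fed back from the structural model.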
Abstract:
Two factors that have been suggested as key to explaining individual differences in fluid intelligence are working memory and sensory discrimination ability. A latent variable approach was used to explore the relative contributions of these two variables to individual differences in fluid intelligence in middle to late childhood. A sample of 263 children aged 7–12 years was examined. Correlational analyses showed that general discrimination ability (GDA) and working memory (WM) were related to each other and to fluid intelligence. Structural equation modeling showed that within both the younger and older age groups and the sample as a whole, the relation between GDA and fluid intelligence could be accounted for by WM. While WM predicted variance in fluid intelligence above and beyond GDA, GDA did not explain significant variance in fluid intelligence, either in the whole sample or within the younger or older age group. We conclude that, compared with GDA, WM should be considered the better predictor of individual differences in fluid intelligence in childhood. WM and fluid intelligence, while not separable in middle childhood, develop at different rates and become more separable with age.
Abstract:
Three rhesus monkeys (Macaca mulatta) and four pigeons (Columba livia) were trained in a visual serial probe recognition (SPR) task. A list of visual stimuli (slides) was presented sequentially to the subjects. Following the list and after a delay interval, a probe stimulus was presented that could be either from the list (Same) or not from the list (Different). The monkeys readily acquired a variable list length SPR task, while the pigeons showed acquisition only under a constant list length condition. However, the monkeys memorized the responses to the probes (absolute strategy) when overtrained with the same lists and probes, while the pigeons compared the probe to the list in memory (relational strategy). Performance of the pigeons on the 4-item constant list length was disrupted when blocks of trials of different list lengths were embedded between the 4-item blocks. Serial position curves for recognition at variable probe delays showed better relative performance on the last items of the list at short delays (0–0.5 seconds) and better relative performance on the initial items of the list at long delays (6–10 seconds for the pigeons and 20–30 seconds for the monkeys and a human adolescent). The serial position curves also showed reliable primacy and recency effects at intermediate probe delays. The monkeys showed evidence of using a relational strategy in the variable probe delay task. The results are the first demonstration of relational serial probe recognition performance in an avian species and suggest similar underlying dynamic recognition memory mechanisms in primates and avians.
Abstract:
Polyethylene chains in the amorphous region between two crystalline lamellae M units apart are modeled as random walks with one-step memory on a cubic lattice between two absorbing boundaries. These walks avoid the two preceding steps, though they are not true self-avoiding walks. Systems of difference equations are introduced to calculate the statistics of the restricted random walks. They yield that the fraction of loops is (2M - 2)/(2M + 1), the fraction of ties 3/(2M + 1), the average length of loops 2M - 1/2, the average length of ties (2/3)M^2 + (2/3)M - 4/3, the average length of walks 3M - 3, the variance of the loop length (16/15)M^3 + O(M^2), the variance of the tie length (28/45)M^4 + O(M^3), and the variance of the walk length 2M^3 + O(M^2).
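The closed-form statistics quoted above obey a simple internal consistency check: the mean walk length must equal the mixture of the loop and tie means weighted by their fractions, and indeed (2M - 2)/(2M + 1) · (2M - 1/2) + 3/(2M + 1) · ((2/3)M^2 + (2/3)M - 4/3) simplifies to 3M - 3. A short exact-arithmetic sketch of this check (our own, using the formulas as stated):

```python
from fractions import Fraction as F

# Exact evaluation of the quoted loop/tie statistics for lamellae M units apart.
def walk_stats(M):
    M = F(M)
    return {
        "frac_loops": (2 * M - 2) / (2 * M + 1),
        "frac_ties": F(3) / (2 * M + 1),
        "mean_loop": 2 * M - F(1, 2),
        "mean_tie": F(2, 3) * M**2 + F(2, 3) * M - F(4, 3),
        "mean_walk": 3 * M - 3,
    }

s = walk_stats(10)
# The two fractions sum to one, and the overall mean is the weighted mixture.
check = s["frac_loops"] * s["mean_loop"] + s["frac_ties"] * s["mean_tie"]
```

Because the arithmetic is done with `Fraction`, the mixture identity holds exactly for every M, not merely to floating-point tolerance.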
Abstract:
The present study has assessed the replicative history and the residual replicative potential of human naive and memory T cells. Telomeres are unique terminal chromosomal structures whose length has been shown to decrease with cell division in vitro and with increased age in vivo for human somatic cells. We therefore assessed telomere length as a measure of the in vivo replicative history of naive and memory human T cells. Telomeric terminal restriction fragments were found to be 1.4 +/- 0.1 kb longer in CD4+ naive T cells than in memory cells from the same donors, a relationship that remained constant over a wide range of donor age. These findings suggest that the differentiation of memory cells from naive precursors occurs with substantial clonal expansion and that the magnitude of this expansion is, on average, similar over a wide range of age. In addition, when replicative potential was assessed in vitro, it was found that the capacity of naive cells for cell division was 128-fold greater as measured in mean population doublings than the capacity of memory cells from the same individuals. Human CD4+ naive and memory cells thus differ in in vivo replicative history, as reflected in telomeric length, and in their residual replicative capacity.
Abstract:
We previously showed that the working memory (WM) performance of subclinical checkers can be affected if they are presented with irrelevant but misleading information during the retention period (Harkin and Kessler, 2009, 2011). The present study differed from our previous research in three crucial aspects. Firstly, we employed ecologically valid stimuli in the form of electrical kitchen appliances on a kitchen countertop, to address previous criticism that our research with letters in locations may not have tapped into the primary concerns of checkers. Secondly, we tested whether these ecological stimuli would allow us to employ a simpler (un-blocked) design while obtaining similarly robust results. Thirdly, in Experiment 2 we improved the measure of confidence as a metacognitive variable by using a quantitative scale (0–100), which indeed revealed more robust effects that were quantitatively related to accuracy of performance. The task in the present study was to memorize four appliances, including their states (on/off) and their locations on the kitchen countertop. Memory accuracy was tested for the states of the appliances in Experiment 1 and for their locations in Experiment 2. Intermediate probes were identical in both experiments and were administered during retention on 66.7% of the trials, with 50% resolvable and 50% irresolvable/misleading probes. Experiment 1 showed a general impairment of high compared to low checkers, confirming the ecological validity of our stimuli. In Experiment 2 we observed the expected, more differentiated pattern: high checkers were not generally affected in their WM performance (i.e., no general capacity issue); instead, they showed a particular impairment in the misleading distractor-probe condition. Also, high checkers' confidence ratings were indicative of a general impairment in metacognitive functioning. We discuss how specific executive dysfunction and general metacognitive impairment may affect memory traces in the short and the long term.
Abstract:
Negative-ion mode electrospray ionization, ESI(-), with Fourier transform ion cyclotron resonance mass spectrometry (FT-ICR MS) was coupled to Partial Least Squares (PLS) regression and variable selection methods to estimate the total acid number (TAN) of Brazilian crude oil samples. Generally, ESI(-)-FT-ICR mass spectra present a resolving power of ca. 500,000 and a mass accuracy better than 1 ppm, producing a data matrix containing over 5700 variables per sample. These variables correspond to heteroatom-containing species detected as deprotonated molecules, [M - H](-) ions, identified primarily as naphthenic acids, phenols and carbazole analog species. The TAN values for all samples ranged from 0.06 to 3.61 mg of KOH g(-1). To facilitate spectral interpretation, three variable selection methods were studied: variable importance in the projection (VIP), interval partial least squares (iPLS) and elimination of uninformative variables (UVE). The UVE method seems the most appropriate for selecting important variables, reducing the number of variables to 183 and producing a root mean square error of prediction of 0.32 mg of KOH g(-1). By reducing the size of the data, it was possible to relate the selected variables to their corresponding molecular formulas, thus identifying the main chemical species responsible for the TAN values.
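The core machinery here, PLS regression plus a VIP-style importance score, can be sketched compactly. Below is a minimal single-response PLS (NIPALS algorithm) with VIP scores on synthetic data standing in for the m/z intensity matrix; this is our own sketch, not the paper's pipeline, and UVE/iPLS are not implemented.

```python
import numpy as np

# Minimal PLS1 via NIPALS, plus VIP scores; synthetic data stand in for
# the FT-ICR intensity matrix.
def pls1(X, y, n_comp):
    Xc, yc = X - X.mean(0), y - y.mean()
    E, f = Xc.copy(), yc.astype(float).copy()
    W, T, Q = [], [], []
    for _ in range(n_comp):
        w = E.T @ f
        w /= np.linalg.norm(w)           # normalized weight vector
        t = E @ w                        # scores
        q = (f @ t) / (t @ t)            # y-loading
        p = (E.T @ t) / (t @ t)          # X-loadings
        E -= np.outer(t, p)              # deflate X
        f -= q * t                       # deflate y
        W.append(w); T.append(t); Q.append(q)
    return np.array(W), np.array(T), np.array(Q)

def vip(W, T, Q):
    # Variable importance in the projection for a single response:
    # weights squared, averaged over components, weighted by the
    # y-variance each component explains.
    s = Q**2 * (T**2).sum(axis=1)
    p = W.shape[1]
    return np.sqrt(p * (s @ W**2) / s.sum())

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 30))                               # 30 "m/z variables"
y = 3.0 * X[:, 0] + 2.0 * X[:, 1] + 0.1 * rng.normal(size=100)  # "TAN" depends on 2 of them
scores = vip(*pls1(X, y, n_comp=2))
```

Variables with VIP above a threshold (commonly 1) are retained; here the two informative columns receive by far the largest scores, which is the behavior a selection method exploits to shrink 5700 variables to a tractable subset.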
Abstract:
Episodic memory is impaired in multiple sclerosis (MS) patients, possibly because of deficits in working memory (WM) functioning. If so, WM alterations should necessarily be found in patients with episodic memory deficits, but this has not yet been demonstrated. In this study we aimed at determining whether episodic memory deficits in relapsing-remitting MS are found in conjunction with impaired WM. We evaluated 32 MS patients and 32 matched healthy controls. Nineteen of the 32 patients had episodic memory impairment, and as a group only these individuals showed deficits in WM capacity, which may lead to difficulty in encoding, and/or retrieving information from episodic memory.
Abstract:
This paper deals with the long run average continuous control problem of piecewise deterministic Markov processes (PDMPs) taking values in a general Borel space and with compact action space depending on the state variable. The control variable acts on the jump rate and transition measure of the PDMP, and the running and boundary costs are assumed to be positive but not necessarily bounded. Our first main result is to obtain an optimality equation for the long run average cost in terms of a discrete-time optimality equation related to the embedded Markov chain given by the postjump location of the PDMP. Our second main result guarantees the existence of a feedback measurable selector for the discrete-time optimality equation by establishing a connection between this equation and an integro-differential equation. Our final main result is to obtain some sufficient conditions for the existence of a solution for a discrete-time optimality inequality and an ordinary optimal feedback control for the long run average cost using the so-called vanishing discount approach. Two examples are presented illustrating the possible applications of the results developed in the paper.
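For orientation, the discrete-time average-cost optimality equation referred to above has, in generic semi-Markov form, the following shape. The notation here (rho the optimal long-run average cost, h a relative value function, tau(x,a) the expected inter-jump time, c the one-stage cost, Q the post-jump transition kernel) is ours, chosen for illustration, and is not taken from the paper:

```latex
h(x) \;=\; \min_{a \in \mathbf{A}(x)} \Big[\, c(x,a) \;-\; \rho\,\tau(x,a)
      \;+\; \int_X h(y)\, Q(\mathrm{d}y \mid x, a) \Big]
```

A measurable selector for the minimization on the right-hand side is what yields the optimal feedback control for the long-run average cost.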
Abstract:
This study aimed to describe the benefits of memory training for older adults with low education. Twenty-nine healthy older adults with zero to two years of formal education participated. Sixteen participants received training based on categorization (categorization group = CATG) and 13 received training based on mental images (imagery group = IMG). One group served as control for the other because they trained with different strategies. Training was offered in eight sessions of 90 minutes. The participants were evaluated pre- and posttraining. IMG improved performance in episodic memory tests and had reduced depressive symptoms. CATG increased the use of categorization but did not increase performance in episodic memory tests. Results suggest that the strategy based on the creation of mental images was more effective for older adults with low formal education.
Abstract:
This work extends a previously presented refined sandwich beam finite element (FE) model to vibration analysis, including dynamic piezoelectric actuation and sensing. The mechanical model is a refinement of the classical sandwich theory (CST), in which the core is modelled with a third-order shear deformation theory (TSDT). Along the beam length, the FE model assumes, electrically, a constant voltage for the piezoelectric layers and a quadratic interpolation of the third-order electric potential variable in the core, and, mechanically, linear axial displacement, quadratic bending rotation of the core, and cubic transverse displacement of the sandwich beam. Despite the refinement of the mechanical and electric behaviours of the piezoelectric core, the model leads to the same number of degrees of freedom as the previous CST one, thanks to a two-step static condensation of the internal dof (bending rotation and the core electric potential third-order variable). The results obtained with the proposed FE model are compared to available numerical, analytical and experimental ones. The results confirm that the TSDT and the induced cubic electric potential yield an extra stiffness for the sandwich beam.
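The static condensation invoked above rests on a standard Schur-complement reduction: internal degrees of freedom are expressed in terms of the retained ones, giving a smaller stiffness matrix with the same static response at the retained dof when no loads act on the condensed ones. A generic sketch (not the paper's element matrices):

```python
import numpy as np

def condense(K, retained, internal):
    # Schur complement of the internal block: eliminates the internal
    # dof while preserving the static response at the retained dof.
    Kbb = K[np.ix_(retained, retained)]
    Kbi = K[np.ix_(retained, internal)]
    Kib = K[np.ix_(internal, retained)]
    Kii = K[np.ix_(internal, internal)]
    return Kbb - Kbi @ np.linalg.solve(Kii, Kib)

# Three-dof spring chain; condense out the middle (internal) dof.
K = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])
Kred = condense(K, retained=[0, 2], internal=[1])

f = np.array([1.0, 0.0, 1.0])            # no load on the condensed dof
u_full = np.linalg.solve(K, f)           # full 3-dof solution
u_red = np.linalg.solve(Kred, f[[0, 2]]) # reduced 2-dof solution
```

The reduced solution reproduces the full solution at the retained dof exactly, which is why the refined model can carry its extra internal variables without increasing the element's external dof count.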
Abstract:
Objectives - A highly adaptive aspect of human memory is the enhancement of explicit, consciously accessible memory by emotional stimuli. We studied the performance of Alzheimer's disease (AD) patients and elderly controls on a memory battery with emotional content, and we correlated these results with amygdala and hippocampus volumes. Methods - Twenty controls and 20 early AD patients were subjected to the International Affective Picture System (IAPS) and to magnetic resonance imaging-based volumetric measurements of the medial temporal lobe structures. Results - After excluding control-group subjects with 5 or more years of schooling, both groups showed improvement for pleasant or unpleasant pictures on the IAPS in an immediate free recall test. Likewise, in a delayed free recall test, both the controls and the AD group showed improvement for pleasant pictures when the education factor was not controlled. The AD group showed improvement in the immediate and delayed free recall tests proportional to the volume of the medial temporal lobe structures, with no significant clinical correlation between affective valence and amygdala volume. Conclusion - AD patients can correctly identify emotions, at least at this early stage, but this does not improve their memory performance.
Abstract:
A significant problem in the collection of responses to potentially sensitive questions, such as those relating to illegal, immoral or embarrassing activities, is non-sampling error due to refusal to respond or false responses. Eichhorn & Hayre (1983) suggested the use of scrambled responses to reduce this form of bias. This paper considers a linear regression model in which the dependent variable is unobserved, but its sum or product with a scrambling random variable of known distribution is known. The performance of two likelihood-based estimators is investigated: a Bayesian estimator obtained through a Markov chain Monte Carlo (MCMC) sampling scheme, and a classical maximum-likelihood estimator. These two estimators and an estimator suggested by Singh, Joarder & King (1996) are compared. Monte Carlo results show that the Bayesian estimator outperforms the classical estimators in almost all cases, and that its relative performance improves as the responses become more scrambled.
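The multiplicative scrambling model can be illustrated with a simple moment-based estimator (deliberately not the paper's Bayesian MCMC or maximum-likelihood estimators): if the observed response is z = y * s with the scrambler s independent of the covariates and E[s] = mu_s known, then E[z | x] = mu_s * x'beta, so ordinary least squares of z / mu_s on x recovers beta. All data below are synthetic.

```python
import numpy as np

# Moment-based recovery of regression coefficients from multiplicatively
# scrambled responses; synthetic data, for illustration only.
rng = np.random.default_rng(0)
n = 20000
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=n)  # true sensitive responses (unobserved)
s = rng.uniform(0.5, 1.5, size=n)                  # scrambler: known distribution, E[s] = 1
z = y * s                                          # only z and x are released to the analyst

mu_s = 1.0                                         # known mean of the scrambling variable
X = np.column_stack([np.ones(n), x])
beta_hat, *_ = np.linalg.lstsq(X, z / mu_s, rcond=None)
```

The estimate is unbiased but noisier than a regression on the unscrambled y, because the scrambler inflates the residual variance; this efficiency loss is exactly what the likelihood-based estimators compared in the paper aim to mitigate.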