921 results for Decoding complexity


Relevance: 20.00%

Publisher:

Abstract:

Global complexity of spontaneous brain electric activity was studied before and after chewing gum without flavor and with 2 different flavors. One-minute, 19-channel, eyes-closed electroencephalograms (EEG) were recorded from 20 healthy males before and after using 3 types of chewing gum: regular gum containing sugar and aromatic additives, gum containing 200 mg theanine (a constituent of Japanese green tea), and gum base (no sugar, no aromatic additives); each was chewed for 5 min in randomized sequence. Brain electric activity was assessed through Global Omega (Ω)-Complexity and Global Dimensional Complexity (GDC), quantitative measures of the complexity of the trajectory of EEG map series in state space; their differences from pre-chewing data were compared across gum-chewing conditions. A Friedman ANOVA (p < 0.043) showed that effects on Ω-Complexity differed significantly between conditions, with maximal differences between gum base and theanine gum. No differences were found using GDC. Global Omega-Complexity thus appears to be a sensitive measure for subtle, central effects of chewing gum with and without flavor.
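For reference, Global Omega-Complexity in the sense used above is usually computed (following Wackermann's definition) as the exponential of the Shannon entropy of the normalized eigenvalue spectrum of the spatial covariance matrix of the multichannel EEG. A minimal Python sketch, assuming an average-referenced epoch laid out as samples × channels; the function name and interface are illustrative, not the authors' analysis code:

```python
import numpy as np

def omega_complexity(eeg):
    """Global Omega-complexity of one EEG epoch.

    eeg: array of shape (n_samples, n_channels).
    Omega = exp(H), where H is the Shannon entropy of the normalized
    eigenvalues of the spatial covariance matrix. Omega is ~1 when a
    single spatial mode dominates and approaches n_channels when the
    channels are uncorrelated.
    """
    eeg = eeg - eeg.mean(axis=0)           # remove per-channel means
    cov = (eeg.T @ eeg) / len(eeg)         # spatial covariance matrix
    lam = np.clip(np.linalg.eigvalsh(cov), 0.0, None)
    p = lam / lam.sum()                    # normalized eigenvalue spectrum
    p = p[p > 0]
    return float(np.exp(-(p * np.log(p)).sum()))
```

Fully synchronized channels yield Ω ≈ 1, while independent channels push Ω toward the channel count, which is why the measure is sensitive to global changes in spatial synchronization.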

Abstract:

Introduction: Nocturnal dreams can be considered a kind of simulation of the real world on a higher cognitive level (Erlacher & Schredl, 2008). In lucid dreams, the dreamer is aware of the dream state and thus able to control the ongoing dream content. Previous studies demonstrated that it is possible to practice motor tasks during lucid dreams and that doing so improves performance while awake (Erlacher & Schredl, 2010). Even though lucid dream practice might be a promising kind of cognitive rehearsal in sports, little is known about the characteristics of actions in lucid dreams. The purpose of the present study was to explore the relationship between time in dreams and wakefulness, because in an earlier study (Erlacher & Schredl, 2004) we found that performing squats took lucid dreamers 44.5% more time than in the waking state, while for counting the same participants showed no differences between dreaming and wakefulness. To find out whether task modality, task length or task complexity requires longer times in lucid dreams than in wakefulness, three experiments were conducted. Methods: In the first experiment, five proficient lucid dreamers spent two to three non-consecutive nights in the sleep laboratory with polysomnographic recording to verify REM sleep and detect eye signals. Participants counted from 1 to 10, 1 to 20 and 1 to 30 in wakefulness and in their lucid dreams. While dreaming, they marked the onset of lucidity as well as the beginning and end of the counting task with left-right-left-right eye movements and reported their dreams after being awakened. The same procedure was used for the second experiment with seven lucid dreamers, except that they had to walk 10, 20 or 30 steps. In the third experiment, nine participants performed an exercise involving gymnastics elements such as various jumps and a roll. To control for task length, the gymnastic exercise in the waking state lasted about the same time as walking 10 steps.
Results: As a general result we found, as in the earlier study, that performing a task in a lucid dream requires more time than in wakefulness. This tendency was found for all three tasks. However, there was no difference for task modality (counting vs. motor task). The relative times for the different task lengths also showed no difference. Finally, the more complex motor task (gymnastic routine) did not require more time in lucid dreams than the simple motor task. Discussion/Conclusion: The results show a robust effect of time in lucid dreams compared to wakefulness. The three experiments could not attribute those differences to task modality, task length or task complexity. Therefore, further possible candidates need to be investigated, e.g. experience in lucid dreaming or psychological variables. References: Erlacher, D. & Schredl, M. (2010). Practicing a motor task in a lucid dream enhances subsequent performance: A pilot study. The Sport Psychologist, 24(2), 157-167. Erlacher, D. & Schredl, M. (2008). Do REM (lucid) dreamed and executed actions share the same neural substrate? International Journal of Dream Research, 1(1), 7-13. Erlacher, D. & Schredl, M. (2004). Time required for motor activity in lucid dreams. Perceptual and Motor Skills, 99, 1239-1242.

Abstract:

CONTEXT The necessity of specific intervention components for the successful treatment of patients with posttraumatic stress disorder is the subject of controversy. OBJECTIVE To investigate the complexity of clinical problems as a moderator of relative effects between specific and nonspecific psychological interventions. METHODS We included 18 randomized controlled trials, directly comparing specific and nonspecific psychological interventions. We conducted moderator analyses, including the complexity of clinical problems as predictor. RESULTS Our results have confirmed the moderate superiority of specific over nonspecific psychological interventions; however, the superiority was small in studies with complex clinical problems and large in studies with noncomplex clinical problems. CONCLUSIONS For patients with complex clinical problems, our results suggest that particular nonspecific psychological interventions may be offered as an alternative to specific psychological interventions. In contrast, for patients with noncomplex clinical problems, specific psychological interventions are the best treatment option.

Abstract:

We investigate the problem of distributed detection of sensor failures in networks with a small number of defective sensors, whose measurements differ significantly from those of their neighbors. We build on the sparse nature of the binary sensor-failure signals to propose a novel distributed detection algorithm based on gossip mechanisms and on Group Testing (GT), where the latter has so far been used in centralized detection problems. The new distributed GT algorithm estimates the set of scattered defective sensors with a low-complexity distance decoder from a small number of linearly independent binary messages exchanged by the sensors. We first consider networks with one defective sensor and determine the minimal number of linearly independent messages needed for its detection with high probability. We then extend our study to the detection of multiple defective sensors by appropriately modifying the message exchange protocol and the decoding procedure. We show that, for small and medium-sized networks, the number of messages required for successful detection is actually smaller than the minimal number computed theoretically. Finally, simulations demonstrate that the proposed method outperforms methods based on random walks in terms of both detection performance and convergence rate.
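The flavor of low-complexity distance decoding mentioned above can be sketched for the noiseless, single-defective case of classical (centralized) group testing; this is an illustration of the general GT idea, not the paper's distributed gossip protocol. Each test ORs the failure bits of the sensors it covers, and the defective sensor is the one whose column of test-participation bits lies closest in Hamming distance to the outcome vector:

```python
import numpy as np

def gt_distance_decode(masks, outcomes):
    """masks: (num_tests, num_sensors) 0/1 matrix; masks[t, s] = 1 if
    sensor s participates in test t. outcomes: (num_tests,) 0/1 vector
    of OR results. Returns the sensor whose participation column
    minimizes the Hamming distance to the outcomes (an exact match in
    the noiseless case with exactly one defective sensor)."""
    dists = np.abs(masks - outcomes[:, None]).sum(axis=0)
    return int(dists.argmin())

# Toy run: 8 sensors, 3 tests, mask columns = binary codes of the sensor
# indices, so every column is distinct and the decoder is unambiguous.
masks = (np.arange(8)[None, :] >> np.arange(3)[:, None]) & 1
outcomes = masks[:, 5]          # sensor 5 is the only defective one
print(gt_distance_decode(masks, outcomes))   # -> 5
```

With T tests this construction distinguishes up to 2^T single-defective hypotheses, which is the logarithmic scaling that makes group testing attractive when failures are sparse.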

Abstract:

This paper considers a framework where data from correlated sources are transmitted with the help of network coding in ad hoc network topologies. The correlated data are encoded independently at the sensors, and network coding is employed at the intermediate nodes in order to improve the data delivery performance. In such settings, we focus on the problem of reconstructing the sources at the decoder when perfect decoding is not possible due to losses or bandwidth variations. We show that the similarity of the source data can be exploited at the decoder to permit decoding based on a novel and simple approximate decoding scheme. We analyze the influence of the network coding parameters, and in particular the size of the finite coding fields, on the decoding performance. We further determine the optimal field size that maximizes the expected decoding performance as a trade-off between the information loss incurred by limiting the resolution of the source data and the error probability in the reconstructed data. Moreover, we show that the performance of approximate decoding improves when the accuracy of the source model increases, even with simple approximate decoding techniques. We provide illustrative examples showing how the proposed algorithm can be deployed in sensor networks and distributed imaging applications.
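The gist of approximate decoding, and of the field-size trade-off, can be shown with a deliberately tiny example of our own construction (not the paper's scheme): one coded symbol y = (a1·x1 + a2·x2) mod q arrives for two correlated sources, so the system is rank-deficient; the similarity prior x1 ≈ x2 collapses it to a single unknown, which is recoverable only if the coded sum did not wrap around the field size q:

```python
def approx_decode_pair(a1, a2, y, q):
    """Approximate decoding of y = (a1*x1 + a2*x2) mod q when only one
    of the two packets needed for exact decoding was received. The
    prior x1 ~ x2 gives (a1 + a2) * x = y, solved in the real domain;
    the estimate is meaningful only if a1*x1 + a2*x2 < q (no
    wrap-around), which is why a larger field protects the
    approximation while a smaller one preserves fewer resolution bits."""
    return round(y / (a1 + a2)) % q

# Correlated sources x1 = 20, x2 = 22, coefficients a1 = 3, a2 = 5.
print(approx_decode_pair(3, 5, (3*20 + 5*22) % 257, 257))  # -> 21 (close to both)
print(approx_decode_pair(3, 5, (3*20 + 5*22) % 128, 128))  # -> 5  (sum wrapped)
```

With q = 257 the coded sum 170 fits in the field and the estimate 21 is within one unit of both sources; with q = 128 the same sum wraps to 42 and the estimate collapses, which is exactly the trade-off behind the optimal field size discussed above.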

Abstract:

Both historical and idealized climate model experiments are performed with a variety of Earth system models of intermediate complexity (EMICs) as part of a community contribution to the Intergovernmental Panel on Climate Change Fifth Assessment Report. Historical simulations start at 850 CE and continue through to 2005. The standard simulations include changes in forcing from solar luminosity, Earth's orbital configuration, CO2, additional greenhouse gases, land use, and sulphate and volcanic aerosols. In spite of very different modelled pre-industrial global surface air temperatures, overall 20th century trends in surface air temperature and carbon uptake are reasonably well simulated when compared to observed trends. Land carbon fluxes show much more variation between models than ocean carbon fluxes, and recent land fluxes appear to be slightly underestimated. It is possible that recent modelled climate trends or climate–carbon feedbacks are overestimated resulting in too much land carbon loss or that carbon uptake due to CO2 and/or nitrogen fertilization is underestimated. Several one thousand year long, idealized, 2 × and 4 × CO2 experiments are used to quantify standard model characteristics, including transient and equilibrium climate sensitivities, and climate–carbon feedbacks. The values from EMICs generally fall within the range given by general circulation models. Seven additional historical simulations, each including a single specified forcing, are used to assess the contributions of different climate forcings to the overall climate and carbon cycle response. The response of surface air temperature is the linear sum of the individual forcings, while the carbon cycle response shows a non-linear interaction between land-use change and CO2 forcings for some models. Finally, the preindustrial portions of the last millennium simulations are used to assess historical model carbon-climate feedbacks. 
Given the specified forcing, there is a tendency for the EMICs to underestimate the drop in surface air temperature and CO2 between the Medieval Climate Anomaly and the Little Ice Age estimated from palaeoclimate reconstructions. This in turn could be a result of unforced variability within the climate system, uncertainty in the reconstructions of temperature and CO2, errors in the reconstructions of forcing used to drive the models, or the incomplete representation of certain processes within the models. Given the forcing datasets used in this study, the models calculate significant land-use emissions over the pre-industrial period. This implies that land-use emissions might need to be taken into account, when making estimates of climate–carbon feedbacks from palaeoclimate reconstructions.

Abstract:

Myxobacteria are single-celled, but social, eubacterial predators. Upon starvation they build multicellular fruiting bodies using a developmental program that progressively changes the pattern of cell movement and the repertoire of genes expressed. Development terminates with spore differentiation and is coordinated by both diffusible and cell-bound signals. The growth and development of Myxococcus xanthus is regulated by the integration of multiple signals from outside the cells with physiological signals from within. A collection of M. xanthus cells behaves, in many respects, like a multicellular organism. For these reasons M. xanthus offers unparalleled access to a regulatory network that controls development and that organizes cell movement on surfaces. The genome of M. xanthus is large (9.14 Mb), considerably larger than the other sequenced delta-proteobacteria. We suggest that gene duplication and divergence were major contributors to genomic expansion from its progenitor. More than 1,500 duplications specific to the myxobacterial lineage were identified, representing >15% of the total genes. Genes were not duplicated at random; rather, genes for cell-cell signaling, small molecule sensing, and integrative transcription control were amplified selectively. Families of genes encoding the production of secondary metabolites are overrepresented in the genome but may have been received by horizontal gene transfer and are likely to be important for predation.

Abstract:

Intensity modulated radiation therapy (IMRT) is a technique that delivers a highly conformal dose distribution to a target volume while attempting to maximally spare the surrounding normal tissues. IMRT is a common treatment modality used for treating head and neck (H&N) cancers, and the presence of many critical structures in this region requires accurate treatment delivery. The Radiological Physics Center (RPC) acts as both a remote and on-site quality assurance agency that credentials institutions participating in clinical trials. To date, about 30% of all IMRT participants have failed the RPC’s remote audit using the IMRT H&N phantom. The purpose of this project is to evaluate possible causes of H&N IMRT delivery errors observed by the RPC, specifically IMRT treatment plan complexity and the use of improper dosimetry data from machines that were thought to be matched but in reality were not. Eight H&N IMRT plans with a range of complexity defined by total MU (1460-3466), number of segments (54-225), and modulation complexity scores (MCS) (0.181-0.609) were created in Pinnacle v.8m. These plans were delivered to the RPC’s H&N phantom on a single Varian Clinac. One of the IMRT plans (1851 MU, 88 segments, and MCS=0.469) was equivalent to the median H&N plan from 130 previous RPC H&N phantom irradiations. This average IMRT plan was also delivered on four matched Varian Clinac machines and the dose distribution calculated using a different 6 MV beam model. Radiochromic film and TLD within the phantom were used to analyze the dose profiles and absolute doses, respectively. The measured and calculated dose distributions were compared to evaluate the dosimetric accuracy. All deliveries met the RPC acceptance criteria of ±7% absolute dose difference and 4 mm distance-to-agreement (DTA). Additionally, gamma index analysis was performed for all deliveries using ±7%/4 mm and ±5%/3 mm criteria.
Increasing the treatment plan complexity by varying the MU, the number of segments, or the MCS resulted in no clear trend toward an increase in dosimetric error as determined by the absolute dose difference, DTA, or gamma index. Varying the delivery machine and the beam model (a Clinac 6EX 6 MV beam model vs. a Clinac 21EX 6 MV model) likewise showed no clear trend toward increased dosimetric error using the same criteria.

Abstract:

Involvement of E. coli 23S ribosomal RNA (rRNA) in decoding of termination codons was first indicated by the characterization of a 23S rRNA mutant that causes UGA-specific nonsense suppression. The work described here was begun to test the hypothesis that more 23S rRNA suppressors of specific nonsense mutations can be isolated and that they would occur non-randomly in the rRNA genes and be clustered in specific, functionally significant regions of rRNA. Approximately 2 kilobases of the gene for 23S rRNA were subjected to PCR random mutagenesis and the amplified products screened for suppression of nonsense mutations in trpA. All of the suppressor mutations obtained were located in a thirty-nucleotide part of the GTPase center, a conserved rRNA sequence and structure, and they and others made in that region by site-directed mutagenesis were shown to be UGA-specific in their suppression of termination codon mutations. These results proved the initial hypothesis and demonstrated that a group of nucleotides in this region are involved in decoding of the UGA termination codon.
Further, it was shown that limitation of cellular availability or synthesis of L11, a ribosomal protein that binds to the GTPase center rRNA, resulted in suppression of termination codon mutations, suggesting the direct involvement of L11 in termination in vivo. Finally, in vivo analysis of certain site-specific mutations made in the GTPase center RNA demonstrated that (a) the G·A base pair closing the hexanucleotide hairpin loop was not essential for normal termination, (b) the "U-turn" structure in the 1093 to 1098 hexaloop is critical for normal termination, (c) nucleotides A1095 and A1067, necessary for the binding to ribosomes of thiostrepton, an antibiotic that inhibits polypeptide release factor binding to ribosomes in vitro, are also necessary for normal peptide chain termination in vivo, and (d) involvement of this region of rRNA in termination is determined by some unique subset structure that includes particular nucleotides rather than merely by a general structural feature of the GTPase center. This work advances the understanding of peptide chain termination by demonstrating that the GTPase region of 23S rRNA participates in recognition of termination codons, through an associated ribosomal protein and specific conserved nucleotides and structural motifs in its RNA.

Abstract:

Two regions in the 3′ domain of 16S rRNA (the RNA of the small ribosomal subunit) have been implicated in decoding of termination codons. Using segment-directed PCR random mutagenesis, I isolated 33 translational suppressor mutations in the 3′ domain of 16S rRNA. Characterization of the mutations by both genetic and biochemical methods indicated that some of the mutations are defective in UGA-specific peptide chain termination and that others may be defective in peptide chain termination at all termination codons. The studies of the mutations at an internal loop in the non-conserved region of helix 44 also indicated that this structure, in a non-conserved region of 16S rRNA, is involved in both peptide chain termination and assembly of 16S rRNA. With a suppressible trpA UAG nonsense mutation, a spontaneously arising translational suppressor mutation was isolated in the rrnB operon cloned into a pBR322-derived plasmid. The mutation caused suppression of UAG at two codon positions in trpA but did not suppress UAA or UGA mutations at the same trpA positions. The specificity of the rRNA suppressor mutation suggests that it may cause a defect in UAG-specific peptide chain termination. The mutation is a single nucleotide deletion (G2484Δ) in helix 89 of 23S rRNA (the large RNA of the large ribosomal subunit). The result indicates a functional interaction between two regions of 23S rRNA. Furthermore, it provides suggestive in vivo evidence for the involvement of the peptidyl-transferase center of 23S rRNA in peptide chain termination. The Δ2484 and A1093/Δ2484 (double) mutations were also observed to alter the decoding specificity of the suppressor tRNA lysT(U70), which has a mutation in its acceptor stem.
That result suggests that there is an interaction between the stem-loop region of helix 89 of 23S rRNA and the acceptor stem of tRNA during decoding and that the interaction is important for the decoding specificity of tRNA. Using gene manipulation procedures, I have constructed a new expression vector to express and purify the cellular protein factors required for a recently developed, realistic in vitro termination assay. The gene for each protein was cloned into the newly constructed vector in such a way that expression yielded a protein with an N-terminal affinity tag, for specific, rapid purification. The amino terminus was engineered so that, after purification, the unwanted N-terminal tag can be completely removed from the protein by thrombin cleavage, yielding a natural amino acid sequence for each protein. I have cloned the genes for EF-G and all three release factors into this new expression vector and the genes for all the other protein factors into a pCAL-n expression vector. These constructs will allow our laboratory group to quickly and inexpensively purify all the protein factors needed for the new in vitro termination assay. (Abstract shortened by UMI.)

Abstract:

Intra-session network coding has been shown to offer significant gains in terms of achievable throughput and delay in settings where one source multicasts data to several clients. In this paper, we consider a more general scenario where multiple sources transmit data to sets of clients over a wireline overlay network. We propose a novel framework for efficient rate allocation in networks where intermediate network nodes have the opportunity to combine packets from different sources using randomized network coding. We formulate the problem as the minimization of the average decoding delay in the client population and solve it with a gradient-based stochastic algorithm. Our optimized inter-session network coding solution is evaluated in different network topologies and is compared with basic intra-session network coding solutions. Our results show the benefits of proper coding decisions and effective rate allocation for lowering the decoding delay when the network is used by concurrent multicast sessions.
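The rate-allocation step can be illustrated with a small projected-gradient toy (an M/M/1-style delay surrogate of our own; the paper's stochastic algorithm over coded multicast sessions is more involved): split a fixed link capacity among sessions so that the average delay 1/(r_i − demand_i) is minimized.

```python
import numpy as np

def allocate_rates(capacity, demand, steps=5000, lr=0.01):
    """Toy projected-gradient rate allocation. Each session i gets rate
    r_i >= demand_i; the surrogate delay of session i is
    1 / (r_i - demand_i), and rates are constrained to sum to
    `capacity`. Illustrative only: the function name, delay model, and
    constants are assumptions, not taken from the paper."""
    demand = np.asarray(demand, dtype=float)
    n = len(demand)
    slack = capacity - demand.sum()
    assert slack > 0, "total demand exceeds capacity"
    r = demand + slack * demand / demand.sum()   # feasible (non-optimal) start
    for _ in range(steps):
        grad = -1.0 / (r - demand) ** 2          # gradient of 1/(r - demand)
        r = r - lr * grad                        # descend on total delay
        r += (capacity - r.sum()) / n            # project onto sum(r) = capacity
        r = np.maximum(r, demand + 1e-6)         # keep delays finite
    return r

print(allocate_rates(8.0, [1.0, 3.0]))  # -> approx. [3. 5.]
```

For this symmetric convex delay model the optimum splits the slack capacity equally, so the iterate moves from the proportional start [2, 6] toward [3, 5]; in the paper, the gradient is instead estimated stochastically from observed decoding delays in the client population.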

Abstract:

The relationship between time in dreams and real time has intrigued scientists for centuries. The question of whether actions in dreams take the same time as in wakefulness can be tested using lucid dreams, where the dreamer is able to mark time intervals with prearranged eye movements that can be objectively identified in EOG recordings. Previous research showed an equivalence of time for counting in lucid dreams and in wakefulness (LaBerge, 1985; Erlacher and Schredl, 2004), but Erlacher and Schredl (2004) found that performing squats required about 40% more time in lucid dreams than in the waking state. To find out if the task modality, the task length, or the task complexity results in prolonged times in lucid dreams, an experiment with three different conditions was conducted. In the first condition, five proficient lucid dreamers spent one to three non-consecutive nights in the sleep laboratory. Participants counted to 10, 20, and 30 in wakefulness and in their lucid dreams. Lucidity and task intervals were time-stamped with left-right-left-right eye movements. The same procedure was used for the second condition, where eight lucid dreamers had to walk 10, 20, or 30 steps. In the third condition, eight lucid dreamers performed a gymnastics routine, which in the waking state lasted the same time as walking 10 steps. Again, we found that performing a motor task in a lucid dream requires more time than in wakefulness. Longer durations in the dream state were present for all three tasks, but significant differences were found only for the tasks with motor activity (walking and gymnastics). However, no difference was found for relative times (no disproportional time effects), and a more complex motor task did not result in more prolonged times. Longer durations in lucid dreams might be related to the lack of muscular feedback or slower neural processing during REM sleep. Future studies should explore factors that might be associated with prolonged durations.

Abstract:

Species adapted to cold-climatic mountain environments are expected to face a high risk of range contractions, if not local extinctions, under climate change. Yet the populations of many endothermic species may not be primarily affected by physiological constraints, but indirectly by climate-induced changes of habitat characteristics. In mountain forests, where vertebrate species largely depend on vegetation composition and structure, deteriorating habitat suitability may thus be mitigated or even compensated by habitat management aiming at compositional and structural enhancement. We tested this possibility using four cold-adapted bird species with complementary habitat requirements as model organisms. Based on species data and environmental information collected in 300 1-km² grid cells distributed across four mountain ranges in central Europe, we investigated (1) how species’ occurrence is explained by climate, landscape, and vegetation, (2) to what extent climate change and climate-induced vegetation changes will affect habitat suitability, and (3) whether these changes could be compensated by adaptive habitat management. Species presence was modelled as a function of climate, landscape and vegetation variables under current climate; moreover, vegetation-climate relationships were assessed. The models were extrapolated to the climatic conditions of 2050, assuming the moderate IPCC scenario A1B, and changes in species’ occurrence probability were quantified. Finally, we assessed the maximum increase in occurrence probability that could be achieved by modifying one or multiple vegetation variables under altered climate conditions. Climate variables contributed significantly to explaining species occurrence, and expected climatic changes, as well as climate-induced vegetation trends, decreased the occurrence probability of all four species, particularly at the low-altitudinal margins of their distribution.
These effects could be partly compensated by modifying single vegetation factors, but full compensation would only be achieved if several factors were changed in concert. The results illustrate the possibilities and limitations of adaptive species conservation management under climate change.