926 results for Spin-Crossover


Abstract:

This study evaluated the administration-time-dependent effects of a stimulant (Dexedrine, 5 mg), a sleep inducer (Halcion, 0.25 mg), and placebo (control) on human performance. The investigation was conducted on 12 diurnally active (0700-2300) male adults (23-38 yr) using a double-blind, randomized, six-way crossover, three-treatment, two-timepoint (0830 vs. 2030) design. Performance tests were conducted hourly during sleepless 13-hour sessions using a computer-generated, computer-controlled, and computer-scored multi-task cognitive performance assessment battery (PAB) developed at the Walter Reed Army Institute of Research. Specific tests were Simple and Choice Reaction Time, Serial Addition/Subtraction, Spatial Orientation, Logical Reasoning, Time Estimation, Response Timing, and the Stanford Sleepiness Scale. The major index of performance was "Throughput", a combined measure of speed and accuracy.

For the placebo condition, single and group cosinor analysis documented circadian rhythms in cognitive performance for the majority of tests, both for individuals and for the group. Performance was best around 1830-2030 and most variable around 0530-0700; sleepiness was greatest around 0300.

Morning Dexedrine dosing marginally enhanced performance, by an average of 3% relative to the time-matched control level, and increased alertness by 10% over the AM control. Evening Dexedrine failed to improve performance relative to the time-matched PM control baseline. Comparing AM with PM Dexedrine administration, AM performance was 6% better and subjects were 25% more alert.

Morning Halcion administration caused a 7% performance decrement and a 16% increase in sleepiness; evening administration caused a 13% decrement and a 10% increase in sleepiness, relative to time-matched control data. Performance was 9% worse and sleepiness 24% greater after evening versus morning Halcion administration.

These results suggest that, for evening Halcion dosing, the overnight sleep deprivation coinciding with the circadian nadir in performance combines with the drug's CNS-depressant effects to degrade performance. For Dexedrine, morning administration produced only marginal performance enhancement, and evening administration was less effective, suggesting that the 5-mg dose may be too low to counteract the partial sleep deprivation and the nocturnal nadir in performance.
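To make the cosinor approach concrete, the sketch below fits the single-cosinor model y(t) = M + A·cos(2πt/24 + φ) to hourly scores by ordinary least squares, using the standard linear reparameterization y = M + b·cos(ωt) + c·sin(ωt). The hourly data, the function name, and the parameter choices are illustrative assumptions, not the study's PAB software or its statistical code.

```python
import numpy as np

def fit_single_cosinor(t_hours, y, period=24.0):
    """Least-squares fit of y(t) = M + A*cos(2*pi*t/period + phi)."""
    w = 2 * np.pi / period
    X = np.column_stack([np.ones_like(t_hours), np.cos(w * t_hours), np.sin(w * t_hours)])
    mesor, b, c = np.linalg.lstsq(X, y, rcond=None)[0]
    amplitude = np.hypot(b, c)            # A = sqrt(b^2 + c^2)
    acrophase = np.arctan2(-c, b)         # phi (rad); performance peaks at t = -phi/w
    return mesor, amplitude, acrophase

# Synthetic hourly "Throughput" scores from one 13-hour placebo session (0800-2000),
# with a circadian peak placed near 1900 plus noise.
t = np.arange(8.0, 21.0)
y = 100 + 8 * np.cos(2 * np.pi * (t - 19) / 24) + np.random.default_rng(1).normal(0, 2, t.size)
print(fit_single_cosinor(t, y))
```

Group cosinor analysis essentially applies the same fit per subject and then summarizes the fitted parameters across subjects.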

Abstract:

Few recent estimates of childhood asthma incidence exist in the literature, although the importance of incidence surveillance for understanding asthma risk factors has been recognized. Asthma prevalence, morbidity, and mortality reports have repeatedly shown that low-income children are disproportionately affected by the disease. The aim of this study was to demonstrate the utility of Medicaid claims data for providing statewide estimates of asthma incidence. Medicaid Analytic Extract (MAX) data for Texas children ages 0-17 enrolled in Medicaid between 2004 and 2007 were used to estimate incidence overall and by age group, gender, race, and county of residence. A period of at least 13 months of continuous enrollment was required in order to distinguish incident from prevalent cases identified in the claims data. Age-adjusted asthma incidence was 4.26/100 person-years during 2005-2007, higher than reported in other populations. Incidence rates decreased with age, were higher for males than females, differed by race, and tended to be higher in rural than urban areas. With this study, we were able to demonstrate the utility of MAX data for estimating asthma incidence and to create a dataset of incident cases for further analysis.

In subsequent analyses, we investigated a possible association between ambient air pollutants and incident asthma among Medicaid-enrolled children in Harris County, Texas, between 2005 and 2007. This population is at high risk for asthma and lives in an area with historically poor air quality. We used a time-stratified case-crossover design and conditional logistic regression to calculate odds ratios, adjusted for weather variables and aeroallergens, to assess the effect of increases in ozone, NO2, and PM2.5 concentrations on the risk of developing asthma. Our results show that a 10 ppb increase in ozone was significantly associated with asthma during the warm season (May-October), with the strongest effect seen when a 6-day cumulative lag period was used to compute the exposure metric (OR=1.05, 95% CI, 1.02–1.08). Similar results were seen for NO2 and PM2.5 (OR=1.07, 95% CI, 1.03–1.11 and OR=1.12, 95% CI, 1.03–1.22, respectively). PM2.5 also had significant effects in the cold season (November-April; 5-day cumulative lag: OR=1.11, 95% CI, 1.00–1.22). Compared with children in the lowest quartile of O3 exposure, the risk for children in the highest quartile was 20% higher. This study indicates that these pollutants are associated with newly diagnosed childhood asthma in this low-income urban population, particularly during the summer months.
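As a sketch of the time-stratified case-crossover analysis described here, the snippet below fits a conditional logistic regression to simulated index and referent days within matched strata and rescales the coefficient to an odds ratio per 10 ppb. The simulated data, the stratum construction, and the reliance on statsmodels' ConditionalLogit are assumptions made for illustration; this is not the study's dataset, exposure model, or code.

```python
import numpy as np
import pandas as pd
from statsmodels.discrete.conditional_models import ConditionalLogit

rng = np.random.default_rng(0)
rows = []
for stratum in range(200):                       # one stratum per simulated incident case
    ozone = rng.normal(35, 10, size=4)           # ppb on the index day + 3 referent days
    true_beta = 0.005                            # assumed per-ppb log-odds, used only to simulate
    p = np.exp(true_beta * ozone); p /= p.sum()
    case_day = rng.choice(4, p=p)                # which of the 4 days is the "case" day
    for day in range(4):
        rows.append({"stratum": stratum, "case": int(day == case_day), "ozone": ozone[day]})

df = pd.DataFrame(rows)
res = ConditionalLogit(df["case"], df[["ozone"]], groups=df["stratum"]).fit()
beta = float(np.asarray(res.params)[0])
print(f"OR per 10 ppb ozone ~ {np.exp(10 * beta):.2f}")
```

In the actual design the referent days would be real calendar days sharing month and weekday with the index day, and the model would also include the weather and aeroallergen covariates mentioned above.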

Abstract:

The use of smokeless tobacco products is undergoing an alarming resurgence in the United States. Several national surveys have reported a higher prevalence of use among those employed in blue-collar occupations. National objectives now target this group for health promotion programs that reduce the health risks associated with tobacco use.

Drawn from a larger data set measuring health behaviors, this cross-sectional study tested the applicability of two related theories, the Theory of Reasoned Action (TRA) and the Theory of Planned Behavior (TPB), to smokeless tobacco (SLT) cessation in a blue-collar population of gas pipeline workers. To understand the determinants of SLT cessation, measures were obtained of demographic and normative characteristics of the population and of specific theoretical constructs. Attitude toward the act of quitting (AACT) and subjective norm (SN) are constructs common to both models; perceived behavioral control (PBC) is unique to the TPB; and the number of past quit attempts is not contained in either model. In addition, SLT use was self-reported at a two-month follow-up.

The study population comprised all male SLT users who were field employees of a large gas pipeline company with gas compressor stations extending from Texas to the Canadian border. At baseline, 199 employees responded to the SLT portion of the survey, 118 completed some portion of the two-month follow-up, and 101 could be matched across time.

As hypothesized, significant correlations were found between the constructs antecedent to AACT and SN, although crossover effects occurred. Significant differences were found between SLT cessation intenders and non-intenders with regard to their personal and normative beliefs about quitting, as well as their outcome expectancies and motivation to comply with others' beliefs. These differences occurred in the expected direction, with mean intender scores consistently higher than those of non-intenders.

Contrary to hypothesis, AACT predicted intention to quit but SN did not. However, confirming the TPB, PBC, operationalized as self-efficacy, contributed independently to the prediction of intention. Statistically significant relationships were not found between intention, perceived behavioral control, or their interaction and use behavior at the two-month follow-up. Introducing the number of quit attempts into the logistic regression model yielded nonsignificant findings for both independent and interactive effects.

The findings from this study are discussed in relation to their implications for program development and practice, especially within the worksite. Recommendations for future research to confirm and extend these findings are also discussed.
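A minimal sketch of the model structure implied here, on simulated worker-level data (the variable names, effect sizes, and sample size are illustrative assumptions, not the study's measures): a logistic regression of quit intention on AACT, SN, and PBC.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 199
df = pd.DataFrame({
    "aact": rng.normal(0, 1, n),     # attitude toward the act of quitting
    "sn": rng.normal(0, 1, n),       # subjective norm
    "pbc": rng.normal(0, 1, n),      # perceived behavioral control (self-efficacy)
})
# Simulated outcome in which AACT and PBC, but not SN, drive intention.
logit_p = -0.5 + 1.0 * df["aact"] + 0.1 * df["sn"] + 0.6 * df["pbc"]
df["intention"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

model = smf.logit("intention ~ aact + sn + pbc", data=df).fit(disp=False)
print(model.params)
print(model.pvalues)
```

A follow-up model of use behavior at two months would add intention, PBC, their interaction, and the number of past quit attempts as predictors, mirroring the analysis described above.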

Abstract:

Interstitial waters were squeezed from strata recovered at Sites 637-641 of ODP Leg 103 on the Galicia margin, along the northwestern Iberian continental margin in the northeast Atlantic. Chemical profiles at Site 638 show the most complexity, which appears to be related to an unconformity between Cretaceous and Neogene sediments and to rapid deposition of Cretaceous syn-rift sediments upon pre-rift strata. Analyses of waters from all Leg 103 sites show generally antithetical trends for calcium and magnesium: calcium increases with depth as magnesium decreases. No calcium-magnesium 'crossover' profiles are observed in these data. Data from Site 637 show an unusual pattern: calcium increases with increasing depth, but magnesium remains relatively constant. Sulfate is either stable or shows an overall decrease with depth, and boron profiles show some structure. At all but one site (Site 638), strontium profiles do not show marked depth structure. The structure of the alkalinity and silica profiles is highly site dependent. Bromide profiles are, in general, constant, and observed bromide concentrations are in nearly every case close to average seawater values. Relatively low concentrations of iron and manganese are common within the upper 10 m of the sediment sequence and are typically near detection limits at greater depth.

Abstract:

The Paleocene/Eocene Thermal Maximum (PETM, ca. 55 Ma) is an abrupt, profound perturbation of climate and the carbon cycle associated with a massive injection of isotopically light carbon into the ocean-atmosphere system. As such, it provides an analogue for understanding the interplay between phytoplankton and climate under modern anthropogenic global-warming conditions. However, the accompanying enhanced dissolution introduces uncertainty into reconstructions of the affected ecology and productivity. We present a high-resolution record of bulk isotopes and nannofossil absolute abundance from Ocean Drilling Program (ODP) Site 1135 on the Kerguelen Plateau, Southern Indian Ocean, to quantitatively constrain for the first time the influence of dissolution on paleoecological reconstruction. Our bulk-carbonate isotope record closely resembles that of the classic PETM site at ODP Site 690 on the opposite side of the Antarctic continent, and its correlation with the records from ODP Sites 690, 1262, and 1263 allows recognition of 14 precessional cycles upsection from the onset of the carbon isotopic excursion (CIE). This, together with the full range of the common Discoaster araneus and an abundance crossover between Fasciculithus and Zygrhablithus bijugatus, indicates the presence of the PETM at Site 1135, a poorly known record with calcareous fossils throughout the interval. The strong correlation between the absolute abundances of Chiasmolithus and the coccolith assemblages reveals a dominant paleoecological signal in the poorly preserved fossil assemblages, while the influence of dissolution is strong only during the CIE. This suggests that r-selected taxa can preserve faithful ecological information even in the severely altered assemblages studied here, providing a strong case for the application of nannofossils to paleoecological studies in better-preserved PETM sections. The inferred nannoplankton productivity drops abruptly at the CIE onset but increases rapidly after the CIE peak; both changes may be driven by nutrient availability related to ocean stratification and vertical mixing under changed sea-surface temperatures.

Abstract:

Natural Computing has emerged as an alternative to classical computation for problems that cannot be solved efficiently in polynomial time with respect to the input size. The discipline either uses nature itself as the computing substrate or simulates natural behaviour to obtain better solutions than those found by classical computation. Within Natural Computing, Membrane Computing provides a cellular-level representation, and Transition P systems are the first abstraction of the membranes found in living cells. These systems, which could be implemented in biological or electronic media, are the object of study of this thesis.

First, existing implementations are reviewed in order to focus on distributed implementations, which are the ones that can exploit the intrinsic parallelism and non-determinism of P systems. A detailed survey of the current state of the stages that make up the evolution of the system leads to the conclusion that distributions seeking a balance between the two stages (rule application and communication) give the best results. To define such distributions, the system and every element that influences its transition must be fully specified. Building on, and together with, the work of other researchers, variations are made to the proxies and the distribution architectures so that the dynamic behaviour of Transition P systems is completely defined.

Starting from the static knowledge of the P system (its initial configuration), membranes can be distributed over the processors of a cluster so that the evolution of the P system is computed in as little time as possible. These distributions must take into account the architecture, that is, the way the cluster's processors are connected. Because four architectures exist, the distribution process depends on the architecture used, and therefore, despite significant similarities, the distribution algorithms must be implemented four times. Although the proponents of these architectures studied the optimal time of each one, the absence of actual distributions for them led this thesis to test all four, verifying that the simulations performed in this work agree with the theoretical studies.

No deterministic algorithm exists that produces a distribution satisfying the requirements of the architecture for an arbitrary P system. Given the complexity of the problem, the use of Natural Computing metaheuristics is therefore proposed. Genetic Algorithms are applied first, on the premise that, as the individuals improve through evolution, the distributions improve as well, yielding times close to the theoretical optimum. For the architectures that preserve the tree topology of the P system, new representations and new crossover and mutation operators had to be designed.

A more detailed study of the membranes and of the communications among processors shows that the total times used for the distribution can be improved and individualized for each membrane. The same algorithms were therefore tested again, obtaining further distributions that improve the evolution times. Likewise, Particle Swarm Optimization and Grammatical Evolution with grammar rewriting (a variant of Grammatical Evolution introduced in this thesis) are applied to the same task, producing other kinds of distributions and enabling a comparison of the architectures.

Finally, the use of estimators for the application and communication times, together with the non-deterministic changes in the membrane tree topology that may occur as the P system evolves, makes it necessary to monitor the system and, when required, to redistribute membranes among processors so that evolution times remain reasonable. The thesis explains how, when, and where these modifications and redistributions should be carried out, and how the recalculation can be performed.
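As a toy illustration of the metaheuristic distribution problem discussed above (not the thesis implementation), the sketch below evolves assignments of membranes to cluster processors with a genetic algorithm using uniform crossover and random-reassignment mutation, scoring each assignment by the larger of the rule-application load and a communication term proportional to the tree edges cut. The membrane tree, per-membrane costs, communication cost, and objective are all assumptions.

```python
import random

random.seed(1)
M, P = 20, 4                                                       # membranes, processors
apply_time = [random.uniform(1.0, 5.0) for _ in range(M)]          # assumed application cost per membrane
parent = [None] + [random.randrange(i) for i in range(1, M)]       # random membrane tree (parent indices)
COMM_COST = 2.0                                                    # assumed cost per tree edge cut

def fitness(assign):
    load = [0.0] * P
    for m, proc in enumerate(assign):
        load[proc] += apply_time[m]
    cut_edges = sum(1 for m in range(1, M) if assign[m] != assign[parent[m]])
    return max(max(load), COMM_COST * cut_edges)   # balance application vs. communication stages

def crossover(a, b):                               # uniform crossover on the assignment vector
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(a, rate=0.05):                          # move a membrane to a random processor
    return [random.randrange(P) if random.random() < rate else g for g in a]

pop = [[random.randrange(P) for _ in range(M)] for _ in range(60)]
for _ in range(200):
    pop.sort(key=fitness)
    elite = pop[:20]
    pop = elite + [mutate(crossover(*random.sample(elite, 2))) for _ in range(40)]
print("best estimated evolution-step cost:", round(fitness(min(pop, key=fitness)), 2))
```

Architectures that must preserve the membrane tree would additionally constrain which assignments are legal, which is where the new representations and crossover/mutation operators mentioned above come in.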

Abstract:

This article proposes a method for calibrating the discontinuity sets of rock masses. We present a novel approach for the calibration of stochastic discontinuity network parameters based on genetic algorithms (GAs). To validate the approach, examples of application of the method to cases with known parameters of the original Poisson discontinuity network are presented. Parameters of the model are encoded as chromosomes using a binary representation, and such chromosomes evolve as successive generations of a randomly generated initial population, subjected to the GA operations of selection, crossover, and mutation. The back-calculated parameters are employed to assess the inference capabilities of the model using different objective functions and different probabilities of crossover and mutation. Results show that the predictive capabilities of GAs depend significantly on the type of objective function considered; they also show that the calibration capabilities of the genetic algorithm can be acceptable for practical engineering applications, since in most cases they can be expected to provide parameter estimates with relatively small errors for those parameters of the network (such as intensity and mean size of discontinuities) that have the strongest influence on many engineering applications.
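A minimal sketch of the binary-chromosome encoding and GA operators described here, under assumed parameter bounds, synthetic "known" target values, and a simple least-squares objective standing in for the paper's objective functions:

```python
import random

BITS = 12
RANGES = {"intensity": (0.5, 5.0), "mean_size": (1.0, 10.0)}   # assumed parameter bounds
TARGET = {"intensity": 2.3, "mean_size": 4.7}                  # synthetic "known" network parameters

def decode(chrom):
    """Map each BITS-long slice of the bit string to its physical range."""
    values = {}
    for i, key in enumerate(RANGES):
        word = chrom[i * BITS:(i + 1) * BITS]
        lo, hi = RANGES[key]
        values[key] = lo + int("".join(map(str, word)), 2) / (2**BITS - 1) * (hi - lo)
    return values

def objective(chrom):                      # stand-in misfit between model and "field data"
    p = decode(chrom)
    return sum((p[k] - TARGET[k]) ** 2 for k in TARGET)

def crossover(a, b):                       # single-point crossover
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(chrom, rate=0.02):              # bit-flip mutation
    return [1 - g if random.random() < rate else g for g in chrom]

random.seed(3)
pop = [[random.randint(0, 1) for _ in range(2 * BITS)] for _ in range(50)]
for _ in range(150):
    pop.sort(key=objective)                # selection: keep the fittest chromosomes
    pop = pop[:15] + [mutate(crossover(*random.sample(pop[:15], 2))) for _ in range(35)]
print(decode(min(pop, key=objective)))
```

In the paper the objective compares statistics of the simulated Poisson discontinuity network with observations, and the crossover and mutation probabilities themselves are varied to assess their effect on calibration quality.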

Abstract:

Objective: This study assessed the efficacy of a closed-loop (CL) system consisting of a predictive rule-based algorithm (pRBA) in achieving nocturnal and postprandial normoglycemia in patients with type 1 diabetes mellitus (T1DM). The algorithm is personalized to each patient's data using two different strategies to control the nocturnal and postprandial periods. Research Design and Methods: We performed a randomized crossover clinical study in which 10 T1DM patients treated with continuous subcutaneous insulin infusion (CSII) spent two nonconsecutive nights in the research facility: one with their usual CSII pattern (open loop [OL]) and one controlled by the pRBA (CL). The CL period lasted from 10 p.m. to 10 a.m., covering overnight control and control of breakfast. Venous samples for blood glucose (BG) measurement were collected every 20 min. Results: Time spent in normoglycemia (BG 3.9–8.0 mmol/L) during the nocturnal period (12 a.m.–8 a.m.), expressed as median (interquartile range), increased from 66.6% (8.3–75%) with OL to 95.8% (73–100%) with the CL algorithm (P<0.05). Median time in hypoglycemia (BG <3.9 mmol/L) was reduced from 4.2% (0–21%) on the OL night to 0.0% (0.0–0.0%) on the CL night (P<0.05). Nine hypoglycemic events (<3.9 mmol/L) were recorded with OL compared with one with CL. The postprandial glycemic excursion was not lower when the CL system was used in comparison with the conventional preprandial bolus: time in target (3.9–10.0 mmol/L) was 58.3% (29.1–87.5%) versus 50.0% (50–100%). Conclusions: A highly precise personalized pRBA achieves nocturnal normoglycemia, without significant hypoglycemia, in T1DM patients. There appears to be no clear benefit of CL over the prandial bolus on postprandial glycemia.
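The outcome metrics reported here are fractions of time spent within a glycemic range, computed from BG samples taken every 20 min; with evenly spaced samples this reduces to the fraction of samples in range. The sketch below shows one way to compute it; the nocturnal profile is synthetic and the helper name is an illustrative assumption.

```python
import numpy as np

def percent_time_in_range(bg_mmol_l, lower=3.9, upper=8.0):
    """Percentage of evenly spaced BG samples falling within [lower, upper]."""
    bg = np.asarray(bg_mmol_l, dtype=float)
    return float(np.mean((bg >= lower) & (bg <= upper)) * 100.0)

# Hypothetical nocturnal profile sampled every 20 min from 12 a.m. to 8 a.m. (25 samples).
night = [7.2, 6.8, 6.1, 5.9, 5.5, 5.2, 5.0, 4.8, 4.7, 4.6, 4.4, 4.3, 4.2,
         4.1, 4.0, 4.1, 4.3, 4.6, 5.0, 5.3, 5.6, 5.8, 6.0, 6.3, 6.5]
print(f"time in 3.9-8.0 mmol/L: {percent_time_in_range(night):.1f}%")
print(f"time below 3.9 mmol/L: {np.mean(np.array(night) < 3.9) * 100:.1f}%")
```

The study's medians and interquartile ranges would then come from computing these per-night percentages for every patient and summarizing across the group.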

Abstract:

Two sheep and two goats, fitted with a ruminal cannula, received two diets composed of 30% concentrate and 70% forage, the latter being either alfalfa hay (AL) or grass hay (GR), in a two-period crossover design. The solid and liquid phases of the rumen were sampled from each animal immediately before feeding and 4 h post-feeding. Pellets containing solid-associated bacteria (SAB) and liquid-associated bacteria (LAB) were isolated from the corresponding ruminal phase and composited by time to obtain two pellets per animal (one SAB and one LAB) before DNA extraction. Denaturing gradient gel electrophoresis (DGGE) analysis of 16S ribosomal DNA was used to analyze bacterial diversity. A total of 78 and 77 bands were detected in the DGGE gels from sheep and goat samples, respectively. Eighteen bands were found only in the pellets from AL-fed sheep and 7 exclusively in samples from sheep fed the GR diet. In goats, 21 bands were found only in animals fed the AL diet and 17 exclusively in GR-fed ones. In all animals, feeding the AL diet tended (P < 0.10) to promote greater NB and SI values in LAB and SAB pellets compared with the GR diet. The dendrogram generated by the cluster analysis showed that, in both animal species, all samples fell into two major clusters: the four SAB pellets within each animal species clustered together, and the four LAB pellets grouped in a different cluster. Moreover, the SAB and LAB clusters each contained two clear subclusters according to forage type. The results show that, in all animals, bacterial diversity was more markedly affected by the ruminal phase (solid vs. liquid) than by the type of forage in the diet.
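The diversity measures NB and SI are not expanded in the abstract; assuming they denote the number of DGGE bands and the Shannon index computed from relative band intensities (a common convention for DGGE profiles), and using a made-up band-intensity matrix, the band metrics and the average-linkage clustering behind the dendrogram could be sketched as follows.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage

def band_metrics(intensities):
    """Return (NB, SI): number of detected bands and Shannon index of band intensities."""
    x = np.asarray(intensities, dtype=float)
    x = x[x > 0]
    p = x / x.sum()
    return len(x), float(-(p * np.log(p)).sum())

# Hypothetical band-intensity matrix: rows = pellets (SAB/LAB x AL/GR), columns = band positions.
profiles = np.array([
    [5, 0, 3, 2, 0, 4],   # SAB, AL diet
    [4, 0, 2, 3, 0, 5],   # SAB, GR diet
    [0, 6, 0, 1, 7, 0],   # LAB, AL diet
    [0, 5, 0, 2, 6, 0],   # LAB, GR diet
], dtype=float)

for label, row in zip(["SAB-AL", "SAB-GR", "LAB-AL", "LAB-GR"], profiles):
    print(label, band_metrics(row))

# Average-linkage clustering on band presence/absence, as commonly used for DGGE dendrograms.
Z = linkage(profiles > 0, method="average", metric="jaccard")
print(Z)
```

With this toy matrix the two SAB rows pair together and the two LAB rows pair together, mirroring the phase-dominated clustering reported above.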

Abstract:

Propulsion and power generation by bare electrodynamic tethers are revisited in a unified way, and the associated issues and constraints are addressed. In comparing electrodynamic tethers, which use no propellant, with propellant-consuming systems, mission duration is a discriminator that defines crossover points for systems with equal initial masses. Bare tethers operating in low Earth orbit can be more competitive than optimum ion thrusters in missions exceeding two to three days for orbital deboost and three weeks for boosting operations. If the tether produces useful onboard power during deboost, the crossover point extends to about 10 days. Power generation by means of a bare electrodynamic tether, combined with chemical propulsion to maintain the orbital altitude of the system, is more efficient than using the same chemicals (liquid hydrogen and liquid oxygen) in a fuel cell to produce power for missions longer than one week. Issues associated with tether temperature, bowing, deployment, and arcing are also discussed. Heating/cooling rates reach about 4 K/s for a 0.05-mm-thick tape and a fraction of a kelvin per second for the ProSEDS (0.6-mm-radius) wire; under dominant ohmic effects, temperatures are over 200 K (night) and 380 K (day) for the tape and 320 and 415 K for the wire. Tether applications other than propulsion and power are briefly discussed.
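The crossover point invoked here can be written down in a minimal form: for equal initial masses, a propellant-consuming thruster spends propellant at a rate fixed by its thrust and specific impulse, while the tether carries a fixed hardware-mass penalty instead. Ignoring the power-subsystem and other mass terms that the paper's detailed comparison accounts for, the break-even mission duration would be

```latex
\dot m_p = \frac{F}{I_{sp}\, g_0},
\qquad
t^{*} = \frac{\Delta m_{\mathrm{hw}}}{\dot m_p}
      = \frac{\Delta m_{\mathrm{hw}}\, I_{sp}\, g_0}{F},
```

where F is the required thrust, I_sp the thruster specific impulse, g_0 standard gravity, and Δm_hw the tether hardware mass in excess of the thruster dry mass. The symbols and the simplified mass budget are assumptions for illustration, not the paper's model; missions longer than t* favor the tether.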

Abstract:

In this work, we show how number-theoretical problems can be fruitfully approached with the tools of statistical physics. We focus on g-Sidon sets, which describe sequences of integers whose pairwise sums are different, and propose a random decision problem that addresses the probability that a random set of k integers is g-Sidon. First, we provide numerical evidence showing that there is a crossover between satisfiable and unsatisfiable phases, which becomes an abrupt phase transition in a properly defined thermodynamic limit. Initially assuming independence, we then develop a mean-field theory for the g-Sidon decision problem. We further improve the mean-field theory, which is only qualitatively correct, by incorporating deviations from independence, yielding results in good quantitative agreement with the numerics both for finite systems and in the thermodynamic limit. Connections between the generalized birthday problem in probability theory, the number theory of Sidon sets, and the properties of q-Potts models in condensed matter physics are briefly discussed.
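A brute-force Monte Carlo estimate of the probability addressed by this decision problem can be sketched as follows, assuming the B2[g] convention that each pairwise sum (with repetition, i ≤ j) occurs at most g times and that the k integers are drawn uniformly without replacement from {1, ..., N}; these conventions and the parameter values are assumptions for illustration, not necessarily those used in the paper.

```python
import random
from collections import Counter
from itertools import combinations_with_replacement

def is_g_sidon(s, g):
    """True if every pairwise sum s_i + s_j (i <= j) occurs at most g times."""
    sums = Counter(a + b for a, b in combinations_with_replacement(s, 2))
    return max(sums.values()) <= g

def prob_g_sidon(k, N, g, trials=20000, seed=0):
    """Monte Carlo estimate of P(a random k-subset of {1,...,N} is g-Sidon)."""
    rng = random.Random(seed)
    hits = sum(is_g_sidon(rng.sample(range(1, N + 1), k), g) for _ in range(trials))
    return hits / trials

print(prob_g_sidon(k=8, N=500, g=1))   # ordinary Sidon case
print(prob_g_sidon(k=8, N=500, g=2))
```

Sweeping k at fixed N (or a suitable combination of the two) traces the satisfiable-unsatisfiable crossover discussed above.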

Abstract:

An analysis of the structure of flame balls encountered under microgravity conditions, which are stable due to radiant energy losses from H₂O, is carried out for fuel-lean hydrogen-air mixtures. It is seen that, because of radiation losses, in stable flame balls the maximum flame temperature remains close to the crossover temperature, at which the rate of the branching step H + O₂ → OH + O equals that of the recombination step H + O₂ + M → HO₂ + M. Under those conditions, all chemical intermediates have very small concentrations and follow the steady-state approximation, while the main species react according to the overall step 2H₂ + O₂ → 2H₂O, so that a one-step chemical-kinetic description, recently derived by asymptotic analysis for near-limit fuel-lean deflagrations, can be used with excellent accuracy to describe the whole branch of stable flame balls. Besides molecular diffusion in a binary-diffusion approximation, Soret diffusion is included, since it exerts a non-negligible effect that extends the flammability range. When the large activation energy of the overall reaction is taken into account, the leading-order analysis in the reaction-sheet approximation is seen to determine the flame-ball radius as that required for radiant heat losses to remove enough of the heat released by chemical reaction at the flame to keep the flame temperature at a value close to crossover. The results are relevant to burning velocities at lean equivalence ratios and may influence fire-safety issues associated with hydrogen utilization.
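Using the definition given here, branching rate equal to recombination rate, the crossover temperature can be estimated numerically. The Arrhenius fits in the sketch below are representative literature-style values adopted as assumptions for illustration, not the rates used in the paper; units are cm^3 mol^-1 s^-1 for the branching step and cm^6 mol^-2 s^-1 for the recombination step.

```python
import numpy as np
from scipy.optimize import brentq

R_ATM = 82.06  # cm^3 atm mol^-1 K^-1

def k_branch(T):            # H + O2 -> OH + O (assumed Arrhenius fit)
    return 3.52e16 * T**-0.7 * np.exp(-8590.0 / T)

def k_rec(T):               # H + O2 + M -> HO2 + M, low-pressure limit (assumed fit)
    return 5.75e19 * T**-1.4

def crossover_temperature(p_atm=1.0):
    conc_M = lambda T: p_atm / (R_ATM * T)            # total concentration [M], mol cm^-3
    balance = lambda T: k_branch(T) - k_rec(T) * conc_M(T)
    return brentq(balance, 700.0, 1500.0)

print(f"T_crossover ~ {crossover_temperature():.0f} K at 1 atm")
```

With these assumed rates the balance is reached near 1000 K at atmospheric pressure, the order of magnitude usually quoted for the hydrogen crossover temperature.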

Abstract:

It has been reasoned that the structures of strongly cellular flames in very lean mixtures approach an array of flame balls, each burning as if it were isolated, thereby indicating a connection between the critical conditions required for the existence of steady flame balls and those necessary for the occurrence of self-sustained premixed combustion. This is the starting assumption of the present study, in which the structures of near-limit steady spherically symmetrical flame balls are investigated with the objective of providing analytic expressions for critical combustion conditions in ultra-lean hydrogen-oxygen mixtures diluted with N2 and water vapor. If attention were restricted to planar premixed flames, the lean-limit mole fraction of H2 would be found to be roughly ten percent, more than twice the observed flammability limits, thereby emphasizing the relevance of the flame-ball phenomena. Numerical integrations using detailed models for chemistry and radiation show that a one-step chemical-kinetic reduced mechanism based on steady-state assumptions for all chemical intermediates, together with a simple, optically thin approximation for water-vapor radiation, can be used to compute near-limit fuel-lean flame balls with excellent accuracy. The previously developed one-step reaction rate includes a crossover temperature that determines, in the first approximation, a chemical-kinetic lean limit below which combustion cannot occur, with critical conditions achieved when the diffusion-controlled, radiation-free peak temperature, computed with account taken of hydrogen Soret diffusion, equals the crossover temperature. First-order corrections are found by activation-energy asymptotics in a solution that involves a near-field radiation-free zone surrounding a spherical flame sheet, together with a far-field radiation-conduction balance for the temperature profile. Different scalings are found depending on whether or not the surrounding atmosphere contains water vapor, leading to different analytic expressions for the critical conditions for flame-ball existence, which give results in very good agreement with those obtained by detailed numerical computations.

Abstract:

Combining transcranial magnetic stimulation (TMS) and electroencephalography (EEG) constitutes a powerful tool to directly assess human cortical excitability and connectivity. TMS of the primary motor cortex elicits a sequence of TMS-evoked EEG potentials (TEPs). It is thought that inhibitory neurotransmission through GABA-A receptors (GABAARs) modulates early TEPs (<50 ms after TMS), whereas GABA-B receptors (GABABRs) play a role for later TEPs (at ∼100 ms after TMS). However, the physiological underpinnings of TEPs have not yet been clearly elucidated. Here, we studied the role of GABAA/B-ergic neurotransmission for TEPs in healthy subjects using a pharmaco-TMS-EEG approach. In Experiment 1, we tested the effects of a single oral dose of alprazolam (a classical benzodiazepine acting as a positive allosteric modulator at α1-, α2-, α3-, and α5-subunit-containing GABAARs) and zolpidem (a positive modulator mainly at the α1 GABAAR) in a double-blind, placebo-controlled, crossover study. In Experiment 2, we tested the influence of baclofen (a GABABR agonist) and diazepam (a classical benzodiazepine) versus placebo on TEPs. Alprazolam and diazepam increased the amplitude of the negative potential at 45 ms after stimulation (N45) and decreased the negative component at 100 ms (N100), whereas zolpidem increased the N45 only. In contrast, baclofen specifically increased the N100 amplitude. These results provide strong evidence that the N45 represents activity of α1-subunit-containing GABAARs, whereas the N100 represents activity of GABABRs. The findings open a novel window of opportunity to study alterations of GABAA-/GABAB-related inhibition in disorders such as epilepsy or schizophrenia.
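Operationally, reporting that a drug "increased the N45" means measuring the amplitude of a negative deflection of the averaged TEP within a fixed latency window. The sketch below illustrates this on a synthetic waveform; the window boundaries and the waveform itself are assumptions for illustration, not those of the study.

```python
import numpy as np

def negative_peak_amplitude(times_ms, tep_uv, window):
    """Most negative value of the averaged TEP within a latency window (ms)."""
    mask = (times_ms >= window[0]) & (times_ms <= window[1])
    return float(tep_uv[mask].min())

# Synthetic averaged TEP sampled at 1 kHz, 0-300 ms after the TMS pulse,
# with N45- and N100-like negative deflections plus noise.
t = np.arange(0.0, 300.0)
tep = (-4 * np.exp(-0.5 * ((t - 45) / 8) ** 2)
       - 6 * np.exp(-0.5 * ((t - 100) / 20) ** 2)
       + np.random.default_rng(0).normal(0, 0.3, t.size))

n45 = negative_peak_amplitude(t, tep, (35, 55))
n100 = negative_peak_amplitude(t, tep, (85, 140))
print(f"N45 = {n45:.1f} uV, N100 = {n100:.1f} uV")
```

Drug effects would then be tested by comparing these amplitudes across the placebo and drug sessions of the crossover design.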

Abstract:

Hybrid magnetic arrays embedded in superconducting films are ideal systems in which to study the competition between physical length scales (such as the coherence length) and the structural length scales available in artificially produced structures. This interplay leads to oscillations in many magnetically dependent superconducting properties, such as the critical current, resistivity, and magnetization. These effects are generally analyzed using two distinct models, based on vortex pinning or on a wire network. In this work, we show that for magnetic dot arrays, as opposed to antidot (i.e., hole) arrays, vortex pinning is the main mechanism behind the field-induced oscillations in resistance R(H), critical current Ic(H), magnetization M(H), and ac susceptibility χac(H) over a broad temperature range. Owing to the divergence of the coherence length at Tc, a crossover to wire-network behaviour is found experimentally. While pinning occurs over a wide temperature range up to Tc, wire-network behaviour is present only in a very narrow temperature window close to Tc. In this temperature interval, contributions from both mechanisms are operational but can be distinguished experimentally.
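For background, the field-induced oscillations in such arrays occur at matching fields at which an integer number of flux quanta threads each unit cell of the dot array, H_n = n·Φ0/a² for a square array of period a. A quick arithmetic sketch (the period used is an assumed example, not a value reported in the abstract):

```python
PHI0 = 2.0678e-15        # Wb, magnetic flux quantum
a = 400e-9               # m, square-array period (assumed example)

H1 = PHI0 / a**2         # first matching field, tesla
print(f"first matching field ~ {H1 * 1e3:.1f} mT ({H1 * 1e4:.0f} Oe)")
```

The crossover discussed above then corresponds to the narrow temperature window near Tc in which the wire-network description becomes relevant alongside this vortex-pinning picture.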