923 results for "Expense caloric"


Relevance: 10.00%
Abstract:

This research examines the graduation rate experienced by students receiving public education services in the state of Texas. Special attention is paid to the subgroup of Texas students who meet Texas Education Agency criteria for handicapped status. The study is guided by two research questions: What are the high school completion rates experienced by handicapped and nonhandicapped students attending Texas public schools? and What are the predictors of graduation for handicapped and nonhandicapped students? In addition, the following hypotheses are explored. Hypothesis 1: Handicapped students attending a Texas public school will experience a lower rate of high school completion than their nonhandicapped counterparts. Hypothesis 2: Handicapped and nonhandicapped students attending a Texas public school with a budget above the median for Texas public schools will experience a higher rate of high school completion than similar students in schools with a budget below the median. Hypothesis 3: Handicapped and nonhandicapped students attending school in large Texas urban areas will experience a lower rate of high school completion than similar students in Texas public schools in rural areas.
Hypothesis 4: Handicapped and nonhandicapped students attending a Texas public school in a county that rates above the state median for food stamp and AFDC recipients will experience a lower rate of high school completion than students living in counties below the median. The study employs extant data from the records of the Texas Education Agency for the 1988-1989 and 1989-1990 school years, from the Texas Department of Health for 1989 and 1990, and from the 1980 Census. The study reveals that nonhandicapped students are graduating at a two-year average rate of .906, while handicapped students following an Individualized Educational Program (IEP) achieve a two-year average rate of .532, and handicapped students following the regular academic program present a two-year average graduation rate of only .371. The presence of other handicapped students and the school district's average expense per student are found to contribute significantly to the completion rates of handicapped students. Size groupings are used to elucidate the various impacts of these variables on different school districts and student groups. Conclusions and implications are offered regarding the need to reach national consensus on the definition and computation of high school completion for both handicapped and nonhandicapped students, and the need for improved statewide tracking of handicapped completion rates.

Relevance: 10.00%
Abstract:

Linkage disequilibrium methods can be used to find genes influencing quantitative trait variation in humans. Linkage disequilibrium methods can require smaller sample sizes than linkage equilibrium methods, such as the variance component approach, to find loci with a specific effect size. The increase in power comes at the expense of requiring more markers to be typed to scan the entire genome. This thesis compares different linkage disequilibrium methods to determine which factors influence the power to detect disequilibrium. The costs of disequilibrium and equilibrium tests were compared to determine whether the savings in phenotyping costs when using disequilibrium methods outweigh the additional genotyping costs. Nine linkage disequilibrium tests were examined by simulation. Five tests involve selecting isolated unrelated individuals, while four involve the selection of parent-child trios (TDT). All nine tests were found to identify disequilibrium with the correct significance level in Hardy-Weinberg populations. Increasing linked genetic variance and trait allele frequency increased the power to detect disequilibrium, while increasing the number of generations and the distance between marker and trait loci decreased it. Discordant sampling was used for several of the tests; the more stringent the sampling, the greater the power to detect disequilibrium in a sample of given size. The power to detect disequilibrium was not affected by the presence of polygenic effects. When the trait locus had more than two trait alleles, the power of the tests maximized to less than one.
For the simulation methods used here, when there were more than two trait alleles there was a probability equal to 1 minus the heterozygosity of the marker locus that both trait alleles were in disequilibrium with the same marker allele, resulting in the marker being uninformative for disequilibrium. The five tests using isolated unrelated individuals were found to have excess error rates when there was disequilibrium due to population admixture. Increased error rates also resulted from increased unlinked major gene effects, discordant trait allele frequency, and increased disequilibrium. Polygenic effects did not affect the error rates. The TDT (Transmission Disequilibrium Test)-based tests were not liable to any increase in error rates. For all sample ascertainment costs, for recent mutations (<100 generations) linkage disequilibrium tests were less expensive to carry out than the variance component test. Candidate gene scans saved even more money. The use of recently admixed populations also decreased the cost of performing a linkage disequilibrium test.
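
The uninformative-marker probability stated above (1 minus the marker heterozygosity) can be sketched numerically. The allele frequencies below are hypothetical, not taken from the thesis; the sketch only illustrates the relation H = 1 − Σp², so that the chance of both trait alleles associating with the same marker allele is Σp² = 1 − H.

```python
# Sketch: if each of two trait alleles independently ends up associated with a
# marker allele in proportion to that allele's frequency, the probability that
# both hit the SAME marker allele is sum(p_i^2) = 1 - H, where H is the
# expected heterozygosity of the marker locus.
# The marker allele frequencies below are hypothetical.

def heterozygosity(freqs):
    """Expected heterozygosity H = 1 - sum of squared allele frequencies."""
    return 1.0 - sum(p * p for p in freqs)

marker_freqs = [0.5, 0.3, 0.2]          # hypothetical 3-allele marker
h = heterozygosity(marker_freqs)
p_uninformative = 1.0 - h                # both trait alleles on the same marker allele
print(h, p_uninformative)                # H = 0.62, P(uninformative) = 0.38
```

A marker with more, evenly frequent alleles has higher H, so (as the abstract notes) it is less likely to be rendered uninformative.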

Relevance: 10.00%
Abstract:

While many studies have been conducted in mountainous catchments to examine the impact of climate change on hydrology, the interactions between climate change and land use components have largely unknown impacts on hydrology in alpine regions. They need to be given special attention in order to devise possible strategies concerning general development in these regions. Thus, the main aim was to examine the impact of land use (i.e. bushland expansion) and climate changes (i.e. increase of temperature) on hydrology by model simulations. For this purpose, the physically based WaSiM-ETH model was applied to the catchment of the Ursern Valley in the central Alps (191 km2) over the period 1983−2005. Modelling results showed that the reduction of the mean monthly discharge during the summer period is due primarily to the retreat of snow discharge in time and secondarily to the reduction in the glacier surface area together with its retreat in time, rather than to the increase in evapotranspiration due to the expansion of the "green alder" at the expense of grassland. The significant decrease in summer discharge during July, August and September shows a change in the regime from glacio-nival to nivo-glacial. These changes are confirmed by the modelling results, which attest to a temporal shift in snowmelt and glacier discharge towards earlier in the year: March, April and May for snowmelt, and May and June for glacier discharge. It is expected that the yearly total discharge will be reduced by 0.6% in the near future due to the land use changes alone, whereas it will be reduced by about 5% if climate change is also taken into account. Copyright © 2013 John Wiley & Sons, Ltd.

Relevance: 10.00%
Abstract:

Though IP multicast is resource efficient in delivering data to a group of members simultaneously, it suffers from a scalability problem with the number of concurrently active multicast groups because it requires a router to keep forwarding state for every multicast tree passing through it. To solve this state scalability problem, we proposed a scheme called aggregated multicast. The key idea is that multiple groups are forced to share a single delivery tree. In our earlier work, we introduced the basic concept of aggregated multicast and presented some initial results to show that multicast state can be reduced. In this paper, we develop a more quantitative assessment of the cost/benefit trade-offs. We propose an algorithm to assign multicast groups to delivery trees with controllable cost and introduce metrics to measure multicast state and tree management overhead for multicast schemes. We then compare aggregated multicast with conventional multicast schemes, such as the source-specific tree scheme and the shared tree scheme. Our extensive simulations show that aggregated multicast can achieve significant routing state and tree management overhead reduction while containing the expense of extra resources (bandwidth waste and tunnelling overhead). We conclude that aggregated multicast is a very cost-effective and promising direction for scalable transit domain multicast provisioning.
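
The state-reduction idea can be illustrated with a toy sketch (this is not the paper's assignment algorithm, and all tree and group names are hypothetical): routers keep one forwarding entry per delivery tree rather than per group, so mapping many groups onto few shared trees shrinks per-router state.

```python
# Toy sketch of aggregated-multicast forwarding state: each router stores one
# entry per distinct delivery tree passing through it, regardless of how many
# groups that tree carries. Tree/group names are hypothetical.

def router_state(assignment, trees):
    """Count forwarding entries per router: one per distinct tree in use."""
    state = {}
    for tree_id in set(assignment.values()):
        for router in trees[tree_id]:
            state[router] = state.get(router, 0) + 1
    return state

# Each tree is described by the set of routers it passes through.
trees = {"T1": {"A", "B", "C"}, "T2": {"A", "B", "C", "D"}}

# Four groups share two trees; conventional multicast would instead build
# four trees (four entries at each shared router).
assignment = {"g1": "T1", "g2": "T1", "g3": "T2", "g4": "T2"}
print(router_state(assignment, trees))   # {"A": 2, "B": 2, "C": 2, "D": 1}
```

The trade-off the abstract mentions shows up when a group is forced onto a tree larger than it needs (here, a group of members reachable via T1 mapped onto T2 would waste bandwidth on router D).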

Relevance: 10.00%
Abstract:

The vestibular system contributes to the control of posture and eye movements and is also involved in various cognitive functions including spatial navigation and memory. These functions are subtended by projections to a vestibular cortex, whose exact location in the human brain is still a matter of debate (Lopez and Blanke, 2011). The vestibular cortex can be defined as the network of all cortical areas receiving inputs from the vestibular system, including areas where vestibular signals influence the processing of other sensory (e.g. somatosensory and visual) and motor signals. Previous neuroimaging studies used caloric vestibular stimulation (CVS), galvanic vestibular stimulation (GVS), and auditory stimulation (clicks and short-tone bursts) to activate the vestibular receptors and localize the vestibular cortex. However, these three methods differ regarding the receptors stimulated (otoliths, semicircular canals) and the concurrent activation of the tactile, thermal, nociceptive and auditory systems. To evaluate the convergence between these methods and provide a statistical analysis of the localization of the human vestibular cortex, we performed an activation likelihood estimation (ALE) meta-analysis of neuroimaging studies using CVS, GVS, and auditory stimuli. We analyzed a total of 352 activation foci reported in 16 studies carried out in a total of 192 healthy participants. The results reveal that the main regions activated by CVS, GVS, or auditory stimuli were located in the Sylvian fissure, insula, retroinsular cortex, fronto-parietal operculum, superior temporal gyrus, and cingulate cortex. Conjunction analysis indicated that regions showing convergence between two stimulation methods were located in the median (short gyrus III) and posterior (long gyrus IV) insula, parietal operculum and retroinsular cortex (Ri). The only area of convergence between all three methods of stimulation was located in Ri. 
The data indicate that Ri, parietal operculum and posterior insula are vestibular regions where afferents converge from otoliths and semicircular canals, and may thus be involved in the processing of signals informing about body rotations, translations and tilts. Results from the meta-analysis are in agreement with electrophysiological recordings in monkeys showing main vestibular projections in the transitional zone between Ri, the insular granular field (Ig), and SII.

Relevance: 10.00%
Abstract:

A dose-response strategy may not only allow investigation of the impact of foods and nutrients on human health but may also reveal differences in the response of individuals to food ingestion based on their metabolic health status. In a randomized crossover study, we challenged 19 normal-weight (BMI: 20-25 kg/m²) and 18 obese (BMI: >30 kg/m²) men with 500, 1000, and 1500 kcal of a high-fat (HF) meal (60.5% energy from fat). Blood was taken at baseline and up to 6 h postprandially and analyzed for a range of metabolic, inflammatory, and hormonal variables, including plasma glucose, lipids, and C-reactive protein and serum insulin, glucagon-like peptide-1, interleukin-6 (IL-6), and endotoxin. Insulin was the only variable that could differentiate the postprandial response of normal-weight and obese participants at each of the 3 caloric doses. A significant response of the inflammatory marker IL-6 was only observed in the obese group after ingestion of the HF meal containing 1500 kcal [net incremental AUC (net iAUC) = 22.9 ± 6.8 pg/mL × 6 h, P = 0.002]. Furthermore, the net iAUC for triglycerides significantly increased from the 1000 to the 1500 kcal meal in the obese group (5.0 ± 0.5 mmol/L × 6 h vs. 6.0 ± 0.5 mmol/L × 6 h, P = 0.015) but not in the normal-weight group (4.3 ± 0.5 mmol/L × 6 h vs. 4.8 ± 0.5 mmol/L × 6 h, P = 0.31). We propose that caloric dose-response studies may contribute to a better understanding of the metabolic impact of food on the human organism. This study was registered at clinicaltrials.gov as NCT01446068.
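
The net incremental AUC reported above is conventionally computed as the trapezoidal area of the baseline-subtracted curve over the sampling window (negative excursions subtract). A minimal sketch with hypothetical triglyceride values, not the study's data:

```python
# Sketch of a net incremental AUC (net iAUC): trapezoidal area of
# (value - baseline) over time. Sample times and concentrations below are
# hypothetical, chosen only to land in the same range as the abstract.

def net_iauc(times_h, values, baseline):
    """Trapezoidal area of (value - baseline); areas below baseline subtract."""
    total = 0.0
    for i in range(1, len(times_h)):
        dt = times_h[i] - times_h[i - 1]
        d0 = values[i - 1] - baseline
        d1 = values[i] - baseline
        total += 0.5 * (d0 + d1) * dt
    return total

times = [0, 1, 2, 4, 6]                  # hours after the meal
tg = [1.2, 1.9, 2.6, 2.2, 1.5]           # hypothetical triglycerides, mmol/L
print(net_iauc(times, tg, baseline=tg[0]))   # ≈ 5.1 mmol/L × 6 h
```

Using the fasting (t = 0) value as baseline, as here, is a common convention; studies differ on whether negative increments are included, which is why "net" iAUC is specified.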

Relevance: 10.00%
Abstract:

Stable oxygen isotope composition of atmospheric precipitation (δ18Op) was scrutinized from 39 stations distributed over Switzerland and its border zone. Monthly amount-weighted δ18Op values averaged over the 1995–2000 period showed the expected strong linear altitude dependence (−0.15 to −0.22‰ per 100 m) only during the summer season (May–September). Steeper gradients (~ −0.56 to −0.60‰ per 100 m) were observed for winter months over a low elevation belt, while hardly any altitudinal difference was seen for high elevation stations. This dichotomous pattern could be explained by the characteristically shallower vertical atmospheric mixing height during winter season and provides empirical evidence for recently simulated effects of stratified atmospheric flow on orographic precipitation isotopic ratios. This helps explain "anomalous" deflected altitudinal water isotope profiles reported from many other high relief regions. Grids and isotope distribution maps of the monthly δ18Op have been calculated over the study region for 1995–1996. The adopted interpolation method took into account both the variable mixing heights and the seasonal difference in the isotopic lapse rate and combined them with residual kriging. The presented data set allows a point estimation of δ18Op with monthly resolution. According to the test calculations executed on subsets, this biannual data set can be extended back to 1992 with maintained fidelity and, with a reduced station subset, even back to 1983 at the expense of faded reliability of the derived δ18Op estimates, mainly in the eastern part of Switzerland. Before 1983, reliable results can only be expected for the Swiss Plateau since important stations representing eastern and south-western Switzerland were not yet in operation.
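
The seasonal lapse rates quoted above lend themselves to a simple point estimate of δ18Op at altitude. The sketch below is a linear extrapolation under stated assumptions: the station value and altitudes are hypothetical, and the gradients are the per-100 m figures from the text (summer ~ −0.18‰, winter low-elevation ~ −0.58‰); it ignores the mixing-height effect the abstract describes for high winter stations.

```python
# Hypothetical sketch: linear altitude correction of delta-18O in
# precipitation using a seasonal isotopic lapse rate (permil per 100 m).

def d18o_at_altitude(d18o_ref, alt_ref_m, alt_m, lapse_per_100m):
    """Extrapolate a reference-station delta-18O value to another altitude."""
    return d18o_ref + (alt_m - alt_ref_m) / 100.0 * lapse_per_100m

# Hypothetical station at 500 m extrapolated to 1500 m:
summer = d18o_at_altitude(-9.0, 500, 1500, -0.18)    # mid-range summer gradient
winter = d18o_at_altitude(-12.0, 500, 1500, -0.58)   # low-elevation winter gradient
print(summer, winter)
```

The much steeper winter gradient is exactly why, per the abstract, a single annual lapse rate would misplace the winter isotope field, and why the interpolation scheme treats the seasons separately.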

Relevance: 10.00%
Abstract:

BACKGROUND Leukodystrophy associated with multiple acyl-CoA dehydrogenase deficiency (MADD), also called glutaric aciduria type 2, may be severe and progressive despite conventional treatment with a protein- and fat-restricted diet, carnitine, riboflavin, and coenzyme Q10. Administration of ketone bodies was described as a promising adjunct, but has only been documented once. METHODS We describe a Portuguese boy of consanguineous parents who developed progressive muscle weakness at 2.5 y of age, followed by severe metabolic decompensation with hypoglycaemia and coma triggered by a viral infection. Magnetic resonance (MR) imaging showed diffuse leukodystrophy. MADD was diagnosed by biochemical and molecular analyses. Clinical deterioration continued despite conventional treatment. Enteral sodium D,L-3-hydroxybutyrate (NaHB) was progressively introduced and maintained at 600 mg/kg BW/d (≈3% of caloric need). Follow-up was 3 y and included regular clinical examinations, biochemical studies, and imaging. RESULTS During follow-up, the initial GMFC-MLD level (motor function classification system, 0 = normal, 6 = maximum impairment) of 5-6 gradually improved to 1 after 5 mo. Social functioning and quality of life recovered remarkably. We found considerable improvement on MR imaging and spectroscopy during follow-up, with a certain lag behind clinical recovery. There was some persistent residual developmental delay. CONCLUSION NaHB is a highly effective and safe treatment that needs further controlled studies.

Relevance: 10.00%
Abstract:

The choice and duration of antiplatelet therapy for secondary prevention of coronary artery disease (CAD) is determined by the clinical context and treatment strategy. Oral antiplatelet agents for secondary prevention include the cyclo-oxygenase-1 inhibitor aspirin, and the ADP dependent P2Y12 inhibitors clopidogrel, prasugrel and ticagrelor. Aspirin constitutes the cornerstone in secondary prevention of CAD and is complemented by clopidogrel in patients with stable CAD undergoing percutaneous coronary intervention. Among patients with acute coronary syndrome, prasugrel and ticagrelor improve net clinical outcome by reducing ischaemic adverse events at the expense of an increased risk of bleeding as compared with clopidogrel. Prasugrel appears particularly effective among patients with ST elevation myocardial infarction to reduce the risk of stent thrombosis compared with clopidogrel, and offered a greater net clinical benefit among patients with diabetes compared with patients without diabetes. Ticagrelor is associated with reduced mortality without increasing the rate of coronary artery bypass graft (CABG)-related bleeding as compared with clopidogrel. Dual antiplatelet therapy should be continued for a minimum of 1 year among patients with acute coronary syndrome irrespective of stent type; among patients with stable CAD treated with new generation drug-eluting stents, available data suggest no benefit to prolong antiplatelet treatment beyond 6 months.

Relevance: 10.00%
Abstract:

Up to one third of the general population suffers from symptoms caused by hemorrhoids. Conservative treatment comes first unless the patient presents with an acute hemorrhoidal prolapse or a thrombosis. A fiber-enriched diet is the primary treatment option, recommended in the perioperative period as well as for long-term prophylaxis. A time-limited application of topical ointments or suppositories and/or flavonoids are further treatment options. When symptoms persist, interventional procedures for grade I-II hemorrhoids and surgery for grade III-IV hemorrhoids should be considered. Rubber band ligation is the interventional treatment of choice. A comparable efficacy using sclerosing or infrared therapy has not yet been demonstrated. We therefore do not recommend these treatment options for the cure of hemorrhoids. Self-treatment by anal insertion of bougies is low-risk and may be successful, particularly in the setting of an elevated sphincter pressure. Anal dilation, sphincterotomy, cryosurgery, bipolar diathermy, galvanic electrotherapy, and heat therapy should be regarded as obsolete given the poor or missing data reported for these methods. For a long time, the classic excisional hemorrhoidectomy was considered to be the gold standard as far as surgical procedures are concerned. Primary closure (Ferguson) seems to be superior to the "open" version (Milligan-Morgan) with respect to postoperative pain and wound healing. The more recently proposed stapled hemorrhoidopexy (Longo) is particularly advisable for circular hemorrhoids. Compared to excisional hemorrhoidectomy, the Longo operation is associated with reduced postoperative pain, shorter operation time and hospital stay as well as a faster recovery, with the disadvantage, though, of a higher recurrence rate.
Data from hemorrhoidal artery ligation (HAL), if appropriate in combination with recto-anal repair (HAL/RAR), demonstrate a similar trend towards a better tolerance of the procedure at the expense of a higher recurrence rate. These relatively "new" procedures equally qualify for the treatment of grade III and IV hemorrhoids and, in the case of stapled hemorrhoidopexy, may even be employed in the emergency situation of an acute anal prolapse. While under certain circumstances different treatment options are equivalent, there is a clear specificity with respect to the application of those procedures in other situations. The respective pros and cons need to be discussed separately with every patient, and a treatment strategy has to be defined according to their individual requirements.

Relevance: 10.00%
Abstract:

PURPOSE The aim of this work is to derive a theoretical framework for quantitative noise and temporal fidelity analysis of time-resolved k-space-based parallel imaging methods. THEORY An analytical formalism of noise distribution is derived extending the existing g-factor formulation for nontime-resolved generalized autocalibrating partially parallel acquisition (GRAPPA) to time-resolved k-space-based methods. The noise analysis considers temporal noise correlations and is further accompanied by a temporal filtering analysis. METHODS All methods are derived and presented for k-t-GRAPPA and PEAK-GRAPPA. A sliding window reconstruction and nontime-resolved GRAPPA are taken as a reference. Statistical validation is based on series of pseudoreplica images. The analysis is demonstrated on a short-axis cardiac CINE dataset. RESULTS The superior signal-to-noise performance of time-resolved over nontime-resolved parallel imaging methods at the expense of temporal frequency filtering is analytically confirmed. Further, different temporal frequency filter characteristics of k-t-GRAPPA, PEAK-GRAPPA, and sliding window are revealed. CONCLUSION The proposed analysis of noise behavior and temporal fidelity establishes a theoretical basis for a quantitative evaluation of time-resolved reconstruction methods. Therefore, the presented theory allows for comparison between time-resolved parallel imaging methods and also nontime-resolved methods. Magn Reson Med, 2014. © 2014 Wiley Periodicals, Inc.

Relevance: 10.00%
Abstract:

This study aims at assessing the skill of several climate field reconstruction (CFR) techniques to reconstruct past precipitation over continental Europe and the Mediterranean at seasonal time scales over the last two millennia from proxy records. A number of pseudoproxy experiments are performed within the virtual reality of a regional paleoclimate simulation at 45 km resolution to analyse different aspects of reconstruction skill. Canonical Correlation Analysis (CCA), two versions of an Analog Method (AM) and Bayesian hierarchical modeling (BHM) are applied to reconstruct precipitation from a synthetic network of pseudoproxies that are contaminated with various types of noise. The skill of the derived reconstructions is assessed through comparison with precipitation simulated by the regional climate model. Unlike BHM, CCA systematically underestimates the variance. The AM can be adjusted to overcome this shortcoming, presenting an intermediate behaviour between the two aforementioned techniques. However, a trade-off between reconstruction-target correlations and reconstructed variance is the drawback of all CFR techniques. CCA (BHM) presents the largest (lowest) skill in preserving the temporal evolution, whereas the AM can be tuned to reproduce better correlation at the expense of losing variance. While BHM has been shown to perform well for temperatures, it relies heavily on prescribed spatial correlation lengths. While this assumption is valid for temperature, it is hardly warranted for precipitation. In general, none of the methods outperforms the others. All experiments agree that a dense and regularly distributed proxy network is required to reconstruct precipitation accurately, reflecting its high spatial and temporal variability. This is especially true in summer, when a specifically short de-correlation distance from the proxy location is caused by localised summertime convective precipitation events.

Relevance: 10.00%
Abstract:

Auxin (IAA) is an important regulator of plant development and root differentiation. Although recent studies indicate that salicylic acid (SA) may also be important in this context by interfering with IAA signaling, comparatively little is known about its impact on the plant’s physiology, metabolism, and growth characteristics. Using carbon-11, a short-lived radioisotope (t1/2 = 20.4 min) administered as 11CO2 to maize plants (B73), we measured changes in these functions using SA and IAA treatments. IAA application decreased total root biomass, though it increased lateral root growth at the expense of primary root elongation. IAA-mediated inhibition of root growth was correlated with decreased 11CO2 fixation, photosystem II (PSII) efficiency, and total leaf carbon export of 11C-photoassimilates and their allocation belowground. Furthermore, IAA application increased leaf starch content. On the other hand, SA application increased total root biomass, 11CO2 fixation, PSII efficiency, and leaf carbon export of 11C-photoassimilates, but it decreased leaf starch content. IAA and SA induction patterns were also examined after root-herbivore attack by Diabrotica virgifera to place possible hormone crosstalk into a realistic environmental context. We found that 4 days after infestation, IAA was induced in the midzone and root tip, whereas SA was induced only in the upper proximal zone of damaged roots. We conclude that antagonistic crosstalk exists between IAA and SA which can affect the development of maize plants, particularly through alteration of the root system’s architecture, and we propose that the integration of both signals may shape the plant’s response to environmental stress.

Relevance: 10.00%
Abstract:

Lake sediments from arcto-boreal regions commonly contain abundant Betula pollen. However, palaeoenvironmental interpretations of Betula pollen are often ambiguous because of the lack of reliable morphological features to distinguish among ecologically distinct Betula species in western North America. We measured the grain diameters and pore depths of pollen from three tree-birch species (B. papyrifera, B. kenaica and B. neoalaskana) and two shrub-birch species (B. glandulosa and B. nana), and calculated the ratio of grain diameter to pore depth (D/P ratio). No statistical difference exists in all three parameters between the shrub-birch species or between two of the tree-birch species (B. kenaica and B. papyrifera), and B. neoalaskana is intermediate between the shrub-birch and the other two tree-birch species. However, mean pore depth is significantly larger for the tree species than for the shrub species. In contrast, mean grain diameter cannot distinguish tree and shrub species. Mean D/P ratio separates tree and shrub species less clearly than pore depth, but this ratio can be used for verification. The threshold for distinguishing pollen of tree versus shrub birch lies at 2.55 μm and 8.30 for pore depth and D/P ratio, respectively. We applied these thresholds to the analysis of Betula pollen in an Alaskan lake-sediment core spanning the past 800 years. Results show that shrub birch increased markedly at the expense of tree birch during the 'Little Ice Age'; this pattern is not discernible in the profile of total birch pollen.
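
The two thresholds above can be sketched as a simple classification rule. The measurements below are hypothetical, and the direction of the D/P cut is an assumption inferred from the text: since tree birch has the deeper pores, its diameter-to-pore-depth ratio should fall below the 8.30 threshold.

```python
# Sketch of the tree- vs shrub-birch rule stated in the abstract:
# pore depth > 2.55 um -> tree birch (primary criterion), with the
# D/P ratio < 8.30 used as a cross-check (assumed direction).
# Grain measurements below are hypothetical.

def classify_betula(pore_depth_um, grain_diameter_um):
    primary = "tree" if pore_depth_um > 2.55 else "shrub"
    dp_ratio = grain_diameter_um / pore_depth_um
    verification = "tree" if dp_ratio < 8.30 else "shrub"
    return primary, verification, round(dp_ratio, 2)

print(classify_betula(2.9, 22.0))   # hypothetical tree-birch grain
print(classify_betula(2.1, 21.0))   # hypothetical shrub-birch grain
```

Grains on which the two criteria disagree would presumably be left unassigned or flagged, consistent with the abstract's use of D/P only "for verification".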

Relevance: 10.00%
Abstract:

One key hypothesis in the study of brain size evolution is the expensive tissue hypothesis: the idea that increased investment into the brain should be compensated by decreased investment into other costly organs, for instance the gut. Although the hypothesis is supported by both comparative and experimental evidence, little is known about the potential changes in energetic requirements or digestive traits following such evolutionary shifts in brain and gut size. Organisms may meet the greater metabolic requirements of larger brains despite smaller guts via increased food intake or better digestion. But increased investment in the brain may also hamper somatic growth. To test these hypotheses, we used guppy (Poecilia reticulata) brain size selection lines with a pronounced negative association between brain and gut size and investigated feeding propensity, digestive efficiency (DE), and juvenile growth rate. We did not find any difference in feeding propensity or DE between large- and small-brained individuals. Instead, we found that large-brained females had slower growth during the first 10 weeks after birth. Our study provides experimental support that investment into larger brains at the expense of gut tissue carries costs that are not necessarily compensated by a more efficient digestive system.