876 results for play-based
Abstract:
Forest fires play a key role in the global carbon cycle and thus can affect regional and global climate. Although fires in extended areas of Russian boreal forests have a considerable influence on atmospheric greenhouse gas and soot concentrations, estimates of their impact on climate are hampered by a lack of data on the history of forest fires. Regions with a strongly continental climate are of particular importance because wildfires develop there more intensively. In this study we reconstruct the fire history of Southern Siberia during the past 750 years using ice-core-based nitrate, potassium, and charcoal concentration records from Belukha glacier in the continental Siberian Altai. A period of exceptionally high forest-fire activity was observed between AD 1600 and 1680, following an extremely dry period AD 1540-1600. Ice-core pollen data suggest distinct forest diebacks and the expansion of steppe in response to dry climatic conditions. Coherence with a paleoenvironmental record from the 200 km distant Siberian lake Teletskoye shows that the vegetational shift AD 1540-1680, the increase in fire activity AD 1600-1680, and the subsequent recovery of forests around AD 1700 were of regional significance. Dead biomass accumulation in response to drought and high temperatures around AD 1600 probably triggered the maximum in forest-fire activity AD 1600-1680. The extremely dry period in the 16th century was also observed at other sites in Central Asia and is possibly associated with a persistent positive mode of the Pacific Decadal Oscillation (PDO). No significant increase in biomass burning occurred in the Altai region during the last 300 years, despite strongly increasing temperatures and human activities. Our results imply that precipitation changes controlled fire-regime and vegetation shifts in the Altai region during the past 750 years.
We conclude that high sensitivity of ecosystems to occasional decadal-scale drought events may trigger unprecedented environmental reorganizations under global-warming conditions.
Abstract:
INTRODUCTION: Cognitive complaints, such as poor concentration and memory deficits, are frequent after whiplash injury and play an important role in disability. The origin of these complaints remains controversial. Some authors postulate brain lesions as a consequence of whiplash injuries. Potential diffuse axonal injury (DAI) with subsequent atrophy of the brain and ventricular expansion is of particular interest, as focal brain lesions have not been documented so far in whiplash injury. OBJECTIVE: To investigate whether traumatic brain injury can be identified using a magnetic resonance (MR)-based quantitative analysis of normalized ventricle-brain ratios (VBR) in chronic whiplash patients with subjective cognitive impairment that cannot be objectively confirmed by neuropsychological testing. MATERIALS AND METHODS: MR examination was performed in 21 patients with whiplash injury and symptom persistence for 9 months on average, and in 18 matched healthy controls. Conventional MR imaging (MRI) was used to assess the volumes of grey and white matter and of the ventricles. The normalized VBR was calculated. RESULTS: The values of normalized VBR did not differ between whiplash patients and healthy controls (F = 0.216, P = 0.645). CONCLUSIONS: This study does not support loss of brain tissue following whiplash injury as measured by VBR. On this basis, traumatic brain injury with subsequent DAI does not seem to be the underlying mechanism for persistent concentration and memory deficits that are subjectively reported but not objectively verifiable as neuropsychological deficits.
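The ventricle-brain ratio at the centre of this study can be illustrated with a toy calculation. The function below is a sketch under stated assumptions: the grey-plus-white normalisation and the volume values are invented for illustration, not the study's actual segmentation protocol.

```python
def ventricle_brain_ratio(ventricle_ml, grey_ml, white_ml):
    """Ventricle-to-brain volume ratio from segmented MRI volumes (ml).
    Illustrative only: the study's exact normalisation is not specified here."""
    brain_ml = grey_ml + white_ml          # total brain tissue volume
    return ventricle_ml / brain_ml

# a hypothetical subject: 30 ml ventricles, 600 ml grey, 500 ml white matter
vbr = ventricle_brain_ratio(30.0, 600.0, 500.0)
```

In the study, group means of such ratios were then compared between patients and controls (the reported F-test).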
Abstract:
BACKGROUND: In patients with coronary artery disease (CAD), a well-developed collateral circulation has been shown to be important. The aim of this prospective study using peripheral blood monocytes was to identify marker genes for an extensively grown coronary collateral circulation. METHODS: Collateral flow index (CFI) was obtained invasively by angioplasty pressure sensor guidewire in 160 individuals (110 patients with CAD, and 50 individuals without CAD). RNA was extracted from monocytes, followed by microarray-based gene-expression analysis. 76 selected genes were analysed by real-time polymerase chain reaction (PCR). A receiver operating characteristic analysis based on differential gene expression was then performed to separate individuals with poorly developed (CFI < 0.21) and well-developed collaterals (CFI ≥ 0.21). Thereafter, the influence of the chemokine MCP-1 on the expression of six selected genes was tested by PCR. RESULTS: The expression of 203 genes correlated significantly with CFI (p = 0.000002-0.00267) in patients with CAD, as did 56 genes in individuals without CAD (p = 0.0079-0.0430). Biological pathway analysis revealed that 76 of those genes belong to four different pathways: angiogenesis, integrin-, platelet-derived growth factor-, and transforming growth factor beta-signalling. Three genes in each subgroup differentiated with high specificity between individuals with low and high CFI (cut-off 0.21). Two of these genes showed pronounced differential expression between the two groups after cell stimulation with MCP-1. CONCLUSIONS: Genetic factors play a role in the formation and the preformation of the coronary collateral circulation. Gene expression analysis in peripheral blood monocytes can be used for non-invasive differentiation between individuals with poorly and well-developed collaterals. MCP-1 can influence the arteriogenic potential of monocytes.
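The ROC step (separating poorly and well-developed collaterals at CFI 0.21 by differential gene expression) can be sketched in a few lines. The expression values below are invented and the rank-based AUC is a generic formulation, not the study's software or data.

```python
def roc_auc(scores, labels):
    # rank-based AUC: probability that a case with well-developed collaterals
    # scores higher than a case with poorly developed collaterals
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

cfi = [0.10, 0.15, 0.18, 0.25, 0.30, 0.35]
labels = [1 if c >= 0.21 else 0 for c in cfi]   # 1 = well-developed collaterals
expression = [2.1, 2.4, 2.2, 3.5, 3.9, 3.1]     # hypothetical marker-gene expression
auc = roc_auc(expression, labels)               # 1.0 here: perfect separation
```

An AUC close to 1 is what "differentiated with high specificity" amounts to for a single candidate gene.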
Abstract:
Landscape structure and heterogeneity play a potentially important, but little understood, role in predator-prey interactions and behaviourally-mediated habitat selection. For example, habitat complexity may either reduce or enhance the efficiency of a predator's efforts to search, track, capture, kill and consume prey. For prey, structural heterogeneity may affect predator detection, avoidance and defense, escape tactics, and the ability to exploit refuges. This study investigates whether and how vegetation and topographic structure influence the spatial patterns and distribution of moose (Alces alces) mortality due to predation and malnutrition at the local and landscape levels on Isle Royale National Park. 230 locations where wolves (Canis lupus) killed moose during the winters between 2002 and 2010, and 182 moose starvation death sites for the period 1996-2010, were selected from the extensive Isle Royale Wolf-Moose Project carcass database. A variety of LiDAR-derived metrics were generated and used in an algorithmic model (Random Forest) to identify, characterize, and classify three-dimensional variables significant to each of the mortality classes. Furthermore, spatial models were developed to predict and assess the likelihood of moose mortality at the landscape scale. This research found that the patterns of moose mortality by predation and malnutrition across the landscape are non-random, have a high degree of spatial variability, and that both mechanisms operate in contexts of comparable physiographic and vegetation structure. Wolf winter hunting locations on Isle Royale are more likely to be a result of their prey's habitat selection, although wolves seem to prioritize the overall areas with higher moose density in the winter. Furthermore, the findings suggest that the distribution of moose mortality by predation is habitat-specific to moose, and not to wolves.
In addition, moose sex, age, and health condition affect mortality site selection, as revealed by subtle differences between sites in vegetation height, vegetation density, and topography. Vegetation density in particular appears to differentiate mortality locations for distinct classes of moose. The results also emphasize the significance of fine-scale landscape and habitat features when addressing predator-prey interactions. These finer-scale findings would easily be missed if analyses were limited to the broader landscape scale alone.
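The Random Forest idea applied to the LiDAR metrics (bootstrap resampling of sites plus majority voting over many simple trees) can be sketched as follows. The metrics, labels, and the use of one-split "stumps" as trees are illustrative assumptions, not the project's actual model or data.

```python
import random

def fit_stump(train):
    # one-feature threshold classifier ("decision stump") minimising training error
    best = None
    for f in range(len(train[0][0])):
        for x, _ in train:
            for sign in (1, -1):
                thr = x[f]
                err = sum((sign * (xi[f] - thr) > 0) != yi for xi, yi in train)
                if best is None or err < best[0]:
                    best = (err, f, thr, sign)
    _, f, thr, sign = best
    return lambda x: int(sign * (x[f] - thr) > 0)

def random_forest(train, n_trees=25, seed=0):
    # bagging: fit each stump on a bootstrap sample, then take a majority vote
    rng = random.Random(seed)
    trees = [fit_stump([rng.choice(train) for _ in train]) for _ in range(n_trees)]
    return lambda x: int(sum(t(x) for t in trees) > n_trees / 2)

# hypothetical LiDAR-derived metrics (canopy height in m, vegetation density 0-1);
# label 1 = wolf-kill site, 0 = other location
sites = [((2.0, 0.20), 1), ((2.5, 0.30), 1), ((2.2, 0.25), 1),
         ((8.0, 0.80), 0), ((9.0, 0.70), 0), ((8.5, 0.75), 0)]
classify = random_forest(sites)
```

Real Random Forests grow full decision trees with random feature subsets at each split; the bootstrap-plus-vote structure shown here is the part that makes the ensemble robust to noisy individual predictors.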
Abstract:
Background Cardiac arrests are handled by teams rather than by individual health-care workers. Recent investigations demonstrate that adherence to CPR guidelines can be less than optimal, that deviations from treatment algorithms are associated with lower survival rates, and that deficits in performance are associated with shortcomings in the process of team-building. The aim of this study was to explore and quantify the effects of ad-hoc team-building on adherence to the algorithms of CPR among two types of physicians who play an important role as first responders during CPR: general practitioners and hospital physicians. Methods To unmask the effects of team-building, this prospective randomised study compared the performance of preformed teams, i.e. teams that had undergone their process of team-building prior to the onset of a cardiac arrest, with that of teams that had to form ad-hoc during the cardiac arrest. 50 teams consisting of three general practitioners each and 50 teams consisting of three hospital physicians each were randomised to two different versions of a simulated witnessed cardiac arrest: the arrest occurred either in the presence of only one physician while the remaining two physicians were summoned to help ("ad-hoc"), or it occurred in the presence of all three physicians ("preformed"). All scenarios were videotaped and performance was analysed post-hoc by two independent observers. Results Compared to preformed teams, ad-hoc forming teams had less hands-on time during the first 180 seconds of the arrest (93 ± 37 vs. 124 ± 33 sec, P < 0.0001), delayed their first defibrillation (67 ± 42 vs. 107 ± 46 sec, P < 0.0001), and made fewer leadership statements (15 ± 5 vs. 21 ± 6, P < 0.0001). Conclusion Hands-on time and time to defibrillation, two performance markers of CPR with proven relevance for medical outcome, are negatively affected by shortcomings in the process of ad-hoc team-building and particularly by deficits in leadership.
Team-building thus has to be regarded as an additional task imposed on teams forming ad-hoc during CPR. All physicians should be aware that early structuring of one's own team is a prerequisite for the timely and effective execution of CPR.
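Group comparisons like those reported above can be reproduced approximately from the summary statistics alone. This Welch t-statistic sketch assumes n = 50 teams per arm as stated in the methods; it is an illustrative re-analysis, not the authors' statistical procedure.

```python
import math

def welch_t(mean1, sd1, n1, mean2, sd2, n2):
    # Welch's t statistic from summary statistics (does not assume equal variances)
    se = math.sqrt(sd1 ** 2 / n1 + sd2 ** 2 / n2)
    return (mean1 - mean2) / se

# hands-on time in the first 180 s: ad-hoc (93 ± 37) vs preformed (124 ± 33) teams
t_hands_on = welch_t(93, 37, 50, 124, 33, 50)   # ≈ -4.42
```

A |t| above roughly 4 with 50 teams per group is consistent with the reported P < 0.0001.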
Abstract:
National and international studies demonstrate that the number of teenagers using the internet is increasing. But even though they do have access to the information and communication pool of the internet from different places, there is evidence that the ways in which teenagers use the net - regarding the scope and frequency with which services are used as well as the preferences for different contents of these services - differ significantly in relation to socio-economic status, education, and gender. The results of the relevant empirical studies may be summarised as follows: teenagers with low (formal) education primarily use internet services embracing 'entertainment, play and fun', while more highly educated teenagers (also) prefer intellectually more demanding services, particularly those supplying a greater variety of communicative and informative activities. More generally, pedagogical and sociological studies investigating the "digital divide" in a differentiated and sophisticated way - i.e. not only in terms of differences between those who have access to the internet and those who do not - suggest that the internet is no space beyond 'social reality' (e.g. DiMaggio & Hargittai 2001, 2003; Vogelgesang, 2002; Welling, 2003). Different modes of utilisation that structure the internet as a social space are primarily a specific contextualisation of the latter - and thus the opportunities and constraints in the virtual world of the internet are, no less than those in the 'real world', related to unequal distributions of material, social and cultural resources as well as the social embeddings of the actors involved. This inequality also holds for the outcomes of using the internet. Empirical and theoretical results concerning forms and processes of networking and community building - i.e. sociability on the internet, as well as the social embeddings of the users which are mediated through the internet - suggest that net-based communication and information processes may provide the resource of 'social support'. Thus, with reference to social work and the task of compensating for the reproduction of social disadvantages - whether they are medial or not - the ways in which teenagers get access to and utilise net-based social support are to be analysed.
Abstract:
Background: Accelerometry has been established as an objective method that can be used to assess physical activity behavior in large groups. The purpose of the current study was to provide a validated equation to translate accelerometer counts of the triaxial GT3X into energy expenditure in young children. Methods: Thirty-two children aged 5–9 years performed locomotor and play activities that are typical for their age group. Children wore a GT3X accelerometer and their energy expenditure was measured with indirect calorimetry. Twenty-one children were randomly selected to serve as the development group. A cubic 2-regression model involving separate equations for locomotor and play activities was developed on the basis of model fit. It was then validated using the data of the remaining children and compared with a linear 2-regression model and a linear 1-regression model. Results: All 3 regression models produced strong correlations between predicted and measured MET values. Agreement was acceptable for the cubic model and good for both linear regression approaches. Conclusions: The current linear 1-regression model provides valid estimates of energy expenditure for ActiGraph GT3X data in 5- to 9-year-old children and shows equal or better predictive validity than a cubic or a linear 2-regression model.
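The 2-regression structure (separate count-to-MET equations for locomotor and play activities) can be sketched with ordinary least squares. The calibration numbers below are made up and perfectly linear for clarity; they are not the study's counts or MET values.

```python
def fit_linear(xs, ys):
    # ordinary least squares for y = a + b * x (one predictor)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

class TwoRegressionModel:
    """Separate count-to-MET equations for locomotor and play activities (sketch)."""
    def __init__(self, locomotor, play):
        self.eq = {"locomotor": fit_linear(*locomotor), "play": fit_linear(*play)}

    def predict(self, counts, activity):
        a, b = self.eq[activity]
        return a + b * counts

# invented calibration data: (counts per epoch, measured METs) for each activity type
model = TwoRegressionModel(locomotor=([100, 200, 300], [3.0, 5.0, 7.0]),
                           play=([100, 200, 300], [2.0, 3.0, 4.0]))
```

The study's finding was that a single linear equation (dropping the activity-type split) predicted energy expenditure at least as well as this two-equation structure.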
Abstract:
The ActiGraph accelerometer is commonly used to measure physical activity in children. Count cut-off points are needed when using accelerometer data to determine the time a person spent in moderate or vigorous physical activity. For the GT3X accelerometer no cut-off points for young children have been published yet. The aim of the current study was thus to develop and validate count cut-off points for young children. Thirty-two children aged 5 to 9 years performed four locomotor and four play activities. Activity classification into the light-, moderate- or vigorous-intensity category was based on energy expenditure measurements with indirect calorimetry. Vertical axis as well as vector magnitude cut-off points were determined through receiver operating characteristic curve analyses with the data of two thirds of the study group and validated with the data of the remaining third. The vertical axis cut-off points were 133 counts per 5 sec for moderate to vigorous physical activity (MVPA), 193 counts for vigorous activity (VPA) corresponding to a metabolic threshold of 5 MET and 233 for VPA corresponding to 6 MET. The vector magnitude cut-off points were 246 counts per 5 sec for MVPA, 316 counts for VPA - 5 MET and 381 counts for VPA - 6 MET. When validated, the current cut-off points generally showed high recognition rates for each category, high sensitivity and specificity values and moderate agreement in terms of the Kappa statistic. These results were similar for vertical axis and vector magnitude cut-off points. The current cut-off points adequately reflect MVPA and VPA in young children. Cut-off points based on vector magnitude counts did not appear to reflect the intensity categories better than cut-off points based on vertical axis counts alone.
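Deriving a count cut-off from a ROC analysis is commonly done by maximising Youden's J (sensitivity + specificity - 1) over candidate thresholds; whether the authors used this exact criterion is an assumption, and the 5-second epochs below are invented.

```python
def youden_cutoff(counts, is_mvpa):
    """Pick the count threshold maximising Youden's J = sensitivity + specificity - 1."""
    best_j, best_thr = -1.0, None
    for thr in sorted(set(counts)):
        tp = sum(c >= thr and m for c, m in zip(counts, is_mvpa))
        fn = sum(c < thr and m for c, m in zip(counts, is_mvpa))
        tn = sum(c < thr and not m for c, m in zip(counts, is_mvpa))
        fp = sum(c >= thr and not m for c, m in zip(counts, is_mvpa))
        j = tp / (tp + fn) + tn / (tn + fp) - 1
        if j > best_j:
            best_j, best_thr = j, thr
    return best_thr

# hypothetical 5-s epochs: vertical-axis counts, MVPA labels from calorimetry
counts_5s = [40, 80, 120, 140, 200, 260]
mvpa = [False, False, False, True, True, True]
```

On this toy data the selected cut-off is 140 counts per 5 s, which perfectly separates the labels; real epoch data overlap, so the chosen threshold trades sensitivity against specificity.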
Abstract:
The Internet of Things (IoT) is attracting considerable attention from universities, industry, citizens and governments for applications such as healthcare, environmental monitoring and smart buildings. IoT enables network connectivity between smart devices at all times, everywhere, and about everything. In this context, Wireless Sensor Networks (WSNs) play an important role in increasing the ubiquity of networks with smart devices that are low-cost and easy to deploy. However, sensor nodes are restricted in terms of energy, processing and memory. Additionally, low-power radios are very sensitive to noise, interference and multipath distortions. Against this background, this article proposes Routing by Energy and Link quality (REL), a routing protocol for IoT applications. To increase reliability and energy-efficiency, REL selects routes on the basis of a proposed end-to-end link quality estimator mechanism, residual energy and hop count. Furthermore, REL uses an event-driven mechanism to provide load balancing and avoid the premature energy depletion of nodes/networks. Performance evaluations were carried out using simulation and testbed experiments to show the impact and benefits of REL in small and large-scale networks. The results show that REL increases the network lifetime and service availability, as well as the quality of service of IoT applications. It also provides an even distribution of scarce network resources and reduces the packet loss rate, compared with the performance of well-known protocols.
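Route selection over the three criteria REL uses (end-to-end link quality, residual energy, hop count) can be sketched as a weighted score. The weights, the bottleneck-minimum aggregation, and the route data below are illustrative assumptions, not the actual REL formulation.

```python
def route_score(route, w_lq=0.5, w_energy=0.3, w_hops=0.2):
    """Score a candidate route (higher is better). Weights are invented
    for illustration; REL's real decision logic differs."""
    lq = min(hop["link_quality"] for hop in route)         # end-to-end bottleneck quality
    energy = min(hop["residual_energy"] for hop in route)  # weakest node's remaining battery
    hops = len(route)                                      # shorter paths preferred
    return w_lq * lq + w_energy * energy - w_hops * hops

routes = [
    [{"link_quality": 0.90, "residual_energy": 0.80},
     {"link_quality": 0.85, "residual_energy": 0.90}],     # two-hop route
    [{"link_quality": 0.60, "residual_energy": 0.95}],     # single-hop route
]
best = max(routes, key=route_score)
```

Using the minimum over hops captures the end-to-end character of the estimator: one weak link or one nearly depleted node degrades the whole route's score, which is what steers traffic away from nodes at risk of premature energy depletion.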
Abstract:
BACKGROUND Prophylactic measures are key components of dairy herd mastitis control programs, but some are only relevant in specific housing systems. To assess the association between management practices and mastitis incidence, data collected in 2011 by a survey among 979 randomly selected Swiss dairy farms, together with information from the regular test day recordings from 680 of these farms, were analyzed. RESULTS The median incidence of farmer-reported clinical mastitis (ICM) was 11.6 (mean 14.7) cases per 100 cows per year. The median annual proportion of milk samples with a composite somatic cell count (PSCC) above 200,000 cells/ml was 16.1 (mean 17.3) %. A multivariable negative binomial regression model was fitted for each of the mastitis indicators for farms with tie-stall and free-stall housing systems separately to study the effect of other (than housing system) management practices on the ICM and PSCC events (above 200,000 cells/ml). The results differed substantially by housing system and outcome. In tie-stall systems, clinical mastitis incidence was mainly affected by region (mountainous production zone; incidence rate ratio (IRR) = 0.73), the dairy herd replacement system (IRR = 1.27) and the farmer's age (IRR = 0.81). The proportion of high SCC was mainly associated with dry cow udder controls (IRR = 0.67), clean bedding material at calving (IRR = 1.72), using total merit values to select bulls (IRR = 1.57) and body condition scoring (IRR = 0.74). In free-stall systems, the IRR for clinical mastitis was mainly associated with stall climate/temperature (IRR = 1.65), comfort mats as resting surface (IRR = 0.75) and no feed analysis being carried out (IRR = 1.18). The proportion of high SCC was only associated with hand and arm cleaning after calving (IRR = 0.81) and using beef producing value to select bulls (IRR = 0.66). CONCLUSIONS There were substantial differences in the risk factors identified by the four models.
Some of the factors were in agreement with the reported literature while others were not. This highlights the multifactorial nature of the disease and the differences in the risks for both mastitis manifestations. Attempting to understand these multifactorial associations for mastitis within larger management groups continues to play an important role in mastitis control programs.
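The incidence rate ratios quoted throughout this abstract come from exponentiating coefficients of a log-link count model (here, negative binomial regression). The coefficient below is back-derived for illustration (ln 0.73 ≈ -0.31), not a fitted value from the paper.

```python
import math

def incidence_rate_ratio(beta):
    # a log-link count model (Poisson / negative binomial) reports coefficients
    # on the log scale; exponentiating a coefficient gives the IRR for a
    # one-unit change in that predictor
    return math.exp(beta)

# back-derived illustration: ln(0.73) ≈ -0.31 for the mountainous-zone effect,
# i.e. roughly 27% fewer clinical mastitis cases in that region
irr_mountain = incidence_rate_ratio(-0.31)
```

An IRR below 1 (e.g. 0.73) indicates a protective association; above 1 (e.g. 1.72 for clean bedding at calving) indicates an elevated rate of the outcome.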
Abstract:
Heat shock protein 90 (HSP90) is an abundant molecular chaperone that regulates the functional stability of client oncoproteins, such as STAT3, Raf-1 and Akt, which play a role in the survival of malignant cells. The chaperone function of HSP90 is driven by the binding and hydrolysis of ATP. The geldanamycin analog, 17-AAG, binds to the ATP pocket of HSP90 leading to the degradation of client proteins. However, treatment with 17-AAG results in the elevation of the levels of antiapoptotic proteins HSP70 and HSP27, which may lead to cell death resistance. The increase in HSP70 and HSP27 protein levels is due to the activation of the transcription factor HSF-1 binding to the promoter region of HSP70 and HSP27 genes. HSF-1 binding subsequently promotes HSP70 and HSP27 gene expression. Based on this, I hypothesized that inhibition of transcription/translation of HSP or client proteins would enhance 17-AAG-mediated cytotoxicity. Multiple myeloma (MM) cell lines MM.1S, RPMI-8226, and U266 were used as a model. To test this hypothesis, two different strategies were used. For the first approach, a transcription inhibitor was combined with 17-AAG. The established transcription inhibitor Actinomycin D (Act D), used in the clinic, intercalates into DNA and blocks RNA elongation. Stress inducible (HSP90α, HSP70 and HSP27) and constitutive (HSP90β and HSC70) mRNA and protein levels were measured using real time RT-PCR and immunoblot assays. Treatment with 0.5 µM 17-AAG for 8 hours resulted in the induction of all HSP transcript and protein levels in the MM cell lines. This induction of HSP mRNA levels was diminished by 0.05 µg/mL Act D for 12 hours in the combination treatment, except for HSP70. At the protein level, Act D abrogated the 17-AAG-mediated induction of all HSP expression levels, including HSP70. Cytotoxic evaluation (Annexin V/7-AAD assay) of Act D in combination with 17-AAG suggested additive or more than additive interactions.
For the second strategy, an agent that affected bioenergy production in addition to targeting transcription and translation was used. Since ATP is necessary for the proper folding and maturation of client proteins by HSP90, ATP depletion should lead to a decrease in client protein levels. The transcription and translation inhibitor 8-Chloro-Adenosine (8-Cl-Ado), currently in clinical trials, is metabolized into its cytotoxic form 8-Cl-ATP, causing a parallel decrease of the cellular ATP pool. Treatment with 0.5 µM 17-AAG for 8 hours resulted in the induction of all HSP transcript and protein levels in the three MM cell lines evaluated. In the combination treatment, 10 µM 8-Cl-Ado for 20 hours did not abrogate the induction of HSP mRNA or protein levels. Since cellular bioenergy is necessary for the stabilization of oncoproteins by HSP90, immunoblot assays analyzing the expression levels of client proteins such as STAT3, Raf-1, and Akt were performed. Immunoblot assays detecting the phosphorylation status of the translation repressor 4E-BP1, whose activity is modulated by upstream kinases sensitive to changes in ATP levels, were also performed. The hypophosphorylated state of 4E-BP1 leads to translation repression. Data indicated that treatment with 17-AAG alone resulted in a minor (<10%) change in STAT3, Raf-1, and Akt protein levels, while no change was observed for 4E-BP1. The combination treatment resulted in more than 50% decrease of the client protein levels and hypophosphorylation of 4E-BP1 in all MM cell lines. Treatment with 8-Cl-Ado alone resulted in less than 30% decrease in client protein levels as well as a decrease in 4E-BP1 phosphorylation. Cytotoxic evaluation of 8-Cl-Ado in combination with 17-AAG resulted in more than additive cytotoxicity when drugs were combined in a sequential manner.
In summary, these data suggest that the mechanism-based combination of 17-AAG with agents that target transcription or translation, or that decrease cellular bioenergy, results in increased cytotoxicity compared with the single agents. Such combination strategies may be applicable in the clinic, since these drugs are established chemotherapeutic agents or are currently in clinical trials.
Abstract:
Many plant species are able to tolerate severe disturbance leading to removal of a substantial portion of the body by resprouting from intact or fragmented organs. Resprouting enables plants to compensate for biomass loss and complete their life cycles. The degree of disturbance tolerance, and hence the ecological advantage of damage tolerance (in contrast to alternative strategies), has been reported to be affected by environmental productivity. In our study, we examined the influence of soil nutrients (as an indicator of environmental productivity) on biomass and stored carbohydrate compensation after removal of aboveground parts in the perennial resprouter Plantago lanceolata. Specifically, we tested and compared the effects of nutrient availability on biomass and carbon storage in damaged and undamaged individuals. Damaged plants of P. lanceolata compensated neither in terms of biomass nor overall carbon storage. However, whereas in the nutrient-poor environment, root total non-structural carbohydrate concentrations (TNC) were similar for damaged and undamaged plants, in the nutrient-rich environment, damaged plants had remarkably higher TNC than undamaged plants. Based on TNC allocation patterns, we conclude that tolerance to disturbance is promoted in more productive environments, where higher photosynthetic efficiency allows for successful replenishment of carbohydrates. Although plants under nutrient-rich conditions did not compensate in terms of biomass or seed production, they entered winter with higher content of carbohydrates, which might result in better performance in the next growing season. This otherwise overlooked compensation mechanism might be responsible for inconsistent results reported from other studies.
Abstract:
Aromatic π–π stacking interactions are ubiquitous in nature, medicinal chemistry and materials sciences. They play a crucial role in the stacking of nucleobases, thus stabilising the DNA double helix. This paper describes a series of chimeric DNA–polycyclic aromatic hydrocarbon (PAH) hybrids. The PAH building blocks, electron-rich pyrene and electron-poor perylenediimide (PDI), were incorporated into complementary DNA strands. The hybrids contain different numbers of pyrene–PDI interactions, which were found to directly influence duplex stability. As the pyrene–PDI ratio approaches 1:1, the stability of the duplexes increases by an average of 7.5 °C per pyrene–PDI supramolecular interaction, indicating the importance of electrostatic complementarity for aromatic π–π stacking interactions.
Abstract:
We present quantitative reconstructions of regional vegetation cover in north-western Europe, western Europe north of the Alps, and eastern Europe for five time windows in the Holocene [around 6k, 3k, 0.5k, 0.2k, and 0.05k calendar years before present (bp)] at a 1° × 1° spatial scale, with the objective of producing vegetation descriptions suitable for climate modelling. The REVEALS model was applied to 636 pollen records from lakes and bogs to reconstruct the past cover of 25 plant taxa grouped into 10 plant-functional types and three land-cover types [evergreen trees, summer-green (deciduous) trees, and open land]. The model corrects for some of the biases in pollen percentages by using pollen productivity estimates and fall speeds of pollen, and by applying simple but robust models of pollen dispersal and deposition. The emerging patterns of tree migration and deforestation between 6k bp and modern time in the REVEALS estimates agree with our general understanding of the vegetation history of Europe based on pollen percentages. However, the degree of anthropogenic deforestation (i.e. cover of cultivated and grazing land) at 3k, 0.5k, and 0.2k bp is significantly higher than deduced from pollen percentages. This is also the case at 6k bp in some parts of Europe, in particular Britain and Ireland. Furthermore, the relationship between summer-green and evergreen trees, and between individual tree taxa, differs significantly when expressed as pollen percentages or as REVEALS estimates of tree cover. For instance, where Pinus is dominant over Picea in pollen percentages, Picea is dominant over Pinus in the REVEALS estimates. These differences play a major role in the reconstruction of European landscapes and in the study of land cover-climate interactions, biodiversity and human resources.
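The core correction REVEALS performs can be caricatured as dividing each taxon's pollen count by its productivity before renormalising. This is a crude sketch of the idea only: the counts and PPE values below are invented, and the real model additionally accounts for pollen fall speed, dispersal, deposition, and basin size.

```python
def corrected_cover(pollen_counts, ppe):
    """Crude illustration of the REVEALS idea: weight each taxon's pollen count
    by 1/PPE (pollen productivity estimate), then renormalise to proportions."""
    adj = {taxon: pollen_counts[taxon] / ppe[taxon] for taxon in pollen_counts}
    total = sum(adj.values())
    return {taxon: v / total for taxon, v in adj.items()}

# hypothetical counts and PPEs: Pinus over-produces pollen relative to Picea
counts = {"Pinus": 600, "Picea": 300, "Poaceae": 100}
ppe = {"Pinus": 6.0, "Picea": 2.0, "Poaceae": 1.0}
cover = corrected_cover(counts, ppe)
```

Even this toy version reproduces the Pinus/Picea reversal noted above: Pinus dominates the raw counts, but after dividing by productivity Picea has the larger estimated cover.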
Abstract:
Software dependencies play a vital role in programme comprehension, change impact analysis and other software maintenance activities. Traditionally, these activities are supported by source code analysis; however, the source code is sometimes inaccessible or difficult to analyse, as in hybrid systems composed of source code in multiple languages using various paradigms (e.g. object-oriented programming and relational databases). Moreover, not all stakeholders have adequate knowledge to perform such analyses. For example, non-technical domain experts and consultants raise most maintenance requests; however, they cannot predict the cost and impact of the requested changes without the support of the developers. We propose a novel approach to predicting software dependencies by exploiting the coupling present in domain-level information. Our approach is independent of the software implementation; hence, it can be used to approximate architectural dependencies without access to the source code or the database. As such, it can be applied to hybrid systems with heterogeneous source code or legacy systems with missing source code. In addition, this approach is based solely on information visible and understandable to domain users; therefore, it can be used efficiently by domain experts without the support of software developers. We evaluate our approach with a case study on a large-scale enterprise system, in which we demonstrate how up to 65% of the source code dependencies and 77% of the database dependencies are predicted solely on the basis of domain information.
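One simple way to exploit domain-level coupling is to predict a dependency wherever two modules share many of the domain concepts they expose. The Jaccard-overlap predictor, the module names, and the concept sets below are purely illustrative assumptions; the paper's actual predictor is built from domain-level coupling measures, not this exact formula.

```python
def jaccard(a, b):
    # overlap of two concept sets: |intersection| / |union|
    return len(a & b) / len(a | b)

def predict_dependencies(modules, threshold=0.25):
    """Predict a dependency between two modules when the domain concepts
    they expose overlap strongly (illustrative sketch only)."""
    names = list(modules)
    return {(m, n)
            for i, m in enumerate(names) for n in names[i + 1:]
            if jaccard(modules[m], modules[n]) >= threshold}

# hypothetical modules described only by domain concepts visible to end users
modules = {
    "Billing":   {"invoice", "customer", "payment"},
    "Shipping":  {"order", "customer", "address"},
    "Reporting": {"invoice", "payment", "order"},
}
deps = predict_dependencies(modules)
```

The appeal of such a predictor is exactly what the abstract argues: it needs no source code or schema access, only domain vocabulary that non-technical stakeholders already understand.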