951 results for Multiple-Time Scale Problem
Abstract:
Carbon fluxes and allocation patterns, and their relationship with the main environmental and physiological parameters, were studied in an apple orchard for one year (2010). I combined three widely used methods: eddy covariance, soil respiration, and biometric measurements, and I applied a measurement protocol allowing a cross-check between C fluxes estimated with the different methods. I attributed NPP components to standing biomass increment, the detritus cycle, and lateral export. The influence of environmental and physiological parameters on NEE, GPP, and Reco was analyzed with a multiple regression model approach. I found that both NEP and GPP of the apple orchard were of similar magnitude to those of forests growing in similar climate conditions, while large differences occurred in the allocation pattern and in the fate of produced biomass. Apple production accounted for 49% of annual NPP, organic material (leaves, fine root litter, pruned wood, and early fruit drop) contributing to the detritus cycle accounted for 46%, and only 5% went to standing biomass increment. The carbon use efficiency (CUE), with an annual average of 0.68 ± 0.10, was higher than the previously suggested constant values of 0.47-0.50. Light and leaf area index had the strongest influence on both NEE and GPP. On a diurnal basis, NEE and GPP reached their peak at approximately noon, while they appeared to be limited by high values of VPD and air temperature in the afternoon. The proposed models can be used to explain and simulate current relations between carbon fluxes and environmental parameters at daily and yearly time scales. On average, the annual NEP balanced the carbon annually exported with the harvested apples. These data support the hypothesis of a minimal or null impact of the apple orchard ecosystem on net C emission to the atmosphere.
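As a quick consistency check on the reported budget, the allocation shares and the CUE definition (CUE = NPP/GPP) can be combined directly. The sketch below uses only the percentages from the abstract and a hypothetical GPP of 1.0 in arbitrary units, not the study's measured fluxes.

    # Minimal sketch: partition NPP using the reported shares.
    # CUE = NPP / GPP; the abstract reports an annual mean CUE of 0.68.
    gpp = 1.0                     # hypothetical GPP, arbitrary units
    cue = 0.68                    # annual mean reported in the abstract
    npp = cue * gpp

    # Reported fate of NPP: 49% fruit (lateral export), 46% detritus cycle,
    # 5% standing biomass increment.
    allocation = {"fruit_export": 0.49, "detritus": 0.46, "biomass_increment": 0.05}
    assert abs(sum(allocation.values()) - 1.0) < 1e-9

    for pool, share in allocation.items():
        print(f"{pool}: {share * npp:.3f} (fraction of GPP)")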
Abstract:
Assuming that the heat capacity of a body is negligible outside certain inclusions, the heat equation degenerates to a parabolic-elliptic interface problem. In this work we aim to detect these interfaces from thermal measurements on the surface of the body. We deduce an equivalent variational formulation for the parabolic-elliptic problem and give a new proof of its unique solvability based on Lions's projection lemma. For the case that the heat conductivity is higher inside the inclusions, we develop an adaptation of the factorization method to this time-dependent problem. In particular, this shows that the locations of the interfaces are uniquely determined by boundary measurements. The method also yields a numerical algorithm to recover the inclusions and thus the interfaces. We demonstrate how measurement data can be simulated numerically by coupling a finite element method with a boundary element method, and finally we present some numerical results for the inverse problem.
Abstract:
The behaviour of a polymer depends strongly on the length and time scale as well as on the temperature at which it is probed. In this work, I describe investigations of polymer surfaces using scanning probe microscopy with heatable probes. With these probes, surfaces can be heated on time scales from seconds down to microseconds. I introduce experiments for the local and fast determination of glass transition and melting temperatures. I developed a method which allows the determination of glass transition and melting temperatures on films with thicknesses below 100 nm: a background measurement on the substrate was performed, and the resulting curve was subtracted from the measurement on the polymer film. The differential measurement on polystyrene films with thicknesses between 35 nm and 160 nm showed characteristic signals at 95 ± 1 °C, in accordance with the glass transition of polystyrene. Pressing heated probes into polymer films causes plastic deformation. Nanometer-sized deformations are currently investigated in novel concepts for high-density data storage. A suitable medium for such a storage system has to be easily indentable on the one hand, but on the other hand it also has to be very stable against surface-induced wear. For developing such a medium I investigated a new approach: a comparably soft material, namely polystyrene, was protected with a thin but very hard layer made of plasma-polymerized norbornene. The resulting bilayered media were tested for surface stability and deformability. I showed that the bilayered material combines the deformability of polystyrene with the surface stability of the plasma polymer, and that the material therefore is a very good storage medium. In addition, we investigated the glass transition temperature of polystyrene at time scales of 10 µs and found it to be approximately 220 °C. The increase of this characteristic temperature of the polymer results from the short time at which the polymer was probed and reflects the well-known time-temperature superposition principle. Heatable probes were also used for the characterization of silver-azide-filled nanocapsules. The use of heatable probes allowed determining the decomposition temperature of the capsules from a few nanograms of material. The measured decomposition temperatures ranged from 180 °C to 225 °C, in accordance with literature values. The investigation of small amounts of sample was necessary due to the limited availability of the material. Furthermore, investigating larger amounts of the capsules using conventional thermogravimetric analysis could lead to contamination or even damage of the instrument. Besides the analysis of material parameters, I used the heatable probes for the local thermal decomposition of pentacene precursor material in order to form nanoscale conductive structures. Here, the thickness of the precursor layer was important for complete thermal decomposition. Another aspect of my work was the investigation of redox-active polymers, poly-10-(4-vinylbenzyl)-10H-phenothiazine (PVBPT), for data storage. Data is stored by changing the local conductivity of the material by applying a voltage between tip and surface. The generated structures were stable for more than 16 h. It was shown that the presence of water is essential for successful patterning.
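The differential method described above (substrate background subtracted from the film measurement) is easy to illustrate numerically. The sketch below uses synthetic probe-response curves with the softening signal placed at 95 °C as in the abstract; it is not the instrument's actual data format.

    import numpy as np

    # Hypothetical temperature axis and probe-response curves (arbitrary units).
    temperature = np.linspace(40.0, 160.0, 601)      # deg C
    substrate = 0.002 * temperature                  # background on bare substrate
    # Film curve: same background plus a step-like softening signal near 95 deg C,
    # mimicking the glass-transition signal reported for polystyrene.
    film = substrate + 0.5 / (1.0 + np.exp(-(temperature - 95.0)))

    # Differential measurement: subtract the substrate background.
    differential = film - substrate

    # Locate the transition as the steepest point of the differential curve.
    tg_estimate = temperature[np.argmax(np.gradient(differential))]
    print(f"estimated Tg: {tg_estimate:.1f} deg C")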
Abstract:
In this thesis, different approaches to the modeling and simulation of the blood protein fibrinogen are presented. The approaches are meant to systematically connect the multiple time and length scales involved in the dynamics of fibrinogen in solution and at inorganic surfaces. The first part of the thesis covers simulations of fibrinogen at the all-atom level. Simulations of the fibrinogen protomer and dimer are performed in explicit solvent to characterize the dynamics of fibrinogen in solution. These simulations reveal an unexpectedly large and fast bending motion that is facilitated by molecular hinges located in the coiled-coil region of fibrinogen. This behavior is characterized by a bending and a dihedral angle, and the distribution of these angles is measured. As a consequence of the atomistic detail of the simulations, it is possible to illuminate small-scale behavior in the binding pockets of fibrinogen that hints at a previously unknown allosteric effect. In a second step, atomistic simulations of the fibrinogen protomer are performed at graphite and mica surfaces to investigate initial adsorption stages. These simulations highlight the different adsorption mechanisms at the hydrophobic graphite surface and the charged, hydrophilic mica surface. It is found that initial adsorption on mica happens in a preferred orientation. Many effects of practical interest involve aggregates of many fibrinogen molecules. To investigate such systems, time and length scales must be simulated that are not attainable in atomistic simulations. It is therefore necessary to develop lower-resolution models of fibrinogen, which is done in the second part of the thesis. First, a systematically coarse-grained model is derived and parametrized based on the atomistic simulations of the first part. In this model the fibrinogen molecule is represented by 45 beads instead of nearly 31,000 atoms. The intra-molecular interactions of the beads are modeled as a heterogeneous elastic network, while inter-molecular interactions are assumed to be a combination of electrostatic and van der Waals interactions. A method is presented that determines the charges assigned to the beads by matching the electrostatic potential in the atomistic simulation. Lastly, a phenomenological model is developed that represents fibrinogen by five beads connected by rigid rods with two hinges. This model only captures the large-scale dynamics of the atomistic simulations but can shed light on experimental observations of fibrinogen conformations at inorganic surfaces.
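The charge-assignment step lends itself to a compact illustration: choosing bead charges so that the coarse-grained Coulomb potential matches the atomistic one at sample points is a linear least-squares problem. The sketch below uses random stand-in geometry rather than fibrinogen coordinates, with units absorbed into the charges; the thesis's actual fitting procedure may differ in detail.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical stand-ins: 300 atoms with partial charges, 5 beads.
    atom_pos = rng.normal(size=(300, 3))
    atom_q = rng.normal(scale=0.1, size=300)
    bead_pos = rng.normal(size=(5, 3))

    # Sample points on a shell around the molecule where the two
    # potentials should agree (bare Coulomb, constants absorbed).
    pts = rng.normal(size=(200, 3))
    pts *= 5.0 / np.linalg.norm(pts, axis=1, keepdims=True)

    def coulomb_matrix(points, sources):
        # M[i, j] = 1 / |r_i - s_j|
        d = np.linalg.norm(points[:, None, :] - sources[None, :, :], axis=2)
        return 1.0 / d

    phi_atomistic = coulomb_matrix(pts, atom_pos) @ atom_q
    A = coulomb_matrix(pts, bead_pos)

    # Bead charges that best reproduce the atomistic potential.
    bead_q, *_ = np.linalg.lstsq(A, phi_atomistic, rcond=None)
    print("fitted bead charges:", bead_q)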
Abstract:
OBJECTIVE: To determine the effect of glucosamine, chondroitin, or the two in combination on joint pain and on radiological progression of disease in osteoarthritis of the hip or knee. DESIGN: Network meta-analysis. Direct comparisons within trials were combined with indirect evidence from other trials by using a Bayesian model that allowed the synthesis of multiple time points. MAIN OUTCOME MEASURE: Pain intensity. Secondary outcome was change in minimal width of joint space. The minimal clinically important difference between preparations and placebo was prespecified at -0.9 cm on a 10 cm visual analogue scale. DATA SOURCES: Electronic databases and conference proceedings from inception to June 2009, expert contact, relevant websites. ELIGIBILITY CRITERIA FOR SELECTING STUDIES: Large-scale randomised controlled trials in more than 200 patients with osteoarthritis of the knee or hip that compared glucosamine, chondroitin, or their combination with placebo or head to head. RESULTS: 10 trials in 3803 patients were included. On a 10 cm visual analogue scale the overall difference in pain intensity compared with placebo was -0.4 cm (95% credible interval -0.7 to -0.1 cm) for glucosamine, -0.3 cm (-0.7 to 0.0 cm) for chondroitin, and -0.5 cm (-0.9 to 0.0 cm) for the combination. For none of the estimates did the 95% credible intervals cross the boundary of the minimal clinically important difference. Industry-independent trials showed smaller effects than commercially funded trials (P=0.02 for interaction). The differences in changes in minimal width of joint space were all minute, with 95% credible intervals overlapping zero. CONCLUSIONS: Compared with placebo, glucosamine, chondroitin, and their combination do not reduce joint pain or have an impact on narrowing of joint space. Health authorities and health insurers should not cover the costs of these preparations, and new prescriptions to patients who have not received treatment should be discouraged.
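The core idea of combining direct and indirect evidence can be shown with the simplest (Bucher-style) adjusted indirect comparison through a common comparator; the paper itself uses a richer Bayesian model over multiple time points, and the numbers below are hypothetical, not the trial data.

    import math

    # Hypothetical direct estimates versus placebo on a 10 cm VAS:
    # (effect, standard error).
    glucosamine_vs_placebo = (-0.4, 0.15)
    chondroitin_vs_placebo = (-0.3, 0.18)

    # Indirect comparison of glucosamine vs chondroitin through the
    # shared placebo arm: effects subtract, variances add.
    effect = glucosamine_vs_placebo[0] - chondroitin_vs_placebo[0]
    se = math.sqrt(glucosamine_vs_placebo[1] ** 2 + chondroitin_vs_placebo[1] ** 2)
    print(f"indirect effect: {effect:.2f} cm (SE {se:.2f})")
    print(f"95% CI: {effect - 1.96 * se:.2f} to {effect + 1.96 * se:.2f} cm")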
Abstract:
Far from being static transmission units, synapses are highly dynamical elements that change over multiple time scales depending on the history of the neural activity of both the pre- and postsynaptic neuron. Moreover, synaptic changes on different time scales interact: long-term plasticity (LTP) can modify the properties of short-term plasticity (STP) in the same synapse. Most existing theories of synaptic plasticity focus on only one of these time scales (either STP, LTP, or late-LTP), and the theoretical principles underlying their interactions are thus largely unknown. Here we develop a normative model of synaptic plasticity that combines both STP and LTP and predicts specific patterns for their interactions. Recently, it has been proposed that STP arranges for the local postsynaptic membrane potential at a synapse to behave as an optimal estimator of the presynaptic membrane potential based on the incoming spikes. Here we generalize this approach by considering an optimal estimator of a non-linear function of the membrane potential and the long-term synaptic efficacy, which itself may be subject to change on a slower time scale. We find that an increase in the long-term synaptic efficacy necessitates changes in the dynamics of STP. More precisely, for a realistic non-linear function to be estimated, our model predicts that after the induction of LTP, which causes the long-term synaptic efficacy to increase, a depressing synapse should become even more depressing. That is, in a protocol using trains of presynaptic stimuli, as the initial EPSP becomes stronger due to LTP, subsequent EPSPs should be weakened, and this weakening should be more pronounced after LTP. This form of redistribution of synaptic efficacies agrees well with electrophysiological data on synapses connecting layer 5 pyramidal neurons.
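The predicted redistribution can be made concrete with the standard Tsodyks-Markram model of a depressing synapse. This is a generic textbook model, not the normative estimator derived in the paper, and the parameter values below are hypothetical: raising the absolute efficacy A (as LTP does) boosts the first EPSP most, while a larger release fraction U makes the synapse more depressing, so later EPSPs in the train are suppressed more.

    import numpy as np

    def epsp_train(A, U=0.5, tau_rec=0.8, rate=20.0, n_spikes=8):
        """EPSP amplitudes for a regular presynaptic train under
        Tsodyks-Markram depression. A = absolute efficacy, U = release
        fraction, tau_rec = recovery time constant (s), rate in Hz."""
        dt = 1.0 / rate
        x = 1.0                    # fraction of available resources
        amps = []
        for _ in range(n_spikes):
            amps.append(A * U * x)                        # EPSP at this spike
            x -= U * x                                    # resources consumed
            x = 1.0 - (1.0 - x) * np.exp(-dt / tau_rec)   # recovery to next spike
        return np.array(amps)

    before = epsp_train(A=1.0, U=0.5)
    after = epsp_train(A=1.5, U=0.7)   # hypothetical post-LTP parameters
    print("paired-pulse ratio before LTP:", before[1] / before[0])
    print("paired-pulse ratio after LTP: ", after[1] / after[0])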
Abstract:
Post-Soviet countries are in the process of transformation from a totalitarian order to a democratic one, a transformation which is impossible without a profound shift in people's way of thinking. The group set themselves the task of determining the essence of this shift. Using a multidisciplinary approach, they looked at concrete ways of overcoming the totalitarian mentality and forming the mentality necessary for an open democratic society. They studied the contemporary conceptions of tolerance and critical thinking and looked for new foundations of criticism, especially in hermeneutics. They then sought to substantiate the complementary relation between tolerance and criticism in the democratic way of thinking and to prepare a syllabus for teaching on the subject in Ukrainian higher education. In a philosophical exploration of tolerance they began with religious tolerance as its first and most important form. Political and social interests often lay at the foundations of religious intolerance, and this implicitly comprised the transition to religious tolerance when conditions changed. Early polytheism was more or less indifferent to dogmatic deviations, but monotheism is intolerant of heresies. The damage wrought by the religious wars of the Reformation transformed tolerance into a value. They did not create religious tolerance but forced its recognition as a positive phenomenon. With the weakening of religious institutions in the modern era, the purely political nature of many conflicts became evident, and this stimulated the extrapolation of tolerance into secular life. Each historical era has certain acts and operations which may be interpreted as tolerant, and these can be classified as to whether or not they are based on the conscious following of the principle of tolerance. This criterion requires the separation of the phenomenon of tolerance from its concept and from tolerance as a value. Only the conjunction of a concept of tolerance with a recognition of its value can transform it into a principle dictating a norm of conscious behaviour. The analysis of the contemporary conception of tolerance focused on the diversity of the concept and concluded that the notions used cannot be combined in the framework of a single more or less simple classification, as the distinctions between them are stimulated by the complexity of the reality considered and the variety of its manifestations. Notions considered in relation to tolerance included pluralism, respect and the particular-universal. The rationale of tolerance was also investigated, and the group felt that any substantiation of the principle of tolerance must take into account human beings' desire for knowledge. Before respecting or being tolerant of another person different from myself, I should first know where the difference lies, so knowledge is a necessary condition of tolerance. The traditional division of truth into scientific (objective and unique) and religious, moral, political (subjective and so multiple) intensifies the problem of the relationship between truth and tolerance. Science was long seen as a field of "natural" intolerance whereas the validity of tolerance was accepted in other intellectual fields. As tolerance emerges when there is difference and opposition, it is essentially linked with rivalry, and there is a growing recognition today that unlimited rivalry is neither able to direct the process of development nor to act as creative matter.
Social and economic reality has led to rivalry being regulated by the state, and a natural requirement of this is to associate tolerance with a special "purified" form of rivalry, an acceptance of the activity of different subjects and a specification of the norms of their competition. Tolerance and rivalry should therefore be subordinate to a degree of discipline, and the group point out that discipline, including self-discipline, is a regulator of the balance between them. Two problematic aspects of tolerance were identified: why something traditionally supposed to have no positive content has become a human activity today, and whether tolerance has full-scale cultural significance. The resolution of these questions requires a revision of the phenomenon and conception of tolerance to clarify its immanent positive content. This involved an investigation of the contemporary concept of tolerance and of the epistemological foundations of a negative solution of tolerance in Greek thought. An original solution to the problem of the extrapolation of tolerance to scientific knowledge was proposed, based on the Duhem-Quine thesis and the conception of background knowledge. In this way tolerance as a principle of mutual relations between different scientific positions gains an essential epistemological rationale and so an important argument for its own universal status. The group then went on to consider the ontological foundations for a positive solution of this problem, beginning with the work of Poincaré and Reichenbach. The next aspect considered was the conceptual foundations of critical thinking, looking at the ideas of Karl Popper and St. Augustine and at the problem of the demarcation line between reasonable criticism and apologetic reasoning. Dogmatic and critical thinking in a political context were also considered, before an investigation of critical thinking's foundations. As logic is essential to critical thinking, the state of this discipline in Ukrainian and Russian higher education was assessed, together with the limits of formal-logical grounds for criticism, the role of informal logic as a basis for critical thinking today, dialectical logic as a foundation for critical thinking, and the universality of the contemporary demand for criticism. The search for new foundations of critical thinking covered deconstructivism and critical hermeneutics, including the problem of the author. The relationship between tolerance and criticism was traced from the ancient world, both eastern and Greek, through the transitional community of the Renaissance to the industrial community (Locke and Mill) and the evolution of this relationship today, when these are viewed not as moral virtues but as ordinary norms. Tolerance and criticism were discussed as complementary manifestations of human freedom. If the completeness of freedom were accepted, it would be impossible to avoid recognition of the natural and legal nature of these manifestations, and the group argue that critical tolerance is able to avoid dismissing such negative phenomena as the degradation of taste and manners, pornography, etc. On the basis of their work, the group drew up the syllabus of a course in "Logic with Elements of Critical Thinking" and of a special course on the "Problem of Tolerance".
Abstract:
The number of record-breaking events expected to occur in a strictly stationary time-series depends only on the number of values in the time-series, regardless of distribution. This holds whether the events are record-breaking highs or lows and whether we count from past to present or present to past. However, these symmetries are broken in distinct ways by trends in the mean and variance. We define indices that capture this information and use them to detect weak trends from multiple time-series. Here, we use these methods to answer the following questions: (1) Is there a variability trend among globally distributed surface temperature time-series? We find a significant decreasing variability over the past century for the Global Historical Climatology Network (GHCN). This corresponds to about a 10% change in the standard deviation of inter-annual monthly mean temperature distributions. (2) How are record-breaking high and low surface temperatures in the United States affected by time period? We investigate the United States Historical Climatology Network (USHCN) and find that the ratio of record-breaking highs to lows in 2006 increases as the time-series extend further into the past. When we consider the ratio as it evolves with respect to a fixed start year, we find it is strongly correlated with the ensemble mean. We also compare the ratios for USHCN and GHCN (minus USHCN stations). We find the ratios grow monotonically in the GHCN data set, but not in the USHCN data set. (3) Do we detect either mean or variance trends in annual precipitation within the United States? We find that the total annual and monthly precipitation in the United States (USHCN) has increased over the past century. Evidence for a trend in variance is inconclusive.
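The distribution-free baseline invoked above follows from symmetry alone: in an i.i.d. series the k-th value is a record with probability 1/k, so the expected number of records is the harmonic number H_n = 1 + 1/2 + ... + 1/n. A quick Monte Carlo check of this baseline, using synthetic data rather than the GHCN/USHCN series:

    import numpy as np

    rng = np.random.default_rng(1)
    n, trials = 100, 20000

    counts = []
    for _ in range(trials):
        x = rng.normal(size=n)                 # any continuous distribution works
        running_max = np.maximum.accumulate(x)
        counts.append(np.sum(x == running_max))  # record-breaking highs

    harmonic = sum(1.0 / k for k in range(1, n + 1))
    print(f"simulated mean records: {np.mean(counts):.3f}")
    print(f"harmonic number H_{n}:  {harmonic:.3f}")

Trends in the mean or variance break this symmetry, which is what the indices described in the abstract are designed to detect.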
Abstract:
OBJECTIVES: Spousal caregivers of Alzheimer's disease patients are at increased risk for cardiovascular disease, possibly via sympathetic response to stressors and subsequent catecholamine surge. Personal mastery (i.e., belief that one can manage life's obstacles) may decrease psychological and physiological response to stressors. This study examines the relationship between mastery and sympathetic arousal in elderly caregivers, as measured by norepinephrine (NE) reactivity to an acute psychological stressor. DESIGN: Cross-sectional. SETTING: Data were collected by a research nurse in each caregiver's home. PARTICIPANTS: Sixty-nine elderly spousal Alzheimer caregivers (mean age: 72.8 years) who were not taking beta-blocking medication. INTERVENTION: After assessment for mastery and objective caregiving stressors, caregivers underwent an experimental speech task designed to induce sympathetic arousal. MEASUREMENTS: Mastery was assessed using Pearlin's Personal Mastery scale and Alzheimer patient functioning was assessed using the Clinical Dementia Rating Scale, Problem Behaviors Scale, and Activities of Daily Living Scale. Plasma NE assays were conducted using pre- and postspeech blood draws. RESULTS: Multiple regression analyses revealed that mastery was significantly and negatively associated with NE reactivity (B = -9.86, t (61) = -2.03, p = 0.046) independent of factors theoretically and empirically linked to NE reactivity. CONCLUSIONS: Caregivers with higher mastery had less NE reactivity to the stressor task. Mastery may exert a protective influence that mitigates the physiological effects of acute stress, and may be an important target for psychosocial interventions in order to reduce sympathetic arousal and cardiovascular stress among dementia caregivers.
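A minimal sketch of the kind of multiple regression reported above, with synthetic data and hypothetical covariates standing in for the caregiver sample; the coefficient reported in the abstract (B = -9.86) comes from the actual study data, not from anything reproducible here.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 69

    # Hypothetical predictors: mastery score plus control covariates.
    mastery = rng.normal(20, 4, n)
    age = rng.normal(72.8, 6, n)
    care_stress = rng.normal(0, 1, n)
    ne_reactivity = 50 - 2.0 * mastery + 0.5 * age + 5 * care_stress \
        + rng.normal(0, 20, n)

    # Design matrix with intercept; ordinary least squares.
    X = np.column_stack([np.ones(n), mastery, age, care_stress])
    beta, *_ = np.linalg.lstsq(X, ne_reactivity, rcond=None)
    print("coefficients [intercept, mastery, age, stress]:", beta.round(2))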
Abstract:
In the event of a termination of the Gravity Recovery and Climate Experiment (GRACE) mission before the launch of GRACE Follow-On (due for launch in 2017), high-low satellite-to-satellite tracking (hl-SST) will be the only dedicated observing system with global coverage available to measure the time-variable gravity field (TVG) on a monthly or even shorter time scale. Until recently, hl-SST TVG observations were of poor quality and hardly improved on the performance of Satellite Laser Ranging observations. To date, they have been of only very limited usefulness to geophysical or environmental investigations. In this paper, we apply a thorough reprocessing strategy and a dedicated Kalman filter to Challenging Minisatellite Payload (CHAMP) data to demonstrate that it is possible to derive the very long-wavelength TVG features down to spatial scales of approximately 2000 km at the annual frequency and for multi-year trends. The results are validated against GRACE data and surface height changes from long-term GPS ground stations in Greenland. We find that the quality of the CHAMP solutions is sufficient to derive long-term trends and annual amplitudes of mass change over Greenland. We conclude that hl-SST is a viable source of information for TVG and can serve to some extent to bridge a possible gap between the end-of-life of GRACE and the availability of GRACE Follow-On.
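Extracting "long-term trends and annual amplitudes" from a monthly series is, at its simplest, a linear least-squares fit of a trend plus an annual harmonic. The sketch below uses a synthetic series, not CHAMP data, and omits the dedicated Kalman filtering the paper applies on top of this idea.

    import numpy as np

    rng = np.random.default_rng(3)
    t = np.arange(120) / 12.0                 # 10 years of monthly epochs

    # Synthetic mass-change series: trend + annual cycle + noise.
    y = -1.5 * t + 2.0 * np.sin(2 * np.pi * t + 0.4) + rng.normal(0, 0.8, t.size)

    # Design matrix: offset, trend, annual sine and cosine.
    X = np.column_stack([np.ones_like(t), t,
                         np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
    c, *_ = np.linalg.lstsq(X, y, rcond=None)

    trend = c[1]
    annual_amplitude = np.hypot(c[2], c[3])
    print(f"trend: {trend:.2f} per year, annual amplitude: {annual_amplitude:.2f}")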
Abstract:
Modeling of future water systems at the regional scale is a difficult task due to the complexity of current structures (multiple competing water uses, multiple actors, formal and informal rules), both temporally and spatially. Representing this complexity in the modeling process is a challenge that can be addressed by an interdisciplinary and holistic approach. The assessment of the water system of the Crans-Montana-Sierre area (Switzerland) and its evolution until 2050 was tackled by combining glaciological, hydrogeological, and hydrological measurements and modeling with the evaluation of water use through documentary, statistical, and interview-based analyses. Four visions of future regional development were co-produced with a group of stakeholders and were then used as a basis for estimating future water demand. The comparison of the available water resource and the water demand at a monthly time scale allowed us to conclude that, for all four scenarios, socioeconomic factors will affect the future water systems more than climatic factors. An analysis of the sustainability of the current and future water systems based on the four visions of regional development allowed us to identify the scenarios that will be more sustainable and that should be adopted by decision-makers. The results were then presented to the stakeholders through five key messages. The challenges of communicating the results in such a way with stakeholders are discussed at the end of the article.
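The monthly resource-demand comparison described above reduces to a simple balance; the sketch below uses illustrative monthly volumes, not the Crans-Montana-Sierre data.

    # Hypothetical monthly water resource and demand (million m^3).
    resource = [9, 8, 10, 14, 16, 12, 8, 6, 7, 8, 9, 9]
    demand   = [7, 7,  8,  9, 10, 11, 12, 11, 8, 7, 7, 7]

    months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
              "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]

    # Flag months in which demand exceeds the available resource.
    for m, r, d in zip(months, resource, demand):
        balance = r - d
        flag = "DEFICIT" if balance < 0 else "ok"
        print(f"{m}: balance {balance:+d} Mm^3 {flag}")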
Abstract:
Karst aquifers are known for their wide distribution of water transfer velocities. From this observation, a multiple geochemical tracer approach seems particularly well suited to provide a significant assessment of groundwater flows, but the choice of adapted tracers is essential. In this study, several common tracers in karst aquifers, such as physicochemical parameters, major ions, stable isotopes, and δ13C, as well as more specific dating tracers (14C, 3H, 3H–3He, CFC-12, SF6, 85Kr, and 39Ar), were used in a fractured karstic carbonate aquifer located in Burgundy (France). The information carried by each tracer and the best sampling strategy are compared on the basis of geochemical monitoring done during several recharge events and over longer time periods (months to years). This study's results demonstrate that at the seasonal and recharge-event time scales, the variability of concentrations is low for most tracers due to the broad spectrum of groundwater mixing. The tracers used traditionally for the study of karst aquifers, i.e., physicochemical parameters and major ions, efficiently describe hydrological processes such as direct and delayed recharge, but must be monitored at short time steps during recharge events to be fully exploited. From stable isotope, tritium, and Cl− contents, the proportion of fast direct recharge through the largest porosity was estimated using a binary mixing model. The use of tracers such as CFC-12, SF6, and 85Kr in karst aquifers provides additional information, notably an estimation of apparent age, but these require good preliminary knowledge of the karst system to interpret the results suitably. The CFC-12 and SF6 methods efficiently determine the apparent age of baseflow, but it is preferable to sample the groundwater during the recharge event. Furthermore, these methods rest on assumptions such as regional enrichment in atmospheric SF6, excess air, and flow models, among others. 85Kr and 39Ar concentrations can potentially provide a more direct estimation of groundwater residence time. Conversely, the 3H–3He method is ineffective for dating in this karst aquifer due to 3He degassing.
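The binary mixing model mentioned above is a two-endmember mass balance: a sample concentration is written as a weighted mean of an event-water and a baseflow endmember, and solving for the weight gives the fast-recharge fraction. A sketch with hypothetical tracer values, not the Burgundy data:

    def fast_recharge_fraction(c_sample, c_event, c_baseflow):
        """Two-endmember mixing: c_sample = f*c_event + (1 - f)*c_baseflow."""
        return (c_sample - c_baseflow) / (c_event - c_baseflow)

    # Hypothetical chloride concentrations (mg/L) during a recharge event.
    f = fast_recharge_fraction(c_sample=9.0, c_event=3.0, c_baseflow=12.0)
    print(f"fraction of fast direct recharge: {f:.2f}")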
Abstract:
BACKGROUND Knee osteoarthritis is a leading cause of chronic pain, disability, and decreased quality of life. Despite the long-standing use of intra-articular corticosteroids, there is an ongoing debate about their benefits and safety. This is an update of a Cochrane review first published in 2005. OBJECTIVES To determine the benefits and harms of intra-articular corticosteroids compared with sham or no intervention in people with knee osteoarthritis in terms of pain, physical function, quality of life, and safety. SEARCH METHODS We searched the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, and EMBASE (from inception to 3 February 2015), checked trial registers, conference proceedings, and reference lists, and contacted authors. SELECTION CRITERIA We included randomised or quasi-randomised controlled trials that compared intra-articular corticosteroids with sham injection or no treatment in people with knee osteoarthritis. We applied no language restrictions. DATA COLLECTION AND ANALYSIS We calculated standardised mean differences (SMDs) and 95% confidence intervals (CIs) for pain, function, quality of life, and joint space narrowing, and risk ratios (RRs) for safety outcomes. We combined trials using an inverse-variance random-effects meta-analysis. MAIN RESULTS We identified 27 trials (13 new studies) with 1767 participants in this update. We graded the quality of the evidence as 'low' for all outcomes because treatment effect estimates were inconsistent, with great variation across trials, pooled estimates were imprecise and did not rule out relevant or irrelevant clinical effects, and because most trials had a high or unclear risk of bias. Intra-articular corticosteroids appeared to be more beneficial in pain reduction than control interventions (SMD -0.40, 95% CI -0.58 to -0.22), which corresponds to a difference in pain scores of 1.0 cm on a 10-cm visual analogue scale between corticosteroids and sham injection and translates into a number needed to treat for an additional beneficial outcome (NNTB) of 8 (95% CI 6 to 13). An I² statistic of 68% indicated considerable between-trial heterogeneity. A visual inspection of the funnel plot suggested some asymmetry (asymmetry coefficient -1.21, 95% CI -3.58 to 1.17). When stratifying results according to length of follow-up, benefits were moderate at 1 to 2 weeks after end of treatment (SMD -0.48, 95% CI -0.70 to -0.27), small to moderate at 4 to 6 weeks (SMD -0.41, 95% CI -0.61 to -0.21), small at 13 weeks (SMD -0.22, 95% CI -0.44 to 0.00), and there was no evidence of an effect at 26 weeks (SMD -0.07, 95% CI -0.25 to 0.11). An I² statistic of ≥63% indicated a moderate to large degree of between-trial heterogeneity up to 13 weeks after end of treatment (P for heterogeneity ≤ 0.001), and an I² of 0% indicated low heterogeneity at 26 weeks (P=0.43). There was evidence of lower treatment effects in trials that randomised on average at least 50 participants per group (P=0.05) or at least 100 participants per group (P=0.013), in trials that used concomitant viscosupplementation (P=0.08), and in trials that used concomitant joint lavage (P≤0.001). Corticosteroids appeared to be more effective in function improvement than control interventions (SMD -0.33, 95% CI -0.56 to -0.09), which corresponds to a difference in function scores of -0.7 units on the standardised Western Ontario and McMaster Universities Arthritis Index (WOMAC) disability scale ranging from 0 to 10 and translates into an NNTB of 10 (95% CI 7 to 33).
An I² statistic of 69% indicated a moderate to large degree of between-trial heterogeneity. A visual inspection of the funnel plot suggested asymmetry (asymmetry coefficient -4.07, 95% CI -8.08 to -0.05). When stratifying results according to length of follow-up, benefits were small to moderate at 1 to 2 weeks after end of treatment (SMD -0.43, 95% CI -0.72 to -0.14), small to moderate at 4 to 6 weeks (SMD -0.36, 95% CI -0.63 to -0.09), and there was no evidence of an effect at 13 weeks (SMD -0.13, 95% CI -0.37 to 0.10) or at 26 weeks (SMD 0.06, 95% CI -0.16 to 0.28). An I² statistic of ≥62% indicated a moderate to large degree of between-trial heterogeneity up to 13 weeks after end of treatment (P for heterogeneity ≤ 0.004), and an I² of 0% indicated low heterogeneity at 26 weeks (P=0.52). We found evidence of lower treatment effects in trials that randomised on average at least 50 participants per group (P=0.023), in unpublished trials (P=0.023), in trials that used non-intervention controls (P=0.031), and in trials that used concomitant viscosupplementation (P=0.06). Participants on corticosteroids were 11% less likely to experience adverse events, but confidence intervals included the null effect (RR 0.89, 95% CI 0.64 to 1.23, I² = 0%). Participants on corticosteroids were 67% less likely to withdraw because of adverse events, but confidence intervals were wide and included the null effect (RR 0.33, 95% CI 0.05 to 2.07, I² = 0%). Participants on corticosteroids were 27% less likely to experience any serious adverse event, but confidence intervals were wide and included the null effect (RR 0.63, 95% CI 0.15 to 2.67, I² = 0%). We found no evidence of an effect of corticosteroids on quality of life compared to control (SMD -0.01, 95% CI -0.30 to 0.28, I² = 0%). There was also no evidence of an effect of corticosteroids on joint space narrowing compared to control interventions (SMD -0.02, 95% CI -0.49 to 0.46). AUTHORS' CONCLUSIONS Whether there are clinically important benefits of intra-articular corticosteroids after one to six weeks remains unclear in view of the overall quality of the evidence, considerable heterogeneity between trials, and evidence of small-study effects. A single trial included in this review described adequate measures to minimise biases and did not find any benefit of intra-articular corticosteroids. In this update of the systematic review and meta-analysis, we found that most of the identified trials comparing intra-articular corticosteroids with sham or non-intervention control were small and hampered by low methodological quality. An analysis of multiple time points suggested that effects decrease over time, and our analysis provided no evidence that an effect remains six months after a corticosteroid injection.
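The pooling step named in the methods (inverse-variance random-effects meta-analysis) is conventionally implemented with the DerSimonian-Laird estimator; a minimal sketch with hypothetical per-trial SMDs, not the 27 included trials:

    import numpy as np

    # Hypothetical per-trial standardised mean differences and variances.
    smd = np.array([-0.55, -0.30, -0.70, -0.10, -0.45])
    var = np.array([0.04, 0.03, 0.08, 0.05, 0.06])

    # Fixed-effect weights and Cochran's Q.
    w = 1.0 / var
    pooled_fe = np.sum(w * smd) / np.sum(w)
    q = np.sum(w * (smd - pooled_fe) ** 2)
    df = len(smd) - 1

    # DerSimonian-Laird between-trial variance tau^2 and the I^2 statistic.
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)
    i2 = max(0.0, (q - df) / q) * 100

    # Random-effects pooled estimate and confidence interval.
    w_re = 1.0 / (var + tau2)
    pooled = np.sum(w_re * smd) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    print(f"pooled SMD {pooled:.2f} "
          f"(95% CI {pooled - 1.96 * se:.2f} to {pooled + 1.96 * se:.2f}), "
          f"I^2 = {i2:.0f}%")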
Abstract:
Through dedicated measurements in the optical regime, we demonstrate that ptychography can be applied to reconstruct complex-valued object functions that vary with time from a sequence of spectral measurements. A probe pulse of approximately 1 ps duration, time-delayed in increments of 0.25 ps, is shown to recover dynamics on a time scale ten times faster, with an experimental limit of approximately 5 fs.
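A minimal sketch of how such a reconstruction can work, using a generic ePIE-style update adapted to the time domain; the authors' actual algorithm and data model may differ. Here the delayed probe pulses play the role of scan positions and the measured spectral magnitudes the role of diffraction patterns, and all signals are synthetic.

    import numpy as np

    n = 256
    t = np.arange(n)

    # Synthetic time-varying object (complex transmission) and probe pulse.
    obj_true = np.exp(1j * 0.8 * np.sin(2 * np.pi * t / 90.0))
    probe = np.exp(-((t - n / 2) ** 2) / (2 * 12.0 ** 2)).astype(complex)

    shifts = range(-96, 97, 8)   # probe delays in samples (cf. 0.25 ps steps)
    spectra = [np.abs(np.fft.fft(np.roll(probe, s) * obj_true)) for s in shifts]

    # ePIE-style object update from measured spectral magnitudes.
    obj = np.ones(n, dtype=complex)
    for _ in range(200):
        for s, amp in zip(shifts, spectra):
            p = np.roll(probe, s)
            psi = p * obj
            spec = np.fft.fft(psi)
            # Replace the modulus with the measurement, keep the phase.
            psi_new = np.fft.ifft(amp * np.exp(1j * np.angle(spec)))
            obj += np.conj(p) / np.max(np.abs(p) ** 2) * (psi_new - psi)

    err = np.mean(np.abs(np.abs(obj[64:192]) - np.abs(obj_true[64:192])))
    print(f"mean modulus error in well-probed region: {err:.3f}")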
Abstract:
The first complete cyclic sedimentary successions for the early Paleogene recovered by drilling multiple holes were retrieved during two ODP expeditions: Leg 198 (Shatsky Rise, NW Pacific Ocean) and Leg 208 (Walvis Ridge, SE Atlantic Ocean). These new records allow us to construct a comprehensive astronomically calibrated stratigraphic framework with unprecedented accuracy for both the Atlantic and the Pacific Oceans, covering the entire Paleocene epoch, based on the identification of the stable long-eccentricity cycle (405 kyr). High-resolution X-ray fluorescence (XRF) core scanner and non-destructive core logging data from Sites 1209 through 1211 (Leg 198) and Sites 1262 and 1267 (Leg 208) are the basis for this robust chronostratigraphy. Previously investigated marine (ODP Sites 1001 and 1051) and land-based (e.g., Zumaia) sections have been integrated as well. The high-fidelity chronology is the prerequisite for deciphering mechanisms related to prominent transient climatic events, as well as for completely new insights into greenhouse climate variability in the early Paleogene. We demonstrate that the Paleocene epoch covers 24 long-eccentricity cycles. We also show that no definite absolute age datums for the K/Pg boundary or the Paleocene-Eocene Thermal Maximum (PETM) can be provided as yet, because of remaining uncertainties in orbital solutions and radiometric dating. However, we provide two options for the tuning of the Paleocene, which are offset by only 405 kyr. Our orbitally calibrated integrated Leg 208 magnetostratigraphy is used to revise the Geomagnetic Polarity Time Scale (GPTS) for Chrons C29 to C25. We established a high-resolution calcareous nannofossil biostratigraphy for the South Atlantic which allows a much more detailed relative scaling of stages with biozones. The re-evaluation of the South Atlantic spreading-rate model reveals higher-frequency oscillations in spreading rates for magnetochrons C28r, C27n, and C26n.
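A quick arithmetic check on the cycle count, using only the 405 kyr period from the abstract and the widely used round figures for the Paleocene's boundaries (K/Pg near 66 Ma, PETM near 56 Ma):

    # 24 long-eccentricity cycles of 405 kyr span about 9.7 Myr, consistent
    # with the roughly 10 Myr duration of the Paleocene epoch.
    cycles, period_kyr = 24, 405
    print(f"{cycles} cycles x {period_kyr} kyr = {cycles * period_kyr / 1000:.2f} Myr")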