54 results for Jernström Offset


Relevance:

10.00%

Publisher:

Abstract:

In addition to plasma metabolites and hormones participating as humoral signals in the control of feed intake, oxidative metabolic processes in peripheral organs also generate signals to terminate feeding. Although the degree of oxidation over longer periods is relatively constant, recent work suggests that the periprandial pattern of fuel oxidation is involved in regulating feeding behavior in the bovine. However, the association between periprandial oxidative metabolism and feed intake of dairy cows has not yet been studied. Therefore, the aim of this study was to elucidate possible associations existing between single feed intake events and whole-body net fat and net carbohydrate oxidation as well as their relation to plasma metabolite concentrations. To this end, 4 late-lactating cows equipped with jugular catheters were kept in respiratory chambers with continuous and simultaneous recording of gas exchange and feed intake. Animals were fed ad libitum (AL) for 24 h and then feed restricted (RE) to 50% of the previous AL intake for a further 24 h. Blood samples were collected hourly to analyze β-hydroxybutyrate (BHBA), glucose, nonesterified fatty acids (NEFA), insulin, and acylated ghrelin concentrations. Cross-correlation analysis revealed an offset ranging between 30 and 42 min between the maximum of a feed intake event and the lowest level of postprandial net fat oxidation (FOX(net)) and the maximum level of postprandial net carbohydrate oxidation (COX(net)), respectively. During the AL period, FOX(net) did not increase above -0.2 g/min, whereas COX(net) did not decrease below 6 g/min before the start of the next feed intake event. A strong inverse cross-correlation was obtained between COX(net) and plasma glucose concentration. Direct cross-correlations were observed between COX(net) and insulin, between heat production and BHBA, between insulin and glucose, and between BHBA and ghrelin. We found no cross-correlation between FOX(net) and NEFA.
During RE, FOX(net) increased with an exponential slope, exceeded the threshold of -0.2 g/min as indicated by increasing plasma NEFA concentrations, and approached a maximum rate of 0.1 g/min, whereas COX(net) decayed in an exponential manner, approaching a minimal COX(net) rate of about 2.5 g/min in all cows. Our novel findings suggest that, in late-lactating cows, postprandial increases in metabolic oxidative processes seem to signal suppression of feed intake, whereas preprandially an accelerated FOX(net) rate and a decelerated COX(net) rate initiate feed intake.
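The periprandial lags reported above come from cross-correlating feed intake with gas-exchange traces. As a minimal sketch (synthetic signals, not the study's data), the lag of maximum cross-correlation between two evenly sampled series can be found like this:

```python
import numpy as np

# Illustrative only: synthetic stand-in signals, not data from the study.
def lag_of_max_xcorr(x, y):
    """Return the lag (in samples) of y relative to x with maximal correlation."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    corr = np.correlate(y, x, mode="full")   # covers lags -(n-1) .. n-1
    lags = np.arange(-x.size + 1, x.size)
    return lags[np.argmax(corr)]

# Build y as x delayed by 6 samples; with 6-min sampling bins this would
# correspond to a 36-min offset, the order of magnitude reported above.
rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 8 * np.pi, 400)) + 0.1 * rng.standard_normal(400)
y = np.roll(x, 6)
print(lag_of_max_xcorr(x, y))   # -> 6
```

The sign convention of `np.correlate(y, x)` makes a positive lag mean "y lags x", which is the direction of interest here (oxidation responding to intake).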

Relevance:

10.00%

Publisher:

Abstract:

CONTEXT: E-learning resources, such as virtual patients (VPs), can be more effective when they are integrated in the curriculum. To gain insights that can inform guidelines for the curricular integration of VPs, we explored students' perceptions of scenarios with integrated and non-integrated VPs aimed at promoting clinical reasoning skills. METHODS: During their paediatric clerkship, 116 fifth-year medical students were given at least ten VPs embedded in eight integrated scenarios and as non-integrated add-ons. The scenarios differed in the sequencing and alignment of VPs and related educational activities, tutor involvement, number of VPs, relevance to assessment and involvement of real patients. We sought students' perceptions of the VP scenarios in focus group interviews with eight groups of 4-7 randomly selected students (n = 39). The interviews were recorded, transcribed and analysed qualitatively. RESULTS: The analysis resulted in six themes reflecting students' perceptions of important features for effective curricular integration of VPs: (i) continuous and stable online access; (ii) increasing complexity, adapted to students' knowledge; (iii) VP-related workload offset by elimination of other activities; (iv) optimal sequencing (e.g. lecture, one or two VPs, tutor-led small group discussion, real patient); (v) optimal alignment of VPs and educational activities; and (vi) inclusion of VP topics in assessment. CONCLUSIONS: The themes appear to offer starting points for the development of a framework to guide the curricular integration of VPs. Their impact needs to be confirmed by studies using quantitative controlled designs.

Relevance:

10.00%

Publisher:

Abstract:

The accuracy of Global Positioning System (GPS) time series is degraded by the presence of offsets. To assess the effectiveness of methods that detect and remove these offsets, we designed and managed the Detection of Offsets in GPS Experiment. We simulated time series that mimicked realistic GPS data consisting of a velocity component, offsets, and white and flicker noise (1/f spectrum noise) combined in an additive model. The data set was made available to the GPS analysis community without revealing the offsets, and several groups conducted blind tests with a range of detection approaches. The results show that, at present, manual methods (where offsets are hand-picked) almost always give better results than automated or semi-automated methods (two automated methods give velocity biases quite similar to those of the best manual solutions). For instance, the 5th-95th percentile range in velocity bias for automated approaches is equal to 4.2 mm/yr (most commonly ±0.4 mm/yr from the truth), whereas it is equal to 1.8 mm/yr for the manual solutions (most commonly 0.2 mm/yr from the truth). The magnitude of offsets detectable by manual solutions is smaller than for automated solutions, with the smallest detectable offset for the best manual and automatic solutions equal to 5 mm and 8 mm, respectively. Assuming the simulated time series noise levels are representative of real GPS time series, geophysical interpretation of individual site velocities lower than 0.2-0.4 mm/yr is therefore not robust, and a limit nearer 1 mm/yr would be a more conservative choice. Further work to improve offset detection in GPS coordinate time series is required before we can routinely interpret sub-mm/yr velocities for single GPS stations.
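As a rough illustration of what an automated detector in such a blind test must do, the sketch below fits a "trend + step at epoch k" model at every candidate epoch of a synthetic position series and keeps the best-fitting epoch. It is a deliberately simplified toy (white noise only, a single offset), not any participating group's method; real series with flicker noise are exactly where this naive scan struggles.

```python
import numpy as np

# Synthetic daily position series: linear velocity, white noise, one step.
rng = np.random.default_rng(1)
t = np.arange(1000) / 365.25                      # time in years
true_velocity, true_offset, true_epoch = 3.0, 8.0, 600
y = true_velocity * t + 2.0 * rng.standard_normal(t.size)   # mm
y[true_epoch:] += true_offset                      # 8 mm step at epoch 600

def best_step_epoch(t, y):
    """Scan candidate epochs; return the one minimising the residual sum of squares."""
    best = (np.inf, None)
    for k in range(10, t.size - 10):
        step = (np.arange(t.size) >= k).astype(float)
        A = np.column_stack([np.ones_like(t), t, step])
        resid = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
        rss = resid @ resid
        if rss < best[0]:
            best = (rss, k)
    return best[1]

print(best_step_epoch(t, y))   # close to the true epoch, 600
```

With an 8 mm step against 2 mm white noise the scan recovers the epoch to within a few samples; flicker noise and multiple nearby offsets break this simple picture.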

Relevance:

10.00%

Publisher:

Abstract:

Atrial fibrillation (AF) is associated with an increased risk of thromboembolism, and is the most prevalent risk factor for cardioembolic stroke. Vitamin K antagonists (VKAs) have been the standard of care for stroke prevention in patients with AF since the early 1990s. They are very effective for the prevention of cardioembolic stroke, but are limited by factors such as drug-drug interactions, food interactions, slow onset and offset of action, haemorrhage, and the need for routine anticoagulation monitoring to maintain a therapeutic international normalised ratio (INR). Multiple new oral anticoagulants have been developed as potential replacements for VKAs for stroke prevention in AF. Most are small synthetic molecules that target thrombin (e.g. dabigatran etexilate) or factor Xa (e.g. rivaroxaban, apixaban, edoxaban, betrixaban, YM150). These drugs have predictable pharmacokinetics that allow fixed dosing without routine laboratory monitoring. Dabigatran etexilate, the first of these new oral anticoagulants to be approved by the United States Food and Drug Administration and the European Medicines Agency for stroke prevention in patients with non-valvular AF, represents an effective and safe alternative to VKAs. Under the auspices of the Regional Anticoagulation Working Group, a multidisciplinary group of experts in thrombosis and haemostasis from Central and Eastern Europe, an expert panel with expertise in AF convened to discuss practical, clinically important issues related to the long-term use of dabigatran for stroke prevention in non-valvular AF. The practical information reviewed in this article will help clinicians make appropriate use of this new therapeutic option in daily clinical practice.

Relevance:

10.00%

Publisher:

Abstract:

Correct estimation of the firn lock-in depth is essential for correctly linking gas and ice chronologies in ice core studies. Here, two approaches to constrain the firn depth evolution in Antarctica over the last deglaciation are presented: outputs of a firn densification model, and measurements of δ15N of N2 in air trapped in ice cores, assuming that δ15N is only affected by gravitational fractionation in the firn column. Since the firn densification process is largely governed by surface temperature and accumulation rate, we have investigated four ice cores drilled in coastal (Berkner Island, BI, and James Ross Island, JRI) and semi-coastal (TALDICE and EPICA Dronning Maud Land, EDML) Antarctic regions. Combined with available ice core air-δ15N measurements from the EPICA Dome C (EDC) site, the studied regions encompass a large range of surface accumulation rates and temperature conditions. Our δ15N profiles reveal a heterogeneous response of the firn structure to glacial–interglacial climatic changes. While firn densification simulations correctly predict TALDICE δ15N variations, they systematically fail to capture the large millennial-scale δ15N variations measured at BI and the δ15N glacial levels measured at JRI and EDML – a mismatch previously reported for central East Antarctic ice cores. New constraints on the EDML gas–ice depth offset during the Laschamp event (~41 ka) and the last deglaciation do not favour the hypothesis of a large convective zone within the firn as the explanation of the glacial firn model–δ15N data mismatch for this site. While we could not conduct an in-depth study of the influence of impurities in snow on firnification from the existing datasets, our detailed comparison between the δ15N profiles and firn model simulations under different temperature and accumulation rate scenarios suggests that the role of accumulation rate may have been underestimated in the current description of firnification models.
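The gravitational-fractionation assumption above has a simple closed form: in a stagnant firn column, δ15N ≈ (exp(Δm·g·z/(R·T)) − 1)·1000 ‰, with Δm = 1 g/mol for the 15N/14N mass difference, so a measured δ15N can be inverted for the diffusive column height z. A minimal sketch with illustrative numbers (not data from the cores discussed above):

```python
import math

# Barometric (gravitational) enrichment of d15N in a stagnant firn column.
# Values below are illustrative, not measurements from BI, JRI, TALDICE or EDML.
R = 8.314      # gas constant, J/(mol K)
G = 9.81       # gravity, m/s^2
DM = 1.0e-3    # 15N-14N mass difference, kg/mol

def d15n_grav(z_m, temp_k):
    """Gravitational d15N (permil) at the bottom of a diffusive column of height z_m."""
    return (math.exp(DM * G * z_m / (R * temp_k)) - 1.0) * 1000.0

def diffusive_column_height(d15n_permil, temp_k):
    """Invert the barometric relation for the diffusive column height (m)."""
    return R * temp_k / (DM * G) * math.log(d15n_permil / 1000.0 + 1.0)

z = diffusive_column_height(0.45, 230.0)     # ~0.45 permil at 230 K
print(round(z, 1), round(d15n_grav(z, 230.0), 3))
```

At 230 K, 0.45 ‰ corresponds to a diffusive column of roughly 88 m, which is why a firn-model/δ15N mismatch translates directly into a mismatch in lock-in depth.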

Relevance:

10.00%

Publisher:

Abstract:

For atmospheric CO2 reconstructions using ice cores, the technique used to release the trapped air from the ice samples is essential for the precision and accuracy of the measurements. We present here a new dry extraction technique in combination with a new gas analytical system that together show significant improvements with respect to current systems. Ice samples (3–15 g) are pulverised using a novel centrifugal ice microtome (CIM) by shaving the ice in a cooled vacuum chamber (−27 °C) in which no friction occurs thanks to the use of magnetic bearings. Neither the shaving principle of the CIM nor the use of magnetic bearings has previously been applied in this field. Shaving the ice samples produces finer ice powder and releases a minimum of 90% of the trapped air, compared with 50%–70% when needle crushing is employed. In addition, the friction-free motion, together with an optimized design that reduces contamination of the inner surfaces of the device, results in a reduced system offset of about 2.0 ppmv compared to 4.9 ppmv. The gas analytical part shows a precision twice as high as that of the corresponding part of our previous system, and all processes except the loading and cleaning of the CIM now run automatically. Compared to our previous system, the complete system shows a threefold better measurement reproducibility of about 1.1 ppmv (1σ), which is similar to the best reproducibility of other systems applied in this field. With this high reproducibility, replicate measurements are no longer required for most future measurement campaigns, resulting in a possible output of 12–20 measurements per day compared to a maximum of 6 with other systems.

Relevance:

10.00%

Publisher:

Abstract:

Time-based localization techniques such as multilateration are favoured for positioning with wide-band signals. Applying the same techniques to narrow-band signals such as GSM is far from trivial: the process is challenged by the need for both synchronization accuracy and timestamp resolution in the nanosecond range. We propose approaches to deal with both challenges. On the one hand, we introduce a method to eliminate the negative effect of synchronization offset on time measurements. On the other hand, we obtain timestamps with nanosecond accuracy by using timing information from the signal processing chain. In a set of experiments ranging from suburban to indoor environments, we show that our proposed approaches improve the localization accuracy of TDOA approaches by several factors. We even demonstrate errors as small as 10 meters in outdoor settings with narrow-band signals.
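The TDOA multilateration underlying this work can be sketched in a few lines: given synchronised receivers and time differences of arrival relative to a reference receiver, the emitter position is recovered by iterative least squares. The geometry, noise-free measurements, and Gauss-Newton solver below are illustrative assumptions, not the paper's actual pipeline:

```python
import numpy as np

C = 3.0e8                                   # propagation speed, m/s
# Illustrative 2D geometry: four receivers and an emitter (metres).
rx = np.array([[0.0, 0.0], [800.0, 0.0], [0.0, 900.0], [700.0, 750.0]])
emitter = np.array([310.0, 420.0])

d = np.linalg.norm(rx - emitter, axis=1)
tdoa = (d[1:] - d[0]) / C                   # TDOAs w.r.t. receiver 0 (noise-free)

def locate(rx, tdoa, guess, iters=20):
    """Gauss-Newton solve for the emitter position from TDOA measurements."""
    p = np.asarray(guess, dtype=float)
    meas = np.asarray(tdoa) * C             # range differences, metres
    for _ in range(iters):
        r = np.linalg.norm(rx - p, axis=1)
        pred = r[1:] - r[0]
        u = (p - rx) / r[:, None]           # unit vectors receiver -> emitter
        J = u[1:] - u[0]                    # Jacobian of the range differences
        p = p + np.linalg.lstsq(J, meas - pred, rcond=None)[0]
    return p

print(locate(rx, tdoa, guess=[400.0, 400.0]))   # converges near (310, 420)
```

With nanosecond-level timing errors, 1 ns corresponds to about 0.3 m of range difference, which is why the paper's two contributions (synchronization-offset removal and nanosecond timestamps) matter so much for narrow-band TDOA.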

Relevance:

10.00%

Publisher:

Abstract:

The relationship between trade and culture can be singled out and deservedly labelled as unique in the discussion of 'trade and ...' issues. The reasons for this exceptional quality lie in the intensity of the relationship, which is indeed most often framed as 'trade versus culture' and has been a significant stumbling block, especially where audiovisual services are concerned, in the Uruguay Round and in the subsequent developments. The second specificity of the relationship is that the international community has organised its efforts in a rather effective manner to offset the lack of satisfying solutions within the framework of the WTO. The legally binding UNESCO Convention on the Protection and Promotion of the Diversity of Cultural Expressions is a clear sign of the potency of the international endeavour, on the one hand, and of the (almost desperate) desire to contest the existing WTO norms in the field of trade and culture, on the other. A third distinctive characteristic of the pair 'trade and culture', which is rarely mentioned and blissfully ignored in any Geneva or Paris talks, is that while the pro-trade and pro-culture opponents have been digging deeper into their respective trenches, the environment in which trade and cultural issues are to be regulated has radically changed. The emergence and spread of digital technologies have profoundly modified the conditions for cultural content creation, distribution and access, and rendered some of the associated market failures obsolete, thus mitigating to a substantial degree the 'clash' nature of trade and culture. Against this backdrop, the present paper analyses in a finer-grained manner the move from 'trade and culture' towards 'trade versus culture'.
It argues that both the domain of trade and that of culture have suffered from the aspirations to draw clearer lines between the WTO and other trade-related issues, charging the conflict to an extent that leaves few opportunities for practical solutions, which in an advanced digital setting would have been feasible.

Relevance:

10.00%

Publisher:

Abstract:

Gaining economic benefits from substantially lower labor costs has been reported as a major reason for offshoring labor-intensive information systems services to low-wage countries. However, if wage differences are so high, why is there such a high level of variation in the economic success of offshored IS projects? This study argues that offshore outsourcing involves a number of extra costs for the client organization that account for the economic failure of offshore projects. The objective is to disaggregate these extra costs into their constituent parts and to explain why they differ between offshored software projects. The focus is on software development and maintenance projects that are offshored to Indian vendors. A theoretical framework is developed a priori based on transaction cost economics (TCE) and the knowledge-based view of the firm, complemented by factors that acknowledge the specific offshore context. The framework is empirically explored using a multiple case study design including six offshored software projects in a large German financial service institution. The results of our analysis indicate that the client incurs post-contractual extra costs for four types of activities: (1) requirements specification and design, (2) knowledge transfer, (3) control, and (4) coordination. In projects that require a high level of client-specific knowledge about idiosyncratic business processes and software systems, these extra costs were found to be substantially higher than in projects where more general knowledge was needed. Notably, these costs most often arose independently of the threat of opportunistic behavior, challenging the predominant TCE logic of market failure.
Rather, the client extra costs were particularly high in client-specific projects because the effort of managing the consequences of the knowledge asymmetries between client and vendor was particularly high in these projects. Prior experience of the vendor with related client projects was found to reduce the level of extra costs but could not fully offset the increase in extra costs in highly client-specific projects. Moreover, cultural and geographic distance between client and vendor, as well as personnel turnover, were found to increase client extra costs. Slight evidence was found, however, that the cost-increasing impact of these factors was amplified in projects with a high level of required client-specific knowledge (moderator effect).

Relevance:

10.00%

Publisher:

Abstract:

Unlike previously explored relationships between the properties of hot Jovian atmospheres, the geometric albedo and the incident stellar flux do not exhibit a clear correlation, as revealed by our re-analysis of Q0-Q14 Kepler data. If the albedo is primarily associated with the presence of clouds in these irradiated atmospheres, a holistic modeling approach needs to relate the following properties: the strength of stellar irradiation (and hence the strength and depth of atmospheric circulation), the geometric albedo (which controls both the fraction of starlight absorbed and the pressure level at which it is predominantly absorbed), and the properties of the embedded cloud particles (which determine the albedo). The anticipated diversity in cloud properties renders any correlation between the geometric albedo and the stellar flux weak and characterized by considerable scatter. In the limit of vertically uniform populations of scatterers and absorbers, we use an analytical model and scaling relations to relate the temperature-pressure profile of an irradiated atmosphere and the photon deposition layer and to estimate whether a cloud particle will be lofted by atmospheric circulation. We derive an analytical formula for computing the albedo spectrum in terms of the cloud properties, which we compare to the measured albedo spectrum of HD 189733b by Evans et al. Furthermore, we show that whether an optical phase curve is flat or sinusoidal depends on whether the particles are small or large as defined by the Knudsen number. This may be an explanation for why Kepler-7b exhibits evidence for the longitudinal variation in abundance of condensates, while Kepler-12b shows no evidence for the presence of condensates despite the incident stellar flux being similar for both exoplanets. We include an "observer's cookbook" for deciphering various scenarios associated with the optical phase curve, the peak offset of the infrared phase curve, and the geometric albedo.

Relevance:

10.00%

Publisher:

Abstract:

Hot Jupiters, due to the proximity to their parent stars, are subjected to a strong irradiating flux that governs their radiative and dynamical properties. We compute a suite of three-dimensional circulation models with dual-band radiative transfer, exploring a relevant range of irradiation temperatures, both with and without temperature inversions. We find that, for irradiation temperatures T_irr ≲ 2000 K, heat redistribution is very efficient, producing comparable dayside and nightside fluxes. For T_irr ≈ 2200–2400 K, the redistribution starts to break down, resulting in a high day-night flux contrast. Our simulations indicate that the efficiency of redistribution is primarily governed by the ratio of advective to radiative timescales. Models with temperature inversions display a higher day-night contrast due to the deposition of starlight at higher altitudes, but we find this opacity-driven effect to be secondary compared to the effects of irradiation. The hotspot offset from the substellar point is large when insolation is weak and redistribution is efficient, and decreases as redistribution breaks down. The atmospheric flow can be potentially subjected to the Kelvin-Helmholtz instability (as indicated by the Richardson number) only in the uppermost layers, with a depth that penetrates down to pressures of a few millibars at most. Shocks penetrate deeper, down to several bars in the hottest model. Ohmic dissipation generally occurs down to deeper levels than shock dissipation (to tens of bars), but the penetration depth varies with the atmospheric opacity. The total dissipated Ohmic power increases steeply with the strength of the irradiating flux and the dissipation depth recedes into the atmosphere, favoring radius inflation in the most irradiated objects. A survey of the existing data, as well as the inferences made from them, reveals that our results are broadly consistent with the observational trends.

Relevance:

10.00%

Publisher:

Abstract:

Offset printing is a common method to produce large amounts of printed matter. We consider a real-world offset printing process that is used to imprint customer-specific designs on napkin pouches. The printing technology used yields a number of specific constraints. The planning problem consists of allocating designs to printing-plate slots such that the given customer demand for each design is fulfilled, all technological and organizational constraints are met, and the total overproduction and setup costs are minimized. We formulate this planning problem as a mixed-binary linear program, and we develop a multi-pass matching-based savings heuristic. We report computational results for a set of problem instances devised from real-world data.
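The core trade-off in such slot-allocation problems can be seen in a toy single-plate instance: each design gets at least one slot, every impression of the plate produces one copy per slot, so the run length is driven by the worst-covered design and overproduction is everything printed beyond demand. This brute-force sketch is an illustrative assumption, far simpler than the paper's mixed-binary model and savings heuristic:

```python
from itertools import product
from math import ceil

# Toy instance: one plate with a fixed number of slots, one run.
# Design i occupies n_i >= 1 slots; each impression yields n_i copies of it.
# Run length R = max_i ceil(d_i / n_i); overproduction = sum_i (n_i * R - d_i).
def best_allocation(demands, slots):
    """Brute-force the slot allocation minimising overproduction."""
    best = None
    for alloc in product(range(1, slots + 1), repeat=len(demands)):
        if sum(alloc) != slots:
            continue
        run = max(ceil(d / n) for d, n in zip(demands, alloc))
        over = sum(n * run - d for n, d in zip(alloc, demands))
        if best is None or over < best[0]:
            best = (over, alloc, run)
    return best

over, alloc, run = best_allocation([900, 450, 300], slots=8)
print(alloc, run, over)   # -> (4, 2, 2) 225 150
```

Even this tiny case shows why slot counts should roughly track demand ratios; the real problem adds multiple plates, setup costs, and the technological constraints mentioned above, which is where the MILP and heuristic come in.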

Relevance:

10.00%

Publisher:

Abstract:

The aim of this descriptive analysis was to examine sleep timing, circadian phase, and phase angle of entrainment across adolescence in a longitudinal study design. Ninety-four adolescents participated; 38 (21 boys) were 9-10 years old ("younger cohort") and 56 (30 boys) were 15-16 years old ("older cohort") at the baseline assessment. Participants completed a baseline and then follow-up assessments approximately every six months for 2.5 years. At each assessment, participants wore a wrist actigraph for at least one week at home to measure self-selected sleep timing before salivary dim light melatonin onset (DLMO) phase - a marker of the circadian timing system - was measured in the laboratory. Weekday and weekend sleep onset and offset and weekend-weekday differences were derived from actigraphy. Phase angles were the time durations from DLMO to weekday sleep onset and offset times. Each cohort showed later sleep onset (weekend and weekday), later weekend sleep offset, and later DLMO with age. Weekday sleep offset shifted earlier with age in the younger cohort and later in the older cohort after age 17. Weekend-weekday sleep offset differences increased with age in the younger cohort and decreased in the older cohort after age 17. The DLMO to sleep offset phase angle narrowed with age in the younger cohort and broadened in the older cohort. The older cohort had a wider sleep onset phase angle compared to the younger cohort; however, an age-related phase angle increase was seen in the younger cohort only. Individual differences were seen in these developmental trajectories. This descriptive study indicated that circadian phase and self-selected sleep timing delayed across adolescence, though school-day sleep offset advanced until participants were no longer in high school, whereupon offset was later. Phase angle changes are interpreted as developmental changes in sleep regulation interacting with psychosocial factors (e.g., bedtime autonomy).
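The phase angle of entrainment used above is just the interval from DLMO (an evening event) to a later sleep event, which requires care when the event falls after midnight. A minimal sketch with made-up clock times (not the study's data):

```python
from datetime import datetime, timedelta

# Phase angle of entrainment: hours from DLMO to a later sleep event.
# Both times are decimal clock hours; events after midnight are handled
# by rolling the event to the next day. Times below are illustrative.
def phase_angle_hours(dlmo_clock, event_clock):
    """Hours from an evening DLMO to the next sleep onset/offset event."""
    day = datetime(2024, 1, 1)
    dlmo = day + timedelta(hours=dlmo_clock)
    event = day + timedelta(hours=event_clock)
    if event < dlmo:                      # event occurs after midnight
        event += timedelta(days=1)
    return (event - dlmo).total_seconds() / 3600.0

print(phase_angle_hours(20.75, 23.0))   # DLMO 20:45, sleep onset 23:00 -> 2.25
print(phase_angle_hours(20.75, 7.0))    # DLMO 20:45, sleep offset 07:00 -> 10.25
```

A narrowing DLMO-to-offset angle, as seen in the younger cohort, simply means the second number shrinks over successive assessments.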

Relevance:

10.00%

Publisher:

Abstract:

This study aims to evaluate the potential impacts of ocean acidification on North Atlantic deep-sea ecosystems in response to IPCC AR5 Representative Concentration Pathways (RCPs). Deep-sea biota are likely highly vulnerable to changes in seawater chemistry and sensitive to moderate excursions in pH. Here we show, from seven fully coupled Earth system models, that for three out of four RCPs over 17% of the seafloor area below 500 m depth in the North Atlantic sector will experience pH reductions exceeding −0.2 units by 2100. Increased stratification in response to climate change partially alleviates the impact of ocean acidification on deep benthic environments. We report major pH reductions over the deep North Atlantic seafloor (depth >500 m) and at important deep-sea features, such as seamounts and canyons. By 2100, and under the high-CO2 scenario RCP8.5, pH reductions exceeding −0.2 (−0.3) units are projected in close to 23% (~15%) of North Atlantic deep-sea canyons and ~8% (3%) of seamounts, including seamounts proposed as sites of marine protected areas. The spatial pattern of impacts reflects the depth of the pH perturbation and does not scale linearly with atmospheric CO2 concentration. Impacts may cause negative changes equal in magnitude to, or exceeding, the current target of preserving 10% of marine biomes set by the Convention on Biological Diversity, implying that ocean acidification may offset benefits from conservation and management strategies that rely on the regulation of resource exploitation.

Relevance:

10.00%

Publisher:

Abstract:

The current treatment of painful hip dysplasia in the mature skeleton is based on acetabular reorientation. Reorientation procedures attempt to optimize the anatomic position of the hyaline cartilage of the femoral head and acetabulum with regard to mechanical loading. Because the Bernese periacetabular osteotomy is a versatile technique for acetabular reorientation, it is helpful to understand the approach and be familiar with the criteria for an optimal surgical correction. The femoral side bears stigmata of hip dysplasia that may require surgical correction. Improvement of the head-neck offset to avoid femoroacetabular impingement has become routine in many hips treated with periacetabular osteotomy. In addition, intertrochanteric osteotomies can help improve joint congruency and normalize the femoral neck orientation. Other new surgical techniques allow trimming or reducing a severely deformed head, performing a relative neck lengthening, and trimming or distalizing the greater trochanter. An increasing number of studies have reported good long-term results after acetabular reorientation procedures, with expected joint preservation rates ranging from 80% to 90% at the 10-year follow-up and 60% to 70% at the 20-year follow-up. An ideal candidate is younger than 30 years, with no preoperative signs of osteoarthritis. Predicted joint preservation in these patients is approximately 90% at the 20-year follow-up. Recent evidence indicates that additional correction of an aspheric head may further improve results.