Abstract:
For atmospheric CO2 reconstructions using ice cores, the technique used to release the trapped air from the ice samples is essential for the precision and accuracy of the measurements. We present here a new dry extraction technique combined with a new gas analytical system that together show significant improvements over current systems. Ice samples (3–15 g) are pulverised using a novel centrifugal ice microtome (CIM) that shaves the ice in a cooled vacuum chamber (−27 °C) in which no friction occurs, thanks to the use of magnetic bearings. Neither the shaving principle of the CIM nor the use of magnetic bearings has previously been applied in this field. Shaving the ice samples produces finer ice powder and releases at least 90% of the trapped air, compared to 50%–70% when needle crushing is employed. In addition, the friction-free motion, together with a design optimized to reduce contamination of the inner surfaces of the device, results in a reduced system offset of about 2.0 ppmv compared to 4.9 ppmv. The gas analytical part is twice as precise as the corresponding part of our previous system, and all processes except the loading and cleaning of the CIM now run automatically. Compared to our previous system, the complete system shows a three times better measurement reproducibility of about 1.1 ppmv (1σ), which is similar to the best reproducibility of other systems applied in this field. With this high reproducibility, replicate measurements are no longer required for most future measurement campaigns, resulting in a possible output of 12–20 measurements per day compared to a maximum of 6 with other systems.
Abstract:
Ciliary locomotion in the nudibranch mollusk Hermissenda is modulated by the visual and graviceptive systems. Components of the neural network mediating ciliary locomotion have been identified, including aggregates of polysensory interneurons that receive monosynaptic input from identified photoreceptors and efferent neurons that activate cilia. Illumination produces an inhibition of type I(i) (off-cell) spike activity, excitation of type I(e) (on-cell) spike activity, decreased spike activity in type III(i) inhibitory interneurons, and increased spike activity of ciliary efferent neurons. Here we show that pairs of type I(i) interneurons and pairs of type I(e) interneurons are electrically coupled. Neither electrical coupling nor synaptic connections were observed between I(e) and I(i) interneurons. Coupling is effective in synchronizing dark-adapted spontaneous firing between pairs of I(e) and pairs of I(i) interneurons. Out-of-phase burst activity, occasionally observed in dark-adapted and light-adapted pairs of I(e) and I(i) interneurons, suggests that they receive synaptic input from a common presynaptic source or sources. Rhythmic activity is typically not a characteristic of dark-adapted, light-adapted, or light-evoked firing of type I interneurons. However, burst activity in I(e) and I(i) interneurons may be elicited by electrical stimulation of pedal nerves or generated at the offset of light. Our results indicate that type I interneurons can support the generation of both rhythmic activity and changes in tonic firing depending on sensory input. This suggests that the neural network supporting ciliary locomotion may be multifunctional. However, consistent with the nonmuscular and nonrhythmic characteristics of visually modulated ciliary locomotion, type I interneurons exhibit changes in tonic activity evoked by illumination.
Abstract:
BACKGROUND: It is unclear whether aggressive phototherapy to prevent neurotoxic effects of bilirubin benefits or harms infants with extremely low birth weight (1000 g or less). METHODS: We randomly assigned 1974 infants with extremely low birth weight at 12 to 36 hours of age to undergo either aggressive or conservative phototherapy. The primary outcome was a composite of death or neurodevelopmental impairment determined for 91% of the infants by investigators who were unaware of the treatment assignments. RESULTS: Aggressive phototherapy, as compared with conservative phototherapy, significantly reduced the mean peak serum bilirubin level (7.0 vs. 9.8 mg per deciliter [120 vs. 168 micromol per liter], P<0.01) but not the rate of the primary outcome (52% vs. 55%; relative risk, 0.94; 95% confidence interval [CI], 0.87 to 1.02; P=0.15). Aggressive phototherapy did reduce rates of neurodevelopmental impairment (26%, vs. 30% for conservative phototherapy; relative risk, 0.86; 95% CI, 0.74 to 0.99). Rates of death in the aggressive-phototherapy and conservative-phototherapy groups were 24% and 23%, respectively (relative risk, 1.05; 95% CI, 0.90 to 1.22). In preplanned subgroup analyses, the rates of death were 13% with aggressive phototherapy and 14% with conservative phototherapy for infants with a birth weight of 751 to 1000 g and 39% and 34%, respectively (relative risk, 1.13; 95% CI, 0.96 to 1.34), for infants with a birth weight of 501 to 750 g. CONCLUSIONS: Aggressive phototherapy did not significantly reduce the rate of death or neurodevelopmental impairment. The rate of neurodevelopmental impairment alone was significantly reduced with aggressive phototherapy. This reduction may be offset by an increase in mortality among infants weighing 501 to 750 g at birth. (ClinicalTrials.gov number, NCT00114543.)
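The relative-risk figures quoted above follow the standard log-scale construction for a 95% confidence interval. A minimal sketch, using hypothetical counts chosen only to reproduce the reported 26% vs. 30% impairment rates (the trial's actual denominators differ):

```python
import math

def relative_risk(events_a, n_a, events_b, n_b, z=1.96):
    """Relative risk of group A vs. group B with a 95% CI computed
    on the log scale (Wald interval for log RR)."""
    rr = (events_a / n_a) / (events_b / n_b)
    # Standard error of log(RR) for two independent binomial samples
    se = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    lower = math.exp(math.log(rr) - z * se)
    upper = math.exp(math.log(rr) + z * se)
    return rr, (lower, upper)

# Hypothetical counts chosen only to mirror the reported 26% vs. 30%
# impairment rates; the trial's actual group sizes differ.
rr, ci = relative_risk(234, 900, 270, 900)
```

An interval that excludes 1 (as the reported 0.74 to 0.99 does) is what makes the impairment reduction statistically significant, while the mortality interval (0.90 to 1.22) straddles 1.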
Abstract:
We seek to determine the relationship between threshold and suprathreshold perception for position offset and stereoscopic depth perception under conditions that elevate their respective thresholds. Two threshold-elevating conditions were used: (1) increasing the interline gap and (2) dioptric blur. Although increasing the interline gap increases position (Vernier) offset and stereoscopic disparity thresholds substantially, the perception of suprathreshold position offset and stereoscopic depth remains unchanged. Perception of suprathreshold position offset also remains unchanged when the Vernier threshold is elevated by dioptric blur. We show that such normalization of suprathreshold position offset can be attributed to the topographical-map-based encoding of position. On the other hand, dioptric blur increases the stereoscopic disparity thresholds and reduces the perceived suprathreshold stereoscopic depth, which can be accounted for by a disparity-computation model in which the activities of absolute disparity encoders are multiplied by a Gaussian weighting function that is centered on the horopter. Overall, the statement "equal suprathreshold perception occurs in threshold-elevated and unelevated conditions when the stimuli are equally above their corresponding thresholds" describes the results better than the statement "suprathreshold stimuli are perceived as equal when they are equal multiples of their respective threshold values."
Abstract:
Subfields of the hippocampus display differential dynamics in processing a spatial environment, especially when changes are introduced to the environment. Specifically, when familiar cues in the environment are spatially rearranged, place cells in the CA3 subfield tend to rotate with a particular set of cues (e.g., proximal cues), maintaining a coherent spatial representation. Place cells in CA1, in contrast, display discordant behaviors (e.g., rotating with different sets of cues or remapping) in the same condition. In addition, on average, CA3 place cells shift their firing locations (measured by the center of mass, or COM) backward over time when the animal encounters the changed environment for the first time, but not after that first experience. However, CA1 displays an opposite pattern, in which place cells exhibit the backward COM-shift only from the second day of experience, but not on the first day. Here, we examined the relationship between the environment-representing behavior (i.e., rotation vs. remapping) and the COM-shift of place fields in CA1 and CA3. Both in CA1 and CA3, the backward (as well as forward) COM-shift phenomena occurred regardless of whether the place cell rotated or remapped. The differential, daily time course of the onset/offset of backward COM-shift in the cue-altered environment in CA1 and CA3 (on day 1 in CA3 and from day 2 onward in CA1) stems from different population dynamics between the subfields. The results suggest that heterogeneous, complex plasticity mechanisms underlie the environment-representing behavior (i.e., rotate/remap) and the COM-shifting behavior of the place cell.
Abstract:
Time-based localization techniques such as multilateration are favoured for positioning with wide-band signals. Applying the same techniques to narrow-band signals such as GSM is far less trivial: the process requires both synchronization accuracy and timestamp resolution in the nanosecond range. We propose approaches to deal with both challenges. On the one hand, we introduce a method to eliminate the negative effect of synchronization offset on time measurements. On the other hand, we obtain nanosecond-accurate timestamps by using timing information from the signal processing chain. In a set of experiments ranging from suburban to indoor environments, we show that our proposed approaches improve the localization accuracy of TDOA approaches severalfold. We even demonstrate errors as small as 10 meters for outdoor settings with narrow-band signals.
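As a rough illustration of the time-difference-of-arrival (TDOA) principle behind multilateration (not the authors' estimator), the position can be recovered by minimizing the mismatch between measured and predicted arrival-time differences. A coarse grid-search sketch with a hypothetical base-station layout:

```python
import itertools
import math

C = 299_792_458.0  # speed of light, m/s

def tdoa_residual(p, anchors, tdoas):
    """Sum of squared mismatches between measured and predicted TDOAs.
    TDOAs are taken relative to the first anchor."""
    d = [math.dist(p, a) for a in anchors]
    return sum((tdoas[i] - (d[i] - d[0]) / C) ** 2
               for i in range(1, len(anchors)))

def locate(anchors, tdoas, span=1000, step=5):
    """Coarse grid search for the position minimizing the residual."""
    grid = [(x, y)
            for x in range(0, span + 1, step)
            for y in range(0, span + 1, step)]
    return min(grid, key=lambda p: tdoa_residual(p, anchors, tdoas))

# Hypothetical base-station layout and a true transmitter at (400, 250).
anchors = [(0, 0), (1000, 0), (0, 1000), (1000, 1000)]
true_p = (400, 250)
d = [math.dist(true_p, a) for a in anchors]
tdoas = [(di - d[0]) / C for di in d]
```

With nanosecond-scale timing errors, each TDOA is uncertain by tens of centimetres of path difference, which is why the paper's synchronization-offset and timestamping improvements translate directly into metre-level accuracy.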
Abstract:
The relationship between trade and culture can be singled out and deservedly labelled as unique in the discussion of 'trade and ...' issues. The reasons for this exceptional quality lie in the intensity of the relationship, which is indeed most often framed as 'trade versus culture' and has been a significant stumbling block, especially where audiovisual services are concerned, in the Uruguay Round and in subsequent developments. The second specificity of the relationship is that the international community has organised its efforts in a rather effective manner to offset the lack of satisfactory solutions within the framework of the WTO. The legally binding UNESCO Convention on the Protection and Promotion of the Diversity of Cultural Expressions is a clear sign of the potency of the international endeavour, on the one hand, and of the (almost desperate) desire to contest the existing WTO norms in the field of trade and culture, on the other. A third distinctive characteristic of the pair 'trade and culture', which is rarely mentioned and blissfully ignored in any Geneva or Paris talks, is that while the pro-trade and pro-culture opponents have been digging deeper into their respective trenches, the environment in which trade and cultural issues are to be regulated has radically changed. The emergence and spread of digital technologies have profoundly modified the conditions for cultural content creation, distribution and access, and rendered some of the associated market failures obsolete, thus mitigating to a substantial degree the 'clash' nature of trade and culture. Against this backdrop, the present paper analyses in a finer-grained manner the move from 'trade and culture' towards 'trade versus culture'.
It argues that both the domain of trade and that of culture have suffered from the aspirations to draw clearer lines between the WTO and other trade-related issues, which have charged the conflict to an extent that leaves few opportunities for the practical solutions that would have been feasible in an advanced digital setting.
Abstract:
Gaining economic benefits from substantially lower labor costs has been reported as a major reason for offshoring labor-intensive information systems services to low-wage countries. However, if wage differences are so high, why is there such a high level of variation in the economic success between offshored IS projects? This study argues that offshore outsourcing involves a number of extra costs for the client organization that account for the economic failure of offshore projects. The objective is to disaggregate these extra costs into their constituent parts and to explain why they differ between offshored software projects. The focus is on software development and maintenance projects that are offshored to Indian vendors. A theoretical framework is developed a priori based on transaction cost economics (TCE) and the knowledge-based view of the firm, complemented by factors that acknowledge the specific offshore context. The framework is empirically explored using a multiple case study design including six offshored software projects in a large German financial service institution. The results of our analysis indicate that the client incurs post-contractual extra costs for four types of activities: (1) requirements specification and design, (2) knowledge transfer, (3) control, and (4) coordination. In projects that require a high level of client-specific knowledge about idiosyncratic business processes and software systems, these extra costs were found to be substantially higher than in projects where more general knowledge was needed. Notably, these costs most often arose independently from the threat of opportunistic behavior, challenging the predominant TCE logic of market failure. Rather, the client extra costs were particularly high in client-specific projects because the effort for managing the consequences of the knowledge asymmetries between client and vendor was particularly high in these projects. Prior experiences of the vendor with related client projects were found to reduce the level of extra costs but could not fully offset the increase in extra costs in highly client-specific projects. Moreover, cultural and geographic distance between client and vendor as well as personnel turnover were found to increase client extra costs. Slight evidence was found, however, that the cost-increasing impact of these factors was also leveraged in projects with a high level of required client-specific knowledge (moderator effect). (This paper was recommended for acceptance by Associate Guest Editor Erran Carmel.)
Abstract:
Geodetic observations show several large, sudden increases in flow speed at Helheim Glacier, one of Greenland's largest outlet glaciers, during summer 2007. These step-like accelerations, detected along the length of the glacier, coincide with teleseismically detected glacial earthquakes and major iceberg calving events. No coseismic offset in the position of the glacier surface is observed; instead, modest tsunamis associated with the glacial earthquakes implicate glacier calving in the seismogenic process. Our results link changes in glacier velocity directly to calving-front behavior at Greenland's largest outlet glaciers, on timescales as short as minutes to hours, and clarify the mechanism by which glacial earthquakes occur. Citation: Nettles, M., et al. (2008), Step-wise changes in glacier flow speed coincide with calving and glacial earthquakes at Helheim Glacier, Greenland, Geophys. Res. Lett., 35, L24503, doi: 10.1029/2008GL036127.
Abstract:
The isotopic and chemical signatures of ice-age and Holocene ice from Summit, Greenland and Penny Ice Cap, Baffin Island, Canada, are compared. The usual pattern of low delta(18)O, high Ca2+ and high Cl- is present in the Summit records, but Penny Ice Cap has lower-than-present Cl- in its ice-age ice. A simple extension of the Hansson model (Hansson, 1994) is developed and used to simulate these signatures. The low ice-age Cl- from Penny Ice Cap is explained by the ice-age ice originating many thousands of km inland, near the centre of the Laurentide ice sheet and much further from the marine sources. Summit's flowlines all start close to the present site. The Penny Ice Cap early-Holocene delta(18)O values had to be corrected to offset the Laurentide meltwater distortion. The analysis suggests that presently the Summit and Penny Ice Cap marine impurity originates about 500 km away, and that Penny Ice Cap presently receives a significant amount of local continental impurity.
Abstract:
We performed surface and borehole ground-penetrating radar (GPR) tests, together with moisture probe measurements and direct gas sampling, to detect areas of biogenic gas accumulation in a northern peatland. The main findings are: (1) shadow zones (signal scattering) observed in surface GPR correlate with areas of elevated CH4 and CO2 concentration; (2) high velocities in zero-offset profiles and lower water content inferred from moisture probes correlate with surface GPR shadow zones; (3) zero-offset profiles depict depth-variable gas accumulation of 0–10% by volume; (4) strong reflectors may represent confining layers restricting upward gas migration. Our results have implications for defining the spatial distribution, volume and movement of biogenic gas in peatlands at multiple scales.
Abstract:
I solved equations that describe coupled hydrolysis in and absorption from a continuously stirred tank reactor (CSTR), a plug flow reactor (PFR), and a batch reactor (BR) for the rate of ingestion and/or the throughput time that maximizes the rate of absorption (= gross rate of gain from digestion). Predictions are that foods requiring a single hydrolytic step (e.g., disaccharides) yield ingestion rates that vary inversely with the concentration of food substrate ingested, whereas foods that require multiple hydrolytic and absorptive reactions proceeding in parallel (e.g., proteins) yield maximal ingestion rates at intermediate substrate concentrations. Counterintuitively, then, animals acting to maximize their absorption rates should show compensatory ingestion (more rapid feeding on food of lower concentration), except for the lower range of diet quality for complex diets and except for animals that show purely linear (passive) uptake. At their respective maxima in absorption rates, the PFR and BR yield only modestly higher rates of gain than the CSTR but do so at substantially lower rates of ingestion. All three ideal reactors show a milder-than-linear reduction in rate of absorption when throughput or holding time in the gut is increased (e.g., by scarcity or predation hazard); higher efficiency of hydrolysis and extraction offsets the lower intake. Hence adding feeding costs and hazards of predation is likely to slow ingestion rates and raise absorption efficiencies substantially over the cost-free optima found here.
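The trade-off the abstract describes can be illustrated with a deliberately simplified stand-in for the paper's reactor equations: a batch-reactor gut with first-order hydrolysis, where the optimal holding time maximizes gain per total cycle time. All parameter values below are hypothetical:

```python
import math

def absorption_rate(t, t_load=0.2, s0=1.0, k=1.5):
    """Gain rate for a batch-reactor gut: a meal of substrate s0 is
    hydrolysed first-order (rate k) for a holding time t, after which
    refilling the gut costs a fixed loading time t_load."""
    gain = s0 * (1.0 - math.exp(-k * t))  # fraction hydrolysed and absorbed
    return gain / (t + t_load)            # gain per total cycle time

def best_holding_time(t_load=0.2, step=1e-3, t_max=5.0):
    """Numerically scan holding times for the maximum gain rate."""
    ts = [i * step for i in range(1, int(t_max / step))]
    return max(ts, key=lambda t: absorption_rate(t, t_load=t_load))
```

Consistent with the abstract's closing point, raising the fixed cost per meal (larger t_load) pushes the optimal holding time up, trading a slower ingestion rate for a higher hydrolysis efficiency per meal.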
Abstract:
Unlike previously explored relationships between the properties of hot Jovian atmospheres, the geometric albedo and the incident stellar flux do not exhibit a clear correlation, as revealed by our re-analysis of Q0-Q14 Kepler data. If the albedo is primarily associated with the presence of clouds in these irradiated atmospheres, a holistic modeling approach needs to relate the following properties: the strength of stellar irradiation (and hence the strength and depth of atmospheric circulation), the geometric albedo (which controls both the fraction of starlight absorbed and the pressure level at which it is predominantly absorbed), and the properties of the embedded cloud particles (which determine the albedo). The anticipated diversity in cloud properties renders any correlation between the geometric albedo and the stellar flux weak and characterized by considerable scatter. In the limit of vertically uniform populations of scatterers and absorbers, we use an analytical model and scaling relations to relate the temperature-pressure profile of an irradiated atmosphere and the photon deposition layer and to estimate whether a cloud particle will be lofted by atmospheric circulation. We derive an analytical formula for computing the albedo spectrum in terms of the cloud properties, which we compare to the measured albedo spectrum of HD 189733b by Evans et al. Furthermore, we show that whether an optical phase curve is flat or sinusoidal depends on whether the particles are small or large as defined by the Knudsen number. This may be an explanation for why Kepler-7b exhibits evidence for the longitudinal variation in abundance of condensates, while Kepler-12b shows no evidence for the presence of condensates despite the incident stellar flux being similar for both exoplanets. We include an "observer's cookbook" for deciphering various scenarios associated with the optical phase curve, the peak offset of the infrared phase curve, and the geometric albedo.
Abstract:
Hot Jupiters, due to the proximity to their parent stars, are subjected to a strong irradiating flux that governs their radiative and dynamical properties. We compute a suite of three-dimensional circulation models with dual-band radiative transfer, exploring a relevant range of irradiation temperatures, both with and without temperature inversions. We find that, for irradiation temperatures T_irr ≲ 2000 K, heat redistribution is very efficient, producing comparable dayside and nightside fluxes. For T_irr ≈ 2200–2400 K, the redistribution starts to break down, resulting in a high day-night flux contrast. Our simulations indicate that the efficiency of redistribution is primarily governed by the ratio of advective to radiative timescales. Models with temperature inversions display a higher day-night contrast due to the deposition of starlight at higher altitudes, but we find this opacity-driven effect to be secondary compared to the effects of irradiation. The hotspot offset from the substellar point is large when insolation is weak and redistribution is efficient, and decreases as redistribution breaks down. The atmospheric flow can be potentially subjected to the Kelvin-Helmholtz instability (as indicated by the Richardson number) only in the uppermost layers, with a depth that penetrates down to pressures of a few millibars at most. Shocks penetrate deeper, down to several bars in the hottest model. Ohmic dissipation generally occurs down to deeper levels than shock dissipation (to tens of bars), but the penetration depth varies with the atmospheric opacity. The total dissipated Ohmic power increases steeply with the strength of the irradiating flux and the dissipation depth recedes into the atmosphere, favoring radius inflation in the most irradiated objects. A survey of the existing data, as well as the inferences made from them, reveals that our results are broadly consistent with the observational trends.
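The advective-to-radiative timescale ratio invoked above can be estimated with standard order-of-magnitude scalings (τ_rad ≈ P c_p / (4 g σ T³) and τ_adv ≈ R_p / v), not the paper's simulation outputs; all planetary parameters in this sketch are hypothetical:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def t_radiative(pressure, temperature, c_p=1.3e4, g=10.0):
    """Radiative cooling timescale ~ P c_p / (4 g sigma T^3), seconds."""
    return pressure * c_p / (4.0 * g * SIGMA * temperature ** 3)

def t_advective(radius, wind_speed):
    """Horizontal advection timescale ~ R_p / v, seconds."""
    return radius / wind_speed

# Hypothetical hot Jupiter: photospheric pressure 0.1 bar (1e4 Pa),
# radius 1e8 m, equatorial jet of 2 km/s.
t_adv = t_advective(1e8, 2.0e3)
ratio_cool = t_adv / t_radiative(1e4, 1500.0)  # weakly irradiated case
ratio_hot = t_adv / t_radiative(1e4, 2400.0)   # strongly irradiated case
```

Because τ_rad falls as T⁻³ while τ_adv stays roughly fixed, the ratio grows with irradiation temperature, which is the sense in which heat redistribution breaks down for the hottest models.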
Abstract:
Offset printing is a common method for producing large amounts of printed matter. We consider a real-world offset printing process that is used to imprint customer-specific designs on napkin pouches. The printing technology used yields a number of specific constraints. The planning problem consists of allocating designs to printing-plate slots such that the given customer demand for each design is fulfilled, all technological and organizational constraints are met, and the total overproduction and setup costs are minimized. We formulate this planning problem as a mixed-binary linear program, and we develop a multi-pass matching-based savings heuristic. We report computational results for a set of problem instances derived from real-world data.
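A drastically simplified version of the slot-allocation problem (one plate, identical slots, overproduction cost only; the paper's actual model also covers setup costs and further technological constraints) can be brute-forced for tiny instances. All demand figures are hypothetical:

```python
import itertools
import math

def overproduction(alloc, demand):
    """Overproduction when design i occupies alloc[i] slots: every sheet
    prints all slots, so the run length is set by the tightest design."""
    runs = max(math.ceil(d / n) for d, n in zip(demand, alloc))
    return sum(n * runs - d for d, n in zip(demand, alloc))

def best_allocation(demand, n_slots):
    """Exhaustively search allocations giving each design >= 1 slot."""
    best = None
    for alloc in itertools.product(range(1, n_slots + 1),
                                   repeat=len(demand)):
        if sum(alloc) != n_slots:
            continue
        over = overproduction(alloc, demand)
        if best is None or over < best[1]:
            best = (alloc, over)
    return best

# Hypothetical instance: 3 designs with given pouch demands, 8 plate slots.
demand = [6000, 3000, 1000]
alloc, over = best_allocation(demand, 8)
```

Since every printed sheet carries all slots, total overproduction equals (runs × slots) minus total demand, so minimizing overproduction here is equivalent to minimizing the number of runs; the real problem loses this simple structure once setup costs and multiple plates enter, which is what motivates the MILP formulation and the savings heuristic.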