34 results for "rotated to zero"
in BORIS: Bern Open Repository and Information System - Bern, Switzerland
Abstract:
Surface platforms were engineered from poly(L-lysine)-graft-poly(2-methyl-2-oxazoline) (PLL-g-PMOXA) copolymers to study the mechanisms involved in the non-specific adhesion of Escherichia coli (E. coli) bacteria. Copolymers with three different grafting densities (PMOXA chains/lysine residue of 0.09, 0.33 and 0.56) were synthesized and assembled on niobia (Nb2O5) surfaces. PLL-modified and bare niobia surfaces served as controls. To evaluate the impact of fimbriae expression on bacterial adhesion, the surfaces were exposed to genetically engineered E. coli strains either lacking, or constitutively expressing, type 1 fimbriae. Bacterial adhesion was strongly influenced by the presence of fimbriae. Non-fimbriated bacteria behaved like hard, charged particles whose adhesion depended on surface charge and on the ionic strength of the medium. In contrast, bacteria expressing type 1 fimbriae adhered to the substrates independently of surface charge and ionic strength, and adhesion was mediated by non-specific van der Waals and hydrophobic interactions of the proteins at the fimbrial tip. Adsorbed polymer mass, average surface density of the PMOXA chains, and thickness of the copolymer films were quantified by optical waveguide lightmode spectroscopy (OWLS) and variable-angle spectroscopic ellipsometry (VASE), whereas lateral homogeneity was probed by time-of-flight secondary ion mass spectrometry (ToF-SIMS). Streaming current measurements provided information on the charge formation of the polymer-coated and the bare niobia surfaces. The adhesion of both bacterial strains could be efficiently inhibited only by the copolymer film with a grafting density of 0.33, which exhibited the highest PMOXA chain surface density and a surface potential close to zero.
Abstract:
Ocean acidification might reduce the ability of calcifying plankton to produce and maintain their shells of calcite, or of aragonite, the more soluble form of CaCO3. In addition to possibly large biological impacts, reduced CaCO3 production corresponds to a negative feedback on atmospheric CO2. In order to explore the sensitivity of the ocean carbon cycle to increasing concentrations of atmospheric CO2, we use the new biogeochemical Bern3D/PISCES model. The model reproduces the large-scale distributions of biogeochemical tracers. With a range of sensitivity studies, we explore the effect of (i) using different parameterizations of CaCO3 production fitted to available laboratory and field experiments, (ii) letting calcite and aragonite be produced by auto- and heterotrophic plankton groups, and (iii) using carbon emissions from the range of the most recent IPCC Representative Concentration Pathways (RCP). Under a high-emission scenario, the CaCO3 production of all the model versions decreases from ~1 Pg C yr⁻¹ to between 0.36 and 0.82 Pg C yr⁻¹ by the year 2100. The changes in CaCO3 production and dissolution resulting from ocean acidification provide only a small feedback on atmospheric CO2 of 1 to 11 ppm by the year 2100, despite the wide range of parameterizations, model versions and scenarios included in our study. A potential upper limit of the CO2-calcification/dissolution feedback of 30 ppm by the year 2100 is computed by setting calcification to zero after 2000 in a high 21st-century emission scenario. The similarity of the feedback estimates yielded by the model version with calcite produced by nanophytoplankton and by the version with calcite or aragonite produced by mesozooplankton suggests that extending biogeochemical models to calcifying zooplankton might not be needed to simulate biogeochemical impacts on the marine carbonate cycle.
The changes in saturation state confirm previous studies indicating that future anthropogenic CO2 emissions may lead to irreversible changes in Ω_A for several centuries. Furthermore, due to the long-term changes in the deep ocean, the ratio of open-water CaCO3 dissolution to production stabilizes by the year 2500 at a value that is 30–50% higher than at pre-industrial times when carbon emissions are set to zero after 2100.
Abstract:
Unique contributions of Big Five personality factors to academic performance in young elementary school children were explored. Extraversion and Openness (labeled Culture in our study) uniquely contributed to academic performance, over and above the contribution of executive functions in first and second grade children (N = 446). Well established associations between Conscientiousness and academic performance, however, could only be replicated with regard to zero-order correlations. Executive functions (inhibition, updating, and shifting), for their part, proved to be powerful predictors of academic performance. Results were to some extent dependent on the criterion with which academic performance was measured: Both personality factors had stronger effects on grades than on standardized achievement tests, whereas the opposite was true for executive functions. Finally, analyses on gender differences revealed that Extraversion and Openness/Culture played a more dominant role in girls than in boys, but only regarding grades.
Abstract:
We calculate the O(α_s) corrections to the double differential decay width dΓ_77/(ds_1 ds_2) for the process B → X_s γγ, originating from diagrams involving the electromagnetic dipole operator O_7. The kinematical variables s_1 and s_2 are defined as s_i = (p_b − q_i)²/m_b², where p_b, q_1, q_2 are the momenta of the b quark and the two photons. We introduce a nonzero mass m_s for the strange quark to regulate configurations where the gluon or one of the photons becomes collinear with the strange quark, and retain terms which are logarithmic in m_s, while discarding terms which go to zero in the limit m_s → 0. When combining virtual and bremsstrahlung corrections, the infrared and collinear singularities induced by soft and/or collinear gluons drop out. By our cuts the photons do not become soft, but one of them can become collinear with the strange quark. This implies that in the final result a single logarithm of m_s survives. In principle, the configurations with collinear photon emission could be treated using fragmentation functions. In a related work we find that similar results can be obtained when simply interpreting m_s appearing in the final result as a constituent mass. We do so in the present paper and vary m_s between 400 and 600 MeV in the numerics. This work extends a previous paper by us, where only the leading power terms with respect to the (normalized) hadronic mass s_3 = (p_b − q_1 − q_2)²/m_b² were taken into account in the underlying triple differential decay width dΓ_77/(ds_1 ds_2 ds_3).
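The kinematic variables defined in this abstract can be collected in one display (this simply restates the definitions above in LaTeX):

```latex
% Normalized kinematic invariants for b -> s gamma gamma
s_i = \frac{(p_b - q_i)^2}{m_b^2}, \quad i = 1,2,
\qquad
s_3 = \frac{(p_b - q_1 - q_2)^2}{m_b^2},
```

where p_b is the b-quark momentum and q_1, q_2 are the photon momenta, so s_3 is the (normalized) hadronic mass squared of the X_s system.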
Abstract:
Stepwise uncertainty reduction (SUR) strategies aim at constructing a sequence of points for evaluating a function f in such a way that the residual uncertainty about a quantity of interest progressively decreases to zero. Using such strategies in the framework of Gaussian process modeling has been shown to be efficient for estimating the volume of excursion of f above a fixed threshold. However, SUR strategies remain cumbersome to use in practice because of their high computational complexity and the fact that they deliver a single point at each iteration. In this article we introduce several multipoint sampling criteria, allowing the selection of batches of points at which f can be evaluated in parallel. Such criteria are of particular interest when f is costly to evaluate and several CPUs are simultaneously available. We also manage to drastically reduce the computational cost of these strategies through the use of closed-form formulas. We illustrate their performance in various numerical experiments, including a nuclear safety test case. Basic notions about kriging, auxiliary problems, complexity calculations, R code, and data are available online as supplementary materials.
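The batch-selection idea can be illustrated with a toy sketch: given a Gaussian process posterior mean and standard deviation on a grid, pick a batch of q points where the classification above/below the threshold is most uncertain. This is only a naive proxy for the paper's closed-form multipoint SUR criteria (which account for interactions within the batch); all names and values below are illustrative.

```python
import numpy as np
from math import erf, sqrt

def excursion_prob(mean, sd, threshold):
    # P[f(x) > threshold] under the GP posterior at each grid point
    z = (mean - threshold) / sd
    return 0.5 * (1.0 + np.vectorize(erf)(z / sqrt(2.0)))

def greedy_batch(mean, sd, threshold, q):
    # Toy stand-in for a multipoint SUR criterion: select the q grid
    # points whose above/below-threshold classification is most
    # uncertain, i.e. where p*(1-p) is largest.
    p = excursion_prob(mean, sd, threshold)
    score = p * (1.0 - p)
    return np.argsort(score)[-q:][::-1]

# Usage on a synthetic 1-D posterior: the most uncertain points cluster
# around the zero crossings of the posterior mean.
x = np.linspace(0.0, 1.0, 101)
mean = np.sin(2 * np.pi * x)    # hypothetical posterior mean
sd = 0.3 * np.ones_like(x)      # hypothetical posterior std. deviation
batch = greedy_batch(mean, sd, threshold=0.0, q=4)
```

A real multipoint criterion would also penalize redundancy between the q points (evaluating two nearly identical points reduces uncertainty less than twice the single-point gain), which is exactly what the closed-form expressions in the article make affordable.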
Abstract:
During the generalization of epileptic seizures, pathological activity in one brain area recruits distant brain structures into joint synchronous discharges. However, it remains unknown whether specific changes in local circuit activity are related to the aberrant recruitment of anatomically distant structures into epileptiform discharges. Further, it is not known whether aberrant areas recruit or entrain healthy ones into pathological activity. Here we study the dynamics of local circuit activity during the spread of epileptiform discharges in the zero-magnesium in vitro model of epilepsy. We employ high-speed multi-photon imaging in combination with dual whole-cell recordings in acute thalamocortical (TC) slices of the juvenile mouse to characterize the generalization of epileptic activity between neocortex and thalamus. We find that, although both structures are exposed to zero-magnesium, the initial onset of focal epileptiform discharge occurs in cortex. This suggests that local recurrent connectivity that is particularly prevalent in cortex is important for the initiation of seizure activity. Subsequent recruitment of thalamus into joint, generalized discharges is coincident with an increase in the coherence of local cortical circuit activity that itself does not depend on thalamus. Finally, the intensity of population discharges is positively correlated between both brain areas. This suggests that during and after seizure generalization not only the timing but also the amplitude of epileptiform discharges in thalamus is entrained by cortex. Together these results suggest a central role of neocortical activity for the onset and the structure of pathological recruitment of thalamus into joint synchronous epileptiform discharges.
Abstract:
An experiment was conducted to determine the effect of grazing versus zero-grazing on energy expenditure (EE), feeding behaviour and physical activity in dairy cows at different stages of lactation. Fourteen Holstein cows were subjected to two treatments in a repeated crossover design with three experimental series (S1, S2, and S3) reflecting increased days in milk (DIM). At the beginning of each series, cows were on average at 38, 94 and 171 (standard deviation (SD) 10.8) DIM, respectively. Each series consisted of two periods, each containing a 7-d adaptation and a 7-d collection period. Cows either grazed on pasture for 16–18.5 h per day or were kept in a freestall barn with ad libitum access to herbage harvested from the same paddock. Herbage intake was estimated using the double alkane technique. On each day of the collection period, EE of one cow in the barn and of one cow on pasture was determined for 6 h by using the 13C bicarbonate dilution technique, with blood sample collection done either manually in the barn or using an automatic sampling system on pasture. Furthermore, during each collection period physical activity and feeding behaviour of the cows were recorded over 3 d using pedometers and behaviour recorders. Milk yield decreased with increasing DIM (P<0.001) but was similar with both treatments. Herbage intake was lower (P<0.01) for grazing cows (16.8 kg dry matter (DM)/d) compared to zero-grazing cows (18.9 kg DM/d). The lowest (P<0.001) intake was observed in S1, and similar intakes were observed in S2 and S3. Within the 6-h measurement period, grazing cows expended 19% more (P<0.001) energy (319 versus 269 kJ/kg metabolic body size (BW^0.75)) than zero-grazing cows, and differences in EE did not change with increasing DIM. Grazing cows spent proportionally more (P<0.001) time walking and less time standing (P<0.001) and lying (P<0.05) than zero-grazing cows.
The proportion of time spent eating was greater (P<0.001) and that of time spent ruminating was lower (P<0.05) for grazing cows compared to zero-grazing cows. In conclusion, lower feed intake along with unchanged milk production indicates that grazing cows mobilized body reserves to cover additional energy requirements, which were at least partly caused by greater physical activity. However, changes in the cows' behaviour between the considered time points during lactation were too small, so differences in EE between treatments remained similar with increasing DIM.
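The 19% figure quoted above follows directly from the two treatment means for energy expenditure; a quick arithmetic check:

```python
# Relative difference in energy expenditure over the 6-h window,
# using the per-kg-BW^0.75 values quoted in the abstract.
ee_grazing = 319.0       # kJ/kg BW^0.75, grazing cows
ee_zero_grazing = 269.0  # kJ/kg BW^0.75, zero-grazing cows
relative_increase = (ee_grazing - ee_zero_grazing) / ee_zero_grazing
print(round(relative_increase * 100))  # prints 19 (% more energy expended)
```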
Abstract:
Vertebral compression fracture is a common medical problem in osteoporotic individuals. The quantitative computed tomography (QCT)-based finite element (FE) method may be used to predict vertebral strength in vivo, but needs to be validated with experimental tests. The aim of this study was to validate a nonlinear anatomy-specific QCT-based FE model by using a novel testing setup. Thirty-seven human thoracolumbar vertebral bone slices were prepared by removing cortical endplates and posterior elements. The slices were scanned with QCT and the volumetric bone mineral density (vBMD) was computed with the standard clinical approach. A novel experimental setup was designed to induce a realistic failure in the vertebral slices in vitro. Rotation of the loading plate was allowed by means of a ball joint. To minimize device compliance, the specimen deformation was measured directly on the loading plate with three sensors. A nonlinear FE model was generated from the calibrated QCT images, and the computed vertebral stiffness and strength were compared to those measured during the experiments. In agreement with clinical observations, most of the vertebrae underwent an anterior wedge-shape fracture. As expected, the FE method predicted both stiffness and strength better than vBMD (R2 improved from 0.27 to 0.49 and from 0.34 to 0.79, respectively). Despite the lack of fitting parameters, the linear regression of the FE prediction for strength was close to the 1:1 relation (slope close to one (0.86) and intercept close to zero (0.72 kN)). In conclusion, a nonlinear FE model was successfully validated through a novel experimental technique for generating wedge-shape fractures in human thoracolumbar vertebrae.
Abstract:
Surgical repair of the rotator cuff is one of the most common procedures in orthopedic surgery. Despite being the focus of much research, the physiological tendon-bone insertion is not recreated following repair, and there is an anatomic non-healing rate of up to 94%. During the healing phase, several growth factors are upregulated that induce cellular proliferation and matrix deposition. Subsequently, this provisional matrix is replaced by the definitive matrix. Leukocyte- and platelet-rich fibrin (L-PRF) contains growth factors and has a stable, dense fibrin matrix. Therefore, the use of L-PRF in rotator cuff repair is theoretically attractive. The aim of the present study was to determine 1) the optimal protocol to achieve the highest leukocyte content; 2) whether L-PRF releases growth factors in a sustained manner over 28 days; 3) whether standard/gelatinous or dry/compressed matrix preparation methods result in higher growth factor concentrations. 1) The standard L-PRF centrifugation protocol at 400 × g showed the highest concentration of platelets and leukocytes. 2) The L-PRF clots cultured in medium showed a continuous slow release, with an increase in the absolute release of the growth factors TGF-β1, VEGF and MPO in the first 7 days, and of IGF-1, PDGF-AB and platelet activity (PF4 = CXCL4) in the first 8 hours, followed by a decrease to close to zero at 28 days. Significantly higher levels of growth factor were expressed relative to the control values of normal blood at each culture time point. 3) Except for MPO and TGF-β1, there was always a tendency towards higher release of growth factors (i.e., CXCL4, IGF-1, PDGF-AB, and VEGF) in the standard/gelatinous group compared to the dry/compressed group. L-PRF in its optimal standard/gelatinous-type matrix can store and deliver locally specific healing growth factors for up to 28 days and may be a useful adjunct in rotator cuff repair.
Abstract:
Multislice computed tomography (MSCT) and magnetic resonance imaging (MRI) are increasingly used for forensic purposes. Based on broad experience in clinical neuroimaging, post-mortem MSCT and MRI were performed in 57 forensic cases with the goal of evaluating the radiological methods concerning their usability for forensic head and brain examination. An experienced clinical radiologist evaluated the imaging data. The results were compared to the autopsy findings, which served as the gold standard with regard to common forensic neurotrauma findings such as skull fractures, soft tissue lesions of the scalp, various forms of intracranial hemorrhage, or signs of increased brain pressure. The sensitivity of the imaging methods ranged from 100% (e.g., heat-induced alterations, intracranial gas) to zero (e.g., mediobasal impression marks as a sign of increased brain pressure, plaques jaunes). The agreement between MRI and CT was 69%. The radiological methods prevalently failed in the detection of lesions smaller than 3 mm in size, whereas they were generally satisfactory concerning the evaluation of intracranial hemorrhage. Due to its advanced 2D and 3D post-processing possibilities, CT in particular possessed certain advantages in comparison with autopsy with regard to forensic reconstruction. MRI showed forensically relevant findings not seen during autopsy in several cases. The partly limited sensitivity of imaging observed in this retrospective study was based on several factors: besides general technical limitations, it became apparent that clinical radiologists require a sound basic forensic background in order to detect specific signs. Focused teaching sessions will be essential to improve the outcome in future examinations. On the other hand, the autopsy protocols should be further standardized to allow an exact comparison of imaging and autopsy data.
In consideration of these facts, MRI and CT have the power to play an important role in future forensic neuropathological examination.
Abstract:
This paper asks how takeover and failure hazards change as listed firms get older. The hypothesis is that they increase because firms gradually run out of growth opportunities. We find the opposite: both takeover and failure hazards drop significantly with age. The decline in takeover hazard can be explained with Loderer, Stulz, and Waelchli's (2013) "buggy whip makers" hypothesis: because old firms are comparatively well-managed and are affected by limited agency problems, on average, they offer little value-added potential to acquirers. Failure hazard drops because of learning. The results are robust to various alternative interpretations and cannot be explained by unobserved heterogeneity. While hazards decline with age, they do not go to zero. This explains why, eventually, all listed firms disappear.
Abstract:
One-dimensional dynamic computer simulation was employed to investigate the separation and migration order change of ketoconazole enantiomers at low pH in the presence of increasing amounts of (2-hydroxypropyl)-β-cyclodextrin (OHP-β-CD). The 1:1 interaction of ketoconazole with the neutral cyclodextrin was simulated under real experimental conditions and by varying input parameters for complex mobilities and complexation constants. Simulation results obtained with experimentally determined apparent ionic mobilities, complex mobilities, and complexation constants were found to compare well with the calculated separation selectivity and experimental data. Simulation data revealed that the migration order of the ketoconazole enantiomers at low OHP-β-CD concentrations (i.e. below migration order inversion) is essentially determined by the difference in complexation constants, and at high OHP-β-CD concentrations (i.e. above migration order inversion) by the difference in complex mobilities. Furthermore, simulations with complex mobilities set to zero provided data that mimic migration order and separation with the chiral selector being immobilized. For the studied CEC configuration, no migration order inversion is predicted, and separations are shown to be quicker and electrophoretic transport reduced in comparison to migration in free solution. The presented data illustrate that dynamic computer simulation is a valuable tool to study the electrokinetic migration and separation of enantiomers in the presence of a complexing agent.
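The roles of the complexation constants and complex mobilities can be sketched with the standard fast-exchange 1:1 complexation model for effective mobility, mu_eff = (mu_free + mu_complex·K·C)/(1 + K·C). The parameter values below are hypothetical, chosen only to reproduce the qualitative behaviour described in the abstract (order set by the complexation constants at low selector concentration, by the complex mobilities at high concentration, with an inversion in between); they are not the experimentally determined values of the study.

```python
def effective_mobility(mu_free, mu_complex, K, C):
    # Effective electrophoretic mobility of an analyte undergoing fast
    # 1:1 complexation with a neutral selector at concentration C (M).
    return (mu_free + mu_complex * K * C) / (1.0 + K * C)

# Hypothetical parameters for two enantiomers: identical free mobility,
# different complexation constants and complex mobilities.
mu_free = 20.0e-9               # m^2/(V*s), free-analyte mobility
K1, K2 = 120.0, 100.0           # complexation constants, 1/M
mu_c1, mu_c2 = 6.0e-9, 4.0e-9   # complex mobilities, m^2/(V*s)

low_C, high_C = 1e-3, 0.1       # selector concentrations, M
d_low = (effective_mobility(mu_free, mu_c1, K1, low_C)
         - effective_mobility(mu_free, mu_c2, K2, low_C))
d_high = (effective_mobility(mu_free, mu_c1, K1, high_C)
          - effective_mobility(mu_free, mu_c2, K2, high_C))
# At low C, the enantiomer with the larger K is slowed more (d_low < 0);
# at high C, the larger complex mobility dominates (d_high > 0):
# the migration order inverts between the two regimes.
```

Setting mu_complex to zero in this model reproduces the immobilized-selector (CEC-like) limit mentioned in the abstract, in which the mobility difference is governed by K alone and no inversion can occur.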
Abstract:
We study the spectral properties of the two-dimensional Dirac operator on bounded domains together with the appropriate boundary conditions which provide a (continuous) model for graphene nanoribbons. These are of two types, namely, the so-called armchair and zigzag boundary conditions, depending on the line along which the material was cut. In the former case, we show that the spectrum behaves in what might be called a classical way; while in the latter, we prove the existence of a sequence of finite multiplicity eigenvalues converging to zero and which correspond to edge states.