31 results for Univalent polynomial
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
In this study we investigated whether synesthetic color experiences have effects similar to those of real colors in cognitive conflict adaptation. We tested 24 synesthetes and two yoke-matched control groups in a task-switching experiment that involved regular switches between three simple decision tasks (a color decision, a form decision, and a size decision). In most of the trials the stimuli were univalent, that is, specific to each task. However, occasionally, black graphemes were presented for the size decisions, and we tested whether they would trigger synesthetic color experiences and thus turn them into bivalent stimuli. The results confirmed this expectation. We were also interested in their effect on subsequent performance (i.e., the bivalency effect). The results showed that for synesthetic colors the bivalency effect was not as pronounced as for real colors. The latter result may be related to differences between synesthetes and controls in coping with color conflict.
Abstract:
Neurally adjusted ventilatory assist (NAVA) delivers airway pressure (P(aw)) in proportion to the electrical activity of the diaphragm (EAdi) using an adjustable proportionality constant (NAVA level, cm·H(2)O/μV). During systematic increases in the NAVA level, feedback-controlled down-regulation of the EAdi results in a characteristic two-phased response in P(aw) and tidal volume (Vt). The transition from the 1st to the 2nd response phase allows identification of adequate unloading of the respiratory muscles with NAVA (NAVA(AL)). We aimed to develop and validate a mathematical algorithm to identify NAVA(AL). P(aw), Vt, and EAdi were recorded while systematically increasing the NAVA level in 19 adult patients. In a multistep approach, inspiratory P(aw) peaks were first identified by dividing the EAdi into inspiratory portions using Gaussian mixture modeling. Two polynomials were then fitted onto the curves of both P(aw) peaks and Vt. The beginning of the P(aw) and Vt plateaus, and thus NAVA(AL), was identified at the minimum of squared polynomial derivative and polynomial fitting errors. A graphical user interface was developed in the Matlab computing environment. Median NAVA(AL) visually estimated by 18 independent physicians was 2.7 (range 0.4 to 5.8) cm·H(2)O/μV and identified by our model was 2.6 (range 0.6 to 5.0) cm·H(2)O/μV. NAVA(AL) identified by our model was below the range of visually estimated NAVA(AL) in two instances and was above in one instance. We conclude that our model identifies NAVA(AL) in most instances with acceptable accuracy for application in clinical routine and research.
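The paper's plateau-detection tool was implemented as a Matlab GUI; the NumPy sketch below is only an illustration of the core idea (the function name, polynomial degree, and normalization are our assumptions, and the Gaussian-mixture segmentation of the EAdi is omitted): NAVA(AL) is located where the squared polynomial derivative plus the squared fitting error of the P(aw) and Vt curves is minimal.

```python
import numpy as np

def identify_nava_al(nava_levels, paw_peaks, vt, deg=4):
    # Hypothetical sketch: fit a polynomial to each response curve and
    # score every recorded NAVA level by the squared polynomial derivative
    # plus the squared fitting residual; the minimum marks the assumed
    # beginning of the P(aw)/Vt plateau.
    nava_levels = np.asarray(nava_levels, dtype=float)

    def plateau_score(y):
        y = np.asarray(y, dtype=float)
        y = y / np.max(np.abs(y))            # normalize curves for comparability
        coeffs = np.polyfit(nava_levels, y, deg)
        fit = np.polyval(coeffs, nava_levels)
        deriv = np.polyval(np.polyder(coeffs), nava_levels)
        return deriv**2 + (y - fit)**2

    score = plateau_score(paw_peaks) + plateau_score(vt)
    return nava_levels[np.argmin(score)]
```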
Abstract:
OBJECTIVES: To estimate changes in coronary risk factors and their implications for coronary heart disease (CHD) rates in men starting highly active antiretroviral therapy (HAART). METHODS: Men participating in the Swiss HIV Cohort Study with measurements of coronary risk factors both before and up to 3 years after starting HAART were identified. Fractional polynomial regression was used to graph associations between risk factors and time on HAART. Mean risk factor changes associated with starting HAART were estimated using multilevel models. A prognostic model was used to predict corresponding CHD rate ratios. RESULTS: Of 556 eligible men, 259 (47%) started a nonnucleoside reverse transcriptase inhibitor (NNRTI)-based regimen and 297 a protease inhibitor (PI)-based regimen. Levels of most risk factors increased sharply during the first 3 months on HAART, then more slowly. Increases were greater with PI- than NNRTI-based HAART for total cholesterol (1.18 vs. 0.98 mmol L(-1)), systolic blood pressure (3.6 vs. 0 mmHg) and BMI (1.04 vs. 0.55 kg m(-2)) but not HDL cholesterol (0.24 vs. 0.32 mmol L(-1)) or glucose (1.02 vs. 1.03 mmol L(-1)). Predicted CHD rate ratios were 1.40 (95% CI 1.13-1.75) and 1.17 (0.95-1.47) for PI- and NNRTI-based HAART, respectively. CONCLUSIONS: Coronary heart disease rates will increase in a majority of patients starting HAART; however, the increases corresponding to typical changes in risk factors are relatively modest and could be offset by lifestyle changes.
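Fractional polynomial regression here refers to the standard Royston-Altman family; a minimal sketch of a first-degree fractional-polynomial fit (our own illustration, not the study's code; the function name and power set follow the textbook convention) could look like this:

```python
import numpy as np

# Candidate powers of the first-degree fractional-polynomial family
# (Royston & Altman); power 0 conventionally denotes log(t).
FP_POWERS = (-2, -1, -0.5, 0, 0.5, 1, 2, 3)

def best_fp1(t, y):
    # t: time on HAART (must be positive for logs and negative powers);
    # y: risk-factor values. Returns the least-squares-best power and
    # its regression coefficients.
    t, y = np.asarray(t, dtype=float), np.asarray(y, dtype=float)
    best_rss, best = np.inf, None
    for p in FP_POWERS:
        basis = np.log(t) if p == 0 else t**p
        X = np.column_stack([np.ones_like(t), basis])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = np.sum((y - X @ beta) ** 2)
        if rss < best_rss:
            best_rss, best = rss, (p, beta)
    return best
```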
Abstract:
In four experiments we investigated whether incidental task sequence learning occurs when no instructional task cues are available (i.e. with univalent stimuli). We manipulated task sequence by presenting three simple binary-choice tasks (colour, form or letter case decisions) in a regular repeated or a random order. Participants were required to use the same two response keys for each of the tasks. We manipulated response sequence by ordering the stimuli so as to produce either a regular or a random order of left- versus right-hand key presses. When the sequencing in both streams, or in either separate stream (i.e. task sequence and/or response sequence), was changed to random, only those participants who had processed both sequences together showed evidence of sequence learning in terms of significant response time disruption (Experiments 1-3). This effect disappeared when the sequences were uncorrelated (Experiment 4). The results indicate that only the correlated integration of task sequence and response sequence produced a reliable incidental learning effect. As this effect depends on the predictable ordering of stimulus categories, it suggests that task sequence learning is perceptual rather than conceptual in nature.
Abstract:
The problem of re-sampling spatially distributed data organized into regular or irregular grids to finer or coarser resolution is a common task in data processing. This procedure is known as 'gridding' or 're-binning'. Depending on the quantity the data represent, the gridding algorithm has to meet different requirements. For example, histogrammed physical quantities such as mass or energy have to be re-binned in order to conserve the overall integral. Moreover, if the quantity is positive definite, negative sampling values should be avoided. The gridding process requires a re-distribution of the original data set to a user-requested grid according to a distribution function. The distribution function can be determined on the basis of the given data by interpolation methods. In general, accurate interpolation with respect to multiple boundary conditions of heavily fluctuating data requires polynomial interpolation functions of second or even higher order. However, this may result in unrealistic deviations (overshoots or undershoots) of the interpolation function from the data. Accordingly, the re-sampled data may overestimate or underestimate the given data by a significant amount. The gridding algorithm presented in this work was developed in order to overcome these problems. Instead of a straightforward interpolation of the given data using high-order polynomials, a parametrized Hermitian interpolation curve was used to approximate the integrated data set. A single parameter is determined by which the user can control the behavior of the interpolation function, i.e. the amount of overshoot and undershoot. Furthermore, it is shown how the algorithm can be extended to multidimensional grids. The algorithm was compared to commonly used gridding algorithms using linear and cubic interpolation functions. It is shown that such interpolation functions may overestimate or underestimate the source data by about 10-20%, while the new algorithm can be tuned to significantly reduce these interpolation errors. The accuracy of the new algorithm was tested on a series of X-ray CT images (head and neck, lung, pelvis). The new algorithm significantly improves the accuracy of the sampled images in terms of the mean square error and a quality index introduced by Wang and Bovik (2002 IEEE Signal Process. Lett. 9 81-4).
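A minimal one-dimensional sketch of the idea (our illustration: it substitutes SciPy's fixed monotone PCHIP spline for the paper's parametrized Hermitian curve, so the tunable overshoot parameter is absent):

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

def rebin_conservative(edges_src, values, edges_dst):
    # Integrate the source histogram, interpolate the cumulative curve
    # with a monotone Hermite (PCHIP) spline, then difference it at the
    # target bin edges. A monotone cumulative curve rules out negative
    # bin contents and limits over-/undershoot.
    widths = np.diff(edges_src)
    cumulative = np.concatenate([[0.0], np.cumsum(values * widths)])
    F = PchipInterpolator(edges_src, cumulative)
    return np.diff(F(edges_dst)) / np.diff(edges_dst)
```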
Abstract:
Purpose: Development of an interpolation algorithm for re-sampling spatially distributed CT data with the following features: global and local integral conservation, avoidance of negative interpolation values for positive-definite datasets, and the ability to control re-sampling artifacts. Method and Materials: The interpolation can be separated into two steps: first, the discrete CT data have to be continuously distributed by an analytic function considering the boundary conditions. Generally, this function is determined by piecewise interpolation. Instead of using linear or high-order polynomial interpolations, which do not fulfill all the above-mentioned features, a special form of Hermitian curve interpolation is used to solve the interpolation problem with respect to the required boundary conditions. A single parameter is determined by which the behavior of the interpolation function is controlled. Second, the interpolated data have to be re-distributed with respect to the requested grid. Results: The new algorithm was compared with commonly used interpolation functions based on linear and second-order polynomials. It is demonstrated that these interpolation functions may over- or underestimate the source data by about 10%-20%, while the parameter of the new algorithm can be adjusted in order to significantly reduce these interpolation errors. Finally, the performance and accuracy of the algorithm were tested by re-gridding a series of X-ray CT images. Conclusion: Inaccurate sampling values may occur due to the lack of integral conservation. Re-sampling algorithms using high-order polynomial interpolation functions may result in significant artifacts in the re-sampled data. Such artifacts can be avoided by using the new algorithm based on Hermitian curve interpolation.
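As a quick check of the re-binning sketch shown after the previous abstract (same assumptions), global integral conservation holds because the differenced cumulative values telescope to the total integral:

```python
import numpy as np

# Toy data: a coarse 4-bin histogram re-sampled onto a finer 8-bin grid
# with the rebin_conservative sketch above.
edges_src = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
values = np.array([2.0, 5.0, 3.0, 1.0])            # bin densities
edges_dst = np.linspace(0.0, 4.0, 9)

resampled = rebin_conservative(edges_src, values, edges_dst)
total_src = np.sum(values * np.diff(edges_src))
total_dst = np.sum(resampled * np.diff(edges_dst))
assert np.isclose(total_src, total_dst)            # sums telescope to F(4) - F(0)
```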
Abstract:
When bivalent stimuli (i.e., stimuli with relevant features for two different tasks) occur occasionally among univalent stimuli, performance is slowed on subsequent univalent stimuli even if they have no overlapping stimulus features. This finding has been labeled the bivalency effect. It indexes an adjustment of cognitive control, but the underlying mechanism is not well understood yet. The purpose of the present study was to shed light on this question, using event-related potentials. We used a paradigm requiring predictable alternations between three tasks, with bivalent stimuli occasionally occurring on one task. The results revealed that the bivalency effect elicited a sustained parietal positivity and a frontal negativity, a neural signature that is typical of control processes implemented to resolve interference. We suggest that the bivalency effect reflects interference, which may be caused by episodic context binding.
Abstract:
We derive multiscale statistics for deconvolution in order to detect qualitative features of the unknown density. An important example covered within this framework is to test for local monotonicity on all scales simultaneously. We investigate the moderately ill-posed setting, where the Fourier transform of the error density in the deconvolution model is of polynomial decay. For multiscale testing, we consider a calibration motivated by the modulus of continuity of Brownian motion. We investigate the performance of our results from both a theoretical and a simulation-based point of view. A major consequence of our work is that the detection of qualitative features of a density in a deconvolution problem is a doable task, although the minimax rates for pointwise estimation are very slow.
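For orientation, the moderately ill-posed condition usually takes the following form (a sketch of the standard assumption; the paper's exact constants and conventions may differ):

```latex
\[
  c\,(1+\omega^2)^{-\beta/2}
  \;\le\; \bigl|\mathcal{F} f_\varepsilon(\omega)\bigr|
  \;\le\; C\,(1+\omega^2)^{-\beta/2},
  \qquad \omega \in \mathbb{R},\ \beta > 0,
\]
```

where f_ε is the error density and ℱ the Fourier transform; exponential decay in place of this polynomial bound would instead make the problem severely ill-posed.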
Abstract:
In this article, we develop the a priori and a posteriori error analysis of hp-version interior penalty discontinuous Galerkin finite element methods for strongly monotone quasi-Newtonian fluid flows in a bounded Lipschitz domain Ω ⊂ ℝ^d, d = 2, 3. In the latter case, computable upper and lower bounds on the error are derived in terms of a natural energy norm, which are explicit in the local mesh size and local polynomial degree of the approximating finite element method. A series of numerical experiments illustrates the performance of the proposed a posteriori error indicators within an automatic hp-adaptive refinement algorithm.
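Such computable upper and lower bounds typically take the reliability/efficiency form below (a generic sketch; the indicators η_K in the paper are specific to the quasi-Newtonian hp-dG setting):

```latex
\[
  \|u - u_h\|_E \;\lesssim\; \Bigl(\sum_{K \in \mathcal{T}_h} \eta_K^2\Bigr)^{1/2},
  \qquad
  \eta_K \;\lesssim\; \|u - u_h\|_{E,\omega_K} + \mathrm{osc}_K,
\]
```

with element indicators η_K depending explicitly on the local mesh size h_K and polynomial degree p_K, ω_K a patch of elements around K, and osc_K a data-oscillation term.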
Abstract:
We introduce and analyze hp-version discontinuous Galerkin (dG) finite element methods for the numerical approximation of linear second-order elliptic boundary-value problems in three-dimensional polyhedral domains. To resolve possible corner-, edge- and corner-edge singularities, we consider hexahedral meshes that are geometrically and anisotropically refined toward the corresponding neighborhoods. Similarly, the local polynomial degrees are increased linearly and possibly anisotropically away from singularities. We design interior penalty hp-dG methods and prove that they are well-defined for problems with singular solutions and stable under the proposed hp-refinements. We establish (abstract) error bounds that will allow us to prove exponential rates of convergence in the second part of this work.
Abstract:
The goal of this paper is to establish exponential convergence of hp-version interior penalty (IP) discontinuous Galerkin (dG) finite element methods for the numerical approximation of linear second-order elliptic boundary-value problems with homogeneous Dirichlet boundary conditions and piecewise analytic data in three-dimensional polyhedral domains. More precisely, we shall analyze the convergence of the hp-IP dG methods considered in [D. Schötzau, C. Schwab, T. P. Wihler, SIAM J. Numer. Anal., 51 (2013), pp. 1610-1633] based on axiparallel σ-geometric anisotropic meshes and s-linear anisotropic polynomial degree distributions.
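For readers unfamiliar with the terminology, the two parameters typically enter as follows (the isotropic hp-FEM prototype, sketched here as an illustration; the paper's definitions add direction-dependent anisotropy):

```latex
\[
  h_j \sim \sigma^{\,j}
  \quad \text{($\sigma$-geometric layer widths, $0 < \sigma < 1$)},
  \qquad
  p_j = \max\bigl(1, \lceil s\,j \rceil\bigr)
  \quad \text{($s$-linear degrees, $s > 0$)},
\]
```

where j = 0, 1, ..., ℓ counts mesh layers outward from the corner or edge singularity, so elements shrink geometrically toward the singularity while polynomial degrees grow linearly away from it.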