941 results for fixed point method


Relevance:

80.00%

Publisher:

Abstract:

It is not clear what a system for evidence-based common knowledge should look like if common knowledge is treated as a greatest fixed point. This paper is a preliminary step towards such a system. We argue that the standard induction rule is not well suited to axiomatizing evidence-based common knowledge. As an alternative, we study two different deductive systems for the logic of common knowledge. The first system makes use of an induction axiom, whereas the second is based on co-inductive proof theory. We prove soundness and completeness for both systems.
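
For orientation, in the standard (non-evidence-based) logic of common knowledge, Cφ is the greatest fixed point of the map sending X to E(φ ∧ X). A minimal sketch of the usual axiomatization follows; the paper's evidence-based systems refine this rather than adopt it verbatim:

```latex
% Standard greatest-fixed-point reading of common knowledge:
%   C\varphi \;\equiv\; \nu X.\, E(\varphi \wedge X)
% usually axiomatized by a co-closure (fixed-point) axiom plus an induction rule:
\begin{align*}
  &\text{(co-closure)} && C\varphi \rightarrow E(\varphi \wedge C\varphi)\\
  &\text{(induction)}  && \frac{\psi \rightarrow E(\varphi \wedge \psi)}{\psi \rightarrow C\varphi}
\end{align*}
```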

Relevance:

80.00%

Publisher:

Abstract:

We explore the nature of the bulk transition observed at strong coupling in the SU(3) gauge theory with Nf=12 fermions in the fundamental representation. The transition separates a weak coupling chirally symmetric phase from a strong coupling chirally broken phase and is compatible with the scenario where conformality is restored by increasing the flavour content of a non-abelian gauge theory. We explore the intriguing possibility that the observed bulk transition is associated with the occurrence of an ultraviolet fixed point (UVFP) at strong coupling, where a new theory emerges in the continuum.
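
For context on the conformal-window reasoning behind "conformality is restored by increasing the flavour content", the sketch below (standard two-loop MS-bar coefficients, Python standard library only) locates the perturbative infrared zero of the SU(3) beta function as Nf grows. This is textbook perturbation theory, not the paper's lattice analysis, and it says nothing directly about the conjectured ultraviolet fixed point at strong coupling.

```python
import math

def two_loop_ir_fixed_point(nf, nc=3):
    """Return alpha* at the two-loop IR zero for SU(nc) with nf fundamental flavours, or None."""
    b0 = 11.0 * nc / 3.0 - 2.0 * nf / 3.0
    b1 = 34.0 * nc**2 / 3.0 - (10.0 * nc / 3.0 + (nc**2 - 1.0) / nc) * nf
    if b0 > 0 and b1 < 0:                 # asymptotically free, with a two-loop IR zero
        a_star = -b0 / b1                 # a = g^2 / (16 pi^2)
        return 4.0 * math.pi * a_star     # alpha* = g*^2 / (4 pi)
    return None

# for SU(3), a two-loop IR fixed point appears once Nf is large enough (e.g. Nf = 12)
for nf in range(8, 17):
    print(nf, two_loop_ir_fixed_point(nf))
```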

Relevance:

80.00%

Publisher:

Abstract:

The main objective of this paper is to discuss various aspects of implementing a specific intrusion-detection scheme on a micro-computer system using fixed-point arithmetic. The proposed scheme is suited to detecting intruder stimuli that take the form of transient signals. It consists of two stages: an adaptive digital predictor and an adaptive threshold-detection algorithm. Experimental results involving data acquired via field experiments are also included.
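
The abstract does not give the predictor or threshold equations, so the following is only a generic sketch of the idea: an LMS-type adaptive linear predictor implemented in Q15 fixed-point arithmetic, flagging samples whose prediction error exceeds an adaptive threshold based on a running RMS of the error. The Q15 format, filter order, step size, and threshold factor are all illustrative assumptions, not the paper's design.

```python
import numpy as np

Q = 15                                   # Q15 fixed point: stored integer = value * 2**15
SCALE = 1 << Q

def to_q15(x):
    """Quantize a float signal assumed to lie in (-1, 1) to Q15 integers."""
    return np.round(np.clip(x, -0.999, 0.999) * SCALE).astype(np.int64)

def lms_predict_detect(signal, order=4, mu=0.05, k=4.0):
    """Generic sketch: adaptive linear predictor in Q15 arithmetic with an adaptive
    threshold (k times a running RMS of the prediction error)."""
    x = to_q15(signal)
    w = np.zeros(order, dtype=np.int64)              # predictor weights (Q15)
    mu_q = int(round(mu * SCALE))                    # step size (Q15)
    err_pow = 0                                      # running error power (Q15 of amplitude^2)
    alarms = []
    for n in range(order, len(x)):
        past = x[n - order:n][::-1]
        y_hat = int(w @ past) >> Q                   # Q15*Q15 product rescaled back to Q15
        e = int(x[n]) - y_hat                        # prediction error (Q15)
        err_pow += (((e * e) >> Q) - err_pow) >> 5   # leaky integrator of the error power
        thresh = int(k * np.sqrt(max(err_pow, 1) * SCALE))
        if abs(e) > thresh:
            alarms.append(n)                         # candidate transient / intruder stimulus
        w += (mu_q * e * past) >> (2 * Q)            # LMS update, rescaled to stay in Q15
        np.clip(w, -SCALE + 1, SCALE - 1, out=w)
    return alarms
```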

Relevance:

80.00%

Publisher:

Abstract:

We investigate the interplay of smoothness and monotonicity assumptions when estimating a density from a sample of observations. The nonparametric maximum likelihood estimator of a decreasing density on the positive half line attains a rate of convergence of n^{1/3} at a fixed point if the density has a negative derivative. The same rate is obtained by a kernel estimator, but the limit distributions are different. If the density is both differentiable and known to be monotone, a third estimator is obtained by isotonization of a kernel estimator. We show that this again attains the n^{1/3} rate of convergence and compare the limit distributions of the three types of estimators. Both isotonization and smoothing lead to a more concentrated limit distribution, and we study the dependence on the proportionality constant in the bandwidth. We also show that isotonization does not change the limit behavior of a kernel estimator with a larger bandwidth, in the case that the density is known to have more than one derivative.
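
A minimal numerical sketch of the third estimator mentioned above (isotonization of a kernel estimator), assuming scipy and scikit-learn: the Gaussian kernel estimate is evaluated on a grid over the positive half line and then projected onto decreasing functions. This illustrates the construction only; it does not reproduce the paper's asymptotic analysis, and the bandwidth is arbitrary.

```python
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=500)        # sample from a decreasing density on [0, inf)

grid = np.linspace(0.0, np.quantile(x, 0.99), 400)
kde = gaussian_kde(x, bw_method=0.3)            # kernel estimator (bandwidth is illustrative)
f_kernel = kde(grid)

# isotonization: L2 projection of the kernel estimate onto decreasing functions on the grid
iso = IsotonicRegression(increasing=False)
f_isotonized = iso.fit_transform(grid, f_kernel)

# the isotonized estimate is non-increasing on the grid by construction
assert np.all(np.diff(f_isotonized) <= 1e-12)
```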

Relevance:

80.00%

Publisher:

Abstract:

The amount and type of ground cover is an important characteristic to measure when collecting soil disturbance monitoring data after a timber harvest. Estimates of ground cover and bare soil can be used for tracking changes in invasive species, plant growth and regeneration, woody debris loadings, and the risk of surface water runoff and soil erosion. A new method of assessing ground cover and soil disturbance was recently published by the U.S. Forest Service, the Forest Soil Disturbance Monitoring Protocol (FSDMP). This protocol uses the frequency of cover types in small circular (15cm) plots to compare ground surface in pre- and post-harvest condition. While both frequency and percent cover are common methods of describing vegetation, frequency has rarely been used to measure ground surface cover. In this study, three methods for assessing ground cover percent (step-point, 15cm dia. circular and 1x5m visual plot estimates) were compared to the FSDMP frequency method. Results show that the FSDMP method provides significantly higher estimates of ground surface condition for most soil cover types, except coarse wood. The three cover methods had similar estimates for most cover values. The FSDMP method also produced the highest value when bare soil estimates were used to model erosion risk. In a person-hour analysis, estimating ground cover percent in 15cm dia. plots required the least sampling time, and provided standard errors similar to the other cover estimates even at low sampling intensities (n=18). If ground cover estimates are desired in soil monitoring, then a small plot size (15cm dia. circle), or a step-point method can provide a more accurate estimate in less time than the current FSDMP method.

Relevance:

80.00%

Publisher:

Abstract:

This doctoral thesis presents computational work, and its synthesis with experiments, for internal (tube and channel geometries) as well as external (flow of a pure vapor over a horizontal plate) condensing flows. The computational work obtains accurate numerical simulations of the full two-dimensional governing equations for steady and unsteady condensing flows in gravity and zero-gravity (0g) environments. The thesis investigates flow features, flow regimes, attainability issues, stability issues, and responses to boundary fluctuations for condensing flows in different flow situations. This research finds new features of unsteady solutions of condensing flows, reveals interesting differences between gravity-driven and shear-driven situations, and discovers novel boundary-condition sensitivities of shear-driven internal condensing flows. The synthesis of computational and experimental results presented here for gravity-driven in-tube flows lays the framework for future two-phase component analysis in any thermal system. It is shown for both gravity-driven and shear-driven internal condensing flows that the steady governing equations have unique solutions for a given inlet pressure, a given inlet vapor mass flow rate, and a fixed cooling method for the condensing surface. The unsteady equations of shear-driven internal condensing flows, however, can yield different “quasi-steady” solutions based on different specifications of exit pressure (equivalently, exit mass flow rate) applied concurrently with the inlet pressure specification. This thesis presents a novel categorization of internal condensing flows based on their sensitivity to concurrently applied boundary (inlet and exit) conditions. The computational investigations of an external shear-driven flow of vapor condensing over a horizontal plate show the limits of applicability of the analytical solution. Simulations of this external condensing flow address its stability and shed light on flow-regime transitions caused by ever-present bottom-wall vibrations; in particular, the laminar-to-turbulent transition for these flows can be affected by such vibrations. Detailed dynamic stability analysis of this shear-driven external condensing flow leads to the introduction of a new variable that characterizes the ratio of the strength of the underlying stabilizing attractor to that of the destabilizing vibrations. Besides the development of CFD tools and computational algorithms, a direct application of the research done for this thesis is the effective prediction and design of two-phase components in thermal systems used in different applications. Some of the important internal condensing flow results concerning sensitivities to boundary fluctuations are also expected to apply to flow boiling. The novel flow sensitivities discovered through this research, if employed effectively after system-level analysis, will support the development of better control strategies in ground-based and space-based two-phase thermal systems.

Relevance:

80.00%

Publisher:

Abstract:

Using a convenient and fast HPLC procedure, we determined serum concentrations of the fungistatic agent 5-fluorocytosine (5-FC) in 375 samples from 60 patients treated with this drug. The mean trough concentration (n = 127) was 64.3 mg/l (range: 11.8-208.0 mg/l), the mean peak concentration (n = 122) was 99.9 mg/l (range: 25.6-263.8 mg/l), and the mean nonpeak/nontrough concentration (n = 126) was 80.1 mg/l (range: 10.5-268.0 mg/l). In total, 134 (35.7%) samples were outside the therapeutic range (25-100 mg/l): 108 (28.8%) were too high and 26 (6.9%) too low. Forty-four (73%) patients showed 5-FC serum concentrations outside the therapeutic range at least once during the treatment course. In a prospective study we performed 65 dosage predictions on 30 patients using a 3-point method previously developed for aminoglycoside dosage adaptation. The mean prediction error of the dosage adaptation was +0.7 mg/l (range: -26.0 to +28.0 mg/l). The root mean square prediction error was 10.7 mg/l. The mean predicted concentration (65.3 mg/l) agreed very well with the mean measured concentration (64.6 mg/l). The frequency distribution of 5-FC serum concentrations indicates that 5-FC monitoring is important. The applied pharmacokinetic method allows individual adaptation of the 5-FC dosage with a clinically acceptable prediction error.
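
The abstract does not describe the 3-point method itself, so the following is only a generic, hypothetical sketch of pharmacokinetic dose adaptation from a few timed levels, assuming linear one-compartment kinetics. The dosing interval, target trough, and measured values are invented for illustration; this is not the authors' algorithm.

```python
import numpy as np

def adapt_dose(times, concs, current_dose, target_trough=50.0, tau=6.0):
    """Generic sketch: fit a log-linear elimination phase to a few timed levels,
    predict the trough, and scale the dose proportionally toward a target trough.
    times: sampling times (h) within one dosing interval; concs: measured levels (mg/l)."""
    times, concs = np.asarray(times, float), np.asarray(concs, float)
    slope, intercept = np.polyfit(times, np.log(concs), 1)   # ln C = ln C0 - ke * t
    ke, c0 = -slope, np.exp(intercept)
    predicted_trough = c0 * np.exp(-ke * tau)
    # dose-proportional (linear kinetics) scaling, rounded to a practical increment
    new_dose = current_dose * target_trough / predicted_trough
    return ke, predicted_trough, round(new_dose / 250) * 250

# three illustrative levels drawn after a 2000 mg dose given every 6 h
print(adapt_dose(times=[1.0, 3.0, 5.0], concs=[110.0, 85.0, 66.0], current_dose=2000))
```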

Relevance:

80.00%

Publisher:

Abstract:

Rationale: Focal onset epileptic seizures are due to abnormal interactions between distributed brain areas. By estimating the cross-correlation matrix of multi-site intra-cerebral EEG recordings (iEEG), one can quantify these interactions. To assess the topology of the underlying functional network, the binary connectivity matrix has to be derived from the cross-correlation matrix by use of a threshold. Classically, a unique threshold is used, which constrains the topology [1]. Our method sets the threshold in a data-driven way by separating genuine from random cross-correlation. We compare our approach to the fixed-threshold method and study the dynamics of the functional topology.
Methods: We investigate the iEEG of patients suffering from focal onset seizures who underwent evaluation for the possibility of surgery. The equal-time cross-correlation matrices are evaluated using a sliding time window. We then compare three approaches to building the corresponding binary networks. For each time window:
* Our parameter-free method derives from the cross-correlation strength matrix (CCS) [2]. It aims at disentangling genuine from random correlations (due to the finite length and varying frequency content of the signals). In practice, a threshold is evaluated for each pair of channels independently, in a data-driven way.
* The fixed mean degree (FMD) approach uses a unique threshold on the whole connectivity matrix so as to ensure a user-defined mean degree.
* The varying mean degree (VMD) approach uses the mean degree of the CCS network to set a unique threshold for the entire connectivity matrix.
* Finally, the connectivity (c), the connectedness (given by k, the number of disconnected sub-networks), and the mean global and local efficiencies (Eg and El, respectively) are computed from the FMD, CCS, and VMD networks and from their corresponding random and lattice networks.
Results: Compared to FMD and VMD, CCS networks present:
* topologies that differ in terms of c, k, Eg, and El;
* from the pre-ictal to the ictal and then the post-ictal period, time courses of the topological features that are more stable within a period and more contrasted from one period to the next.
For CCS, pre-ictal connectivity is low, increases to a high level during the seizure, then decreases at offset. k shows a "U-curve", underlining the synchronization of all electrodes during the seizure. The Eg and El time courses fluctuate between the values of the corresponding random and lattice networks in a reproducible manner.
Conclusions: The definition of a data-driven threshold provides new insights into the topology of the epileptic functional networks.
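
A minimal numerical sketch of the FMD baseline described above, assuming numpy and networkx: the equal-time cross-correlation matrix of one sliding window is thresholded with a single global threshold chosen to reach a user-defined mean degree, and the usual graph measures are read off. The per-pair, data-driven CCS threshold (the paper's contribution) is not reproduced here, and random data stands in for an iEEG window.

```python
import numpy as np
import networkx as nx

def fmd_binary_network(window, target_mean_degree=5):
    """FMD-style thresholding of one window (channels x samples): a unique global
    threshold is picked so that the binary graph has (about) the requested mean degree."""
    n_ch = window.shape[0]
    corr = np.abs(np.corrcoef(window))           # equal-time cross-correlation matrix
    np.fill_diagonal(corr, 0.0)
    n_edges = int(round(target_mean_degree * n_ch / 2))
    triu = corr[np.triu_indices(n_ch, k=1)]
    threshold = np.sort(triu)[-n_edges]          # unique global threshold
    adj = (corr >= threshold).astype(int)
    np.fill_diagonal(adj, 0)
    return nx.from_numpy_array(adj), threshold

# toy usage: 20 "channels" of random data standing in for one sliding window of iEEG
rng = np.random.default_rng(1)
G, thr = fmd_binary_network(rng.standard_normal((20, 512)))
print(nx.number_connected_components(G),         # k: connectedness
      nx.global_efficiency(G),                   # Eg
      nx.local_efficiency(G))                    # El
```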

Relevance:

80.00%

Publisher:

Abstract:

We construct holomorphic families of proper holomorphic embeddings of \mathbb{C}^k into \mathbb{C}^n (0 < k < n-1), so that for any two different parameters in the family, no holomorphic automorphism of \mathbb{C}^n can map the image of the corresponding two embeddings onto each other. As an application to the study of the group of holomorphic automorphisms of \mathbb{C}^n, we derive the existence of families of holomorphic \mathbb{C}^*-actions on \mathbb{C}^n (n \ge 5) so that different actions in the family are not conjugate. This result is surprising in view of the long-standing holomorphic linearization problem, which, in particular, asked whether there would be more than one conjugacy class of \mathbb{C}^*-actions on \mathbb{C}^n (with prescribed linear part at a fixed point).

Relevance:

80.00%

Publisher:

Abstract:

PURPOSE: Therapeutic drug monitoring of patients receiving once-daily aminoglycoside therapy can be performed using pharmacokinetic (PK) formulas or Bayesian calculations. While these methods produced comparable results, their performance has never been checked against full PK profiles. We performed a PK study in order to compare both methods and to determine the best time-points to estimate AUC0-24 and peak concentrations (Cmax). METHODS: We obtained full PK profiles in 14 patients receiving once-daily aminoglycoside therapy. PK parameters were calculated with PKSolver using non-compartmental methods. The calculated PK parameters were then compared with parameters estimated using an algorithm based on two serum concentrations (two-point method) or the software TCIWorks (Bayesian method). RESULTS: For tobramycin and gentamicin, AUC0-24 and Cmax could be reliably estimated using a first serum concentration obtained at 1 h and a second one between 8 and 10 h after the start of the infusion. The two-point and the Bayesian method produced similar results. For amikacin, AUC0-24 could be reliably estimated by both methods; Cmax was underestimated by 10-20% by the two-point method and by up to 30%, with large variation, by the Bayesian method. CONCLUSIONS: The ideal time-points for therapeutic drug monitoring of once-daily administered aminoglycosides are 1 h after the start of a 30-min infusion for the first time-point and 8-10 h after the start of the infusion for the second time-point. The duration of the infusion and accurate registration of the time-points of blood drawing are essential for obtaining precise predictions.
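
As an illustration of the kind of two-point, one-compartment calculation involved (a hedged sketch, not necessarily the exact algorithm validated in the study): the elimination rate constant is fitted from the two recommended sampling times, the peak is back-extrapolated to the end of the 30-min infusion, and AUC0-24 follows from the monoexponential profile. The concentrations in the example are invented.

```python
import numpy as np

def two_point_once_daily(c1, t1, c2, t2, t_inf_end=0.5, tau=24.0):
    """Generic two-point, one-compartment estimate of Cmax and AUC0-24 for a
    once-daily 30-min infusion. c1, c2: concentrations (mg/L) at times t1 < t2 (h)
    after the start of the infusion, both in the elimination phase."""
    ke = np.log(c1 / c2) / (t2 - t1)              # elimination rate constant (1/h)
    c_max = c1 * np.exp(ke * (t1 - t_inf_end))    # back-extrapolate to end of infusion
    c_trough = c_max * np.exp(-ke * (tau - t_inf_end))
    # monoexponential AUC over the dosing interval, neglecting the brief infusion phase
    auc_0_24 = (c_max - c_trough) / ke
    t_half = np.log(2) / ke
    return ke, t_half, c_max, auc_0_24

# example with the time-points the study recommends (1 h and 8-10 h after start of infusion)
print(two_point_once_daily(c1=18.0, t1=1.0, c2=2.5, t2=9.0))
```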

Relevance:

80.00%

Publisher:

Abstract:

A new research project has quite recently been launched to clarify how systems of second-order set theory extending NBG (as well as systems of (n+3)-th order number theory extending the so-called Bernays−Gödel expansion of full (n+2)-th order number theory, etc.) differ from systems of second-order number theory extending ACA_0. In this article, we establish the equivalence between Δ^0_1-LFP and Δ^0_1-FP, which assert the existence of a least and of a (not necessarily least) fixed point, respectively, for positive elementary operators (and likewise between Δ^0_{n+2}-LFP and Δ^0_{n+2}-FP). Our proof also shows the equivalence between ID_1 and \widehat{ID}_1, both of which are defined in the standard way but with the starting theory PA replaced by ZFC (or by full (n+2)-th order number theory with global well-ordering).
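
Schematically, for an X-positive elementary formula φ(x, X) with induced operator Γ_φ(X) = { x : φ(x, X) }, the two axioms compared here can be stated as follows; this is the usual formulation, and the exact rendering over NBG may differ in detail:

```latex
% FP: some fixed point exists;  LFP: a least fixed point exists
% (leastness stated via closed sets, for a monotone operator \Gamma_\varphi).
\begin{align*}
  \Delta^0_1\text{-FP}:  &\quad \exists X\,\bigl(\Gamma_\varphi(X) = X\bigr)\\[2pt]
  \Delta^0_1\text{-LFP}: &\quad \exists X\,\Bigl(\Gamma_\varphi(X) = X \;\wedge\;
                           \forall Y\,\bigl(\Gamma_\varphi(Y) \subseteq Y \rightarrow X \subseteq Y\bigr)\Bigr)
\end{align*}
```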

Relevance:

80.00%

Publisher:

Abstract:

Paper 1: Pilot study of Swiss firms. Using a fixed-effects approach, we investigate whether the presence of specific individuals on Swiss firms’ boards affects firm performance and the policy choices the firms make. We find evidence for a substantial impact of these directors’ presence on their firms. Moreover, the director effects are correlated across policies and performance measures but uncorrelated with the directors’ backgrounds. We find these results interesting but conclude that they should be substantiated on a dataset that is larger and better understood by researchers. Further tests are also required to rule out methodological concerns.
Paper 2: Evidence from the S&P 1,500. We ask whether directors on corporate boards contribute to firm performance as individuals. From the universe of the S&P 1,500 firms since 1996 we track 2,062 directors who serve on multiple boards over extended periods of time. Our initial findings suggest that the presence of these directors is associated with substantial performance shifts (director fixed effects). Closer examination shows that these effects are statistical artifacts, and we conclude that directors are largely fungible. Moreover, we contribute to the discussion of the fixed-effects method; in particular, we highlight that the selection of the randomization method is pivotal when generating placebo benchmarks (see the sketch below).
Paper 3: Robustness, statistical power, and important directors. This article provides a better understanding of Senn’s (2014) findings: the outcome that individual directors are unrelated to firm performance proves robust against different estimation models and testing strategies. By looking at CEOs, the statistical power of the placebo benchmarking test is evaluated. We find that only the stronger tests are able to detect CEO fixed effects; however, these tests are not suitable for analyzing directors. The suitable tests would detect director effects if the interquartile range of the true effects amounted to 3 percentage points of ROA. As Senn (2014) finds no such effects for outside directors in general, we focus on groups of particularly important directors (e.g., COBs, non-busy directors, successful directors). Overall, our evidence suggests that the members of these groups are not individually associated with firm performance either. Thus, we confirm that individual directors are largely fungible; if an individual has an effect on performance, it is of small magnitude.
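
Purely as an illustration of the placebo-benchmark idea discussed in Papers 2 and 3 (not the authors' implementation), the sketch below, assuming pandas and statsmodels and a small synthetic panel, compares the incremental R² of director dummies against a null distribution obtained by reshuffling director identities. The naive row-wise permutation used here is exactly the kind of design choice the papers flag as pivotal.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)

# hypothetical toy panel: firm-years with ROA and a tracked director per firm-year,
# with directors rotating across firms so their fixed effects are identified
n_firms, n_years = 120, 12
panel = pd.DataFrame({
    "firm": np.repeat(np.arange(n_firms), n_years),
    "year": np.tile(np.arange(n_years), n_firms),
})
panel["director"] = (panel["firm"] + panel["year"] // 4) % 40
panel["roa"] = rng.normal(0.05, 0.03, len(panel))     # no true director effect by construction

def incremental_r2(df):
    """R-squared gained by adding director dummies on top of firm and year fixed effects."""
    base = smf.ols("roa ~ C(firm) + C(year)", data=df).fit()
    full = smf.ols("roa ~ C(firm) + C(year) + C(director)", data=df).fit()
    return full.rsquared - base.rsquared

observed = incremental_r2(panel)

# placebo benchmark: reshuffle director identities row-wise and rebuild the null distribution
placebo = []
for _ in range(20):
    shuffled = panel.copy()
    shuffled["director"] = rng.permutation(shuffled["director"].values)
    placebo.append(incremental_r2(shuffled))

print(f"observed ΔR² = {observed:.4f}, placebo 95th pct = {np.quantile(placebo, 0.95):.4f}")
```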

Relevance:

80.00%

Publisher:

Abstract:

We study the relativistic version of the Schrödinger equation for a point particle in one dimension with a potential given by the first derivative of the delta function. The momentum cutoff regularization is used to study the bound state and scattering states. The initial calculations show that the reciprocal of the bare coupling constant is ultraviolet divergent, and the resultant expression cannot be renormalized in the usual sense, where the divergent terms can just be omitted. Therefore, a general procedure has been developed to derive different physical properties of the system. The procedure is used first in the nonrelativistic case for the purpose of clarification and comparison. For the relativistic case, the results show that this system behaves exactly like the delta function potential, which means that this system also shares features with quantum field theories, like being asymptotically free. In addition, in the massless limit, it undergoes dimensional transmutation, and it possesses an infrared conformal fixed point. The comparison of the solution with the relativistic delta function potential solution shows evidence of universality.

Relevance:

80.00%

Publisher:

Abstract:

We present new algorithms for M-estimators of multivariate scatter and location and for symmetrized M-estimators of multivariate scatter. The new algorithms are considerably faster than currently used fixed-point and related algorithms. The main idea is to utilize a second order Taylor expansion of the target functional and to devise a partial Newton-Raphson procedure. In connection with symmetrized M-estimators we work with incomplete U-statistics to accelerate our procedures initially.
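
For context, the following is a sketch of the kind of classical fixed-point iteration the abstract refers to as the baseline, here with multivariate-t weights u(s) = (p + ν)/(ν + s) and assuming numpy; the paper's contribution, a partial Newton-Raphson scheme based on a second-order Taylor expansion, is not reproduced here.

```python
import numpy as np

def m_estimator_fixed_point(X, nu=3.0, tol=1e-8, max_iter=500):
    """Classical fixed-point iteration for an M-estimator of multivariate location and
    scatter with t-type weights u(s) = (p + nu) / (nu + s)."""
    n, p = X.shape
    mu = X.mean(axis=0)
    Sigma = np.cov(X, rowvar=False)
    for _ in range(max_iter):
        diff = X - mu
        d = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(Sigma), diff)  # Mahalanobis distances
        w = (p + nu) / (nu + d)                                          # data-dependent weights
        mu_new = (w[:, None] * X).sum(axis=0) / w.sum()
        diff = X - mu_new
        Sigma_new = (w[:, None] * diff).T @ diff / n
        converged = (np.abs(Sigma_new - Sigma).max() < tol
                     and np.abs(mu_new - mu).max() < tol)
        mu, Sigma = mu_new, Sigma_new
        if converged:
            break
    return mu, Sigma

# toy usage on heavy-tailed data
rng = np.random.default_rng(0)
X = rng.standard_t(df=3, size=(500, 4))
mu_hat, Sigma_hat = m_estimator_fixed_point(X)
```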

Relevance:

80.00%

Publisher:

Abstract:

We calculate the anomalous dimensions of operators with large global charge J in certain strongly coupled conformal field theories in three dimensions, such as the O(2) model and the supersymmetric fixed point with a single chiral superfield and a W = \Phi^3 superpotential. Working in a 1/J expansion, we find that the large-J sector of both examples is controlled by a conformally invariant effective Lagrangian for a Goldstone boson of the global symmetry. For both these theories, we find that the lowest state with charge J is always a scalar operator whose dimension \Delta_J satisfies the sum rule J^2 \Delta_J - (J^2/2 + J/4 + 3/16)\Delta_{J-1} - (J^2/2 - J/4 + 3/16)\Delta_{J+1} = 0.04067 up to corrections that vanish at large J. The spectrum of low-lying excited states is also calculable explicitly: for example, the second-lowest primary operator has spin two and dimension \Delta_J + \sqrt{3}. In the supersymmetric case, the dimensions of all half-integer-spin operators lie above the dimensions of the integer-spin operators by a gap of order J^{+1/2}. The propagation speeds of the Goldstone waves and heavy fermions are 1/\sqrt{2} and \pm 1/2 times the speed of light, respectively. These values, including the negative one, are necessary for the consistent realization of the superconformal symmetry at large J.
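
As a sanity check on the sum rule as reconstructed above (assuming sympy), one can verify that the combination annihilates the leading J^{3/2} and J^{1/2} terms of the large-J expansion of \Delta_J, so only J-independent contributions survive at large J:

```python
import sympy as sp

J = sp.symbols("J", positive=True)

def sum_rule(delta):
    """Left-hand side of the (reconstructed) sum rule for a trial profile delta(J)."""
    return sp.together(
        J**2 * delta(J)
        - (J**2 / 2 + J / 4 + sp.Rational(3, 16)) * delta(J - 1)
        - (J**2 / 2 - J / 4 + sp.Rational(3, 16)) * delta(J + 1)
    )

# the leading large-J terms of Delta_J are annihilated ...
print(sp.limit(sum_rule(lambda x: x**sp.Rational(3, 2)), J, sp.oo))   # -> 0
print(sp.limit(sum_rule(lambda x: sp.sqrt(x)), J, sp.oo))             # -> 0
# ... while a J-independent piece c0 would contribute -3*c0/8
print(sp.limit(sum_rule(lambda x: 1), J, sp.oo))                      # -> -3/8
```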