48 results for Asymptotic Formulas
Abstract:
BACKGROUND/AIMS: Supplementation with certain probiotics can improve gut microbial flora and immune function but should not have adverse effects. This study aimed to assess the risk of D-lactate accumulation and subsequent metabolic acidosis in infants fed on formula containing Lactobacillus johnsonii (La1). METHODS: In the framework of a double-blind, randomized controlled trial enrolling 71 infants aged 4-5 months, morning urine samples were collected before and 4 weeks after the infants were fed formula with or without La1 (1 × 10⁸/g powder) or were breastfed. Urinary D- and L-lactate concentrations were assayed by enzymatic, fluorimetric methods, and excretion was normalized per mol creatinine. RESULTS: At baseline, no significant differences in urinary D-/L-lactate excretion were found among the formula-fed and breastfed groups. After 4 weeks, D-lactate excretion did not differ between the two formula groups, but was higher in both formula groups than in breastfed infants. In all infants receiving La1, urinary D-lactate concentrations remained within the ranges of age-matched healthy infants that had been determined in an earlier study using the same analytical method. Urinary L-lactate also did not vary over time or among groups. CONCLUSIONS: Supplementation of formula with La1 did not affect urinary lactate excretion, and there is no evidence of an increased risk of lactic acidosis.
Abstract:
Introduction: ICM+ software encapsulates our 20 years' experience in brain monitoring. It collects data from a variety of bedside monitors and produces time trends of parameters defined using configurable mathematical formulae. To date, it is used in nearly 40 clinical research centres worldwide. We present its application for continuous monitoring of cerebral autoregulation using near-infrared spectroscopy (NIRS). Methods: Data from multiple bedside monitors are processed by ICM+ in real time using a large selection of signal processing methods. These include various time and frequency domain analysis functions as well as fully customisable digital filters. The final results are displayed in a variety of ways, including simple time trends as well as time-window-based histograms, cross histograms, correlations, and so forth. All this allows complex information from bedside monitors to be summarized in a concise fashion and presented to medical and nursing staff in a simple way that alerts them to the development of various pathological processes. Results: One hundred and fifty patients monitored continuously with NIRS, arterial blood pressure (ABP) and, where available, intracranial pressure (ICP) were included in this study. There were 40 severely head-injured adult patients and 27 SAH patients (NCCU, Cambridge); 60 patients undergoing cardiopulmonary bypass (Johns Hopkins Hospital, Baltimore); and 23 patients with sepsis (University Hospital, Basel). In addition, MCA flow velocity (FV) was monitored intermittently using transcranial Doppler. FV-derived and ICP-derived pressure reactivity indices (PRx, Mx), as well as NIRS-derived reactivity indices (Cox, Tox, Thx), were calculated and showed significant correlation with each other in all cohorts. Error-bar charts showing the reactivity index PRx versus CPP (optimal CPP chart), as well as similar curves for NIRS indices versus CPP and ABP, were also demonstrated. Conclusions: ICM+ software is proving to be a very useful tool for enhancing the battery of available means for monitoring cerebral vasoreactivity and potentially facilitating autoregulation-guided therapy. The complexity of the data analysis is hidden inside loadable profiles, allowing investigators to take full advantage of validated protocols, including advanced processing formulas.
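Reactivity indices of the PRx/Mx/Cox family are, in the published literature on this approach, moving Pearson correlation coefficients between slow-wave averages of two modalities (e.g., ABP and ICP for PRx). The sketch below shows the general idea in Python; the 10-second averaging period and 5-minute window are illustrative assumptions, not the ICM+ configuration.

```python
import numpy as np

def reactivity_index(abp, icp, sample_period=10.0, window=300.0):
    """Moving-correlation reactivity index (PRx-style sketch).

    abp, icp      : arrays of 10-second averages (slow waves) of two signals
    sample_period : seconds per sample (assumed averaging period)
    window        : correlation window in seconds (assumed 5 minutes)
    """
    abp, icp = np.asarray(abp, float), np.asarray(icp, float)
    n = int(window // sample_period)                # samples per window
    out = np.full(len(abp), np.nan)
    for i in range(n, len(abp) + 1):
        a, b = abp[i - n:i], icp[i - n:i]
        if a.std() > 0 and b.std() > 0:
            out[i - 1] = np.corrcoef(a, b)[0, 1]    # Pearson r in [-1, 1]
    return out
```

Values near +1 indicate pressure-passive (impaired) vasoreactivity, while values near or below zero indicate intact reactivity.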
Abstract:
An Actively Heated Fiber Optics (AHFO) method to estimate soil moisture is tested, and the analysis technique is improved upon. The measurements were performed in a lysimeter uniformly packed with loam soil with variable water content profiles. In the first meter of the soil profile, 30 m of fiber optic cable were installed in a coil of 12 loops. The metal sheath armoring the fiber cable was used as an electrical resistance heater to generate a heat pulse, and the soil response was monitored with a Distributed Temperature Sensing (DTS) system. We study the cooling following three continuous heat pulses of 120 s at 36 W m⁻¹ by means of the long-time approximation of radial heat conduction. The soil volumetric water contents were then inferred from the estimated thermal conductivities through a specifically calibrated model relating thermal conductivity and volumetric water content. To use the pre-asymptotic data, we employed a time correction that allowed the volumetric water content to be estimated with a precision of 0.01-0.035 m³ m⁻³. A comparison of the AHFO measurements with soil-moisture measurements obtained with calibrated capacitance-based probes gave good agreement for wetter soils (the discrepancy between the two methods was less than 0.04 m³ m⁻³). In the shallow, drier soils, the AHFO method underestimated the volumetric water content because of the longer time required for the temperature increment to become asymptotic in less thermally conductive media (the discrepancy between the two methods was larger than 0.1 m³ m⁻³). The present work suggests that future applications of the AHFO method should use longer heat pulses, analyze longer heating and cooling events, and ideally measure temperature increments at higher frequency.
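In the long-time approximation for a line heat source, the temperature rise grows linearly in ln(t) with slope q/(4πλ), so fitting that slope yields the thermal conductivity λ. A minimal Python sketch follows, written for the heating phase; the cooling-phase analysis in the study uses the analogous superposed solution, and the calibrated model relating λ to volumetric water content is not reproduced here.

```python
import numpy as np

def thermal_conductivity_from_heating(t, dT, q):
    """Estimate thermal conductivity from the late-time temperature rise
    of a line heat source: dT ~ (q / (4*pi*lam)) * ln(t) + C.

    t  : times since heating started (s), late-time portion only
    dT : temperature rise above ambient (K)
    q  : heating power per unit cable length (W/m), e.g. 36.0 here
    """
    slope, _ = np.polyfit(np.log(t), dT, 1)   # linear fit of dT vs ln(t)
    lam = q / (4.0 * np.pi * slope)           # W m-1 K-1
    return lam
```

The estimated λ would then be mapped to volumetric water content through the study's calibrated λ-θ relation.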
Abstract:
Robust estimators for accelerated failure time models with asymmetric (or symmetric) error distributions and censored observations are proposed. It is assumed that the error model belongs to a log-location-scale family of distributions and that the mean response is the parameter of interest. Since scale is a main component of the mean, scale is not treated as a nuisance parameter. A three-step procedure is proposed. In the first step, an initial high-breakdown-point S-estimate is computed. In the second step, observations that are unlikely under the estimated model are rejected or downweighted. Finally, a weighted maximum likelihood estimate is computed. To define the estimates, functions of censored residuals are replaced by their estimated conditional expectation given that the response is larger than the observed censored value. The rejection rule in the second step is based on an adaptive cut-off that, asymptotically, does not reject any observation when the data are generated according to the model. Therefore, the final estimate attains full efficiency at the model, with respect to the maximum likelihood estimate, while maintaining the breakdown point of the initial estimator. Asymptotic results are provided. The new procedure is evaluated with the help of Monte Carlo simulations. Two examples with real data are discussed.
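The key device in this abstract is replacing a function ψ of a censored residual by its conditional expectation E[ψ(R) | R > r], where r is the standardized censored residual. A minimal sketch for a standard normal error model (an illustrative assumption; the paper works with general log-location-scale families):

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def conditional_score(r_cens, psi=lambda s: s):
    """E[psi(R) | R > r_cens] for a standard normal error model.

    Censored residuals are replaced by this conditional expectation,
    i.e. the score is averaged over the region consistent with the
    observation being larger than the censored value.
    """
    surv = norm.sf(r_cens)                                  # P(R > r_cens)
    num, _ = quad(lambda s: psi(s) * norm.pdf(s), r_cens, np.inf)
    return num / surv
```

For ψ(s) = s this reduces to the inverse Mills ratio φ(r) / (1 − Φ(r)), which gives a quick check of the implementation.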
Abstract:
Fetal MRI reconstruction aims at finding a high-resolution image given a small set of low-resolution images. It is usually modeled as an inverse problem where the regularization term plays a central role in the reconstruction quality. The literature has considered several regularization terms, such as Dirichlet/Laplacian energies, Total Variation (TV)-based energies and, more recently, non-local means. Although TV energies are quite attractive because of their ability to preserve edges, only standard explicit steepest-gradient techniques have been applied to optimize fetal-based TV energies. The main contribution of this work lies in the introduction of a well-posed TV algorithm from the point of view of convex optimization. Specifically, our proposed TV optimization algorithm for fetal reconstruction is optimal w.r.t. the asymptotic and iterative convergence speeds, O(1/n²) and O(1/√ε), while existing techniques are in O(1/n) and O(1/ε). We apply our algorithm to (1) clinical newborn data, considered as ground truth, and (2) clinical fetal acquisitions. Our algorithm compares favorably with the literature in terms of speed and accuracy.
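The O(1/n²) rate quoted above is characteristic of accelerated first-order methods of the Nesterov/FISTA type, versus O(1/n) for plain gradient descent. Below is a generic sketch of such a scheme for a composite convex objective f + g; it illustrates the acceleration mechanism, not the paper's exact algorithm.

```python
import numpy as np

def fista(grad_f, prox_g, x0, L, n_iter=100):
    """Generic FISTA/Nesterov scheme: O(1/n^2) convergence of the
    objective for convex f with L-Lipschitz gradient plus convex g
    with an easy proximal operator (plain gradient descent is O(1/n)).

    grad_f : callable returning the gradient of the smooth term
    prox_g : callable prox_g(v, step) for the non-smooth term
    """
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(n_iter):
        x_new = prox_g(y - grad_f(y) / L, 1.0 / L)        # forward-backward step
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)     # momentum extrapolation
        x, t = x_new, t_new
    return x
```

For TV reconstruction, grad_f would be the gradient of the data-fidelity term and prox_g the proximal operator of the TV energy (itself computed by an inner solver); with g = 0 and prox_g = lambda v, s: v, the scheme reduces to Nesterov-accelerated gradient descent.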
Abstract:
Background: The combined serum creatinine (SCreat) and cystatin C (CysC) CKD-EPI formula constitutes a new advance for glomerular filtration rate (GFR) estimation in adults. Using inulin clearances (iGFRs), the revised SCreat formula and the combined Schwartz formula, this study aims to evaluate the applicability of the combined CKD-EPI formula in children. Method: 201 iGFRs for 201 children were analyzed and divided by chronic kidney disease (CKD) stage (iGFRs ≥90 ml/min/1.73 m², 90 > iGFRs > 60, and iGFRs ≤59) and by age group (<10, 10-15, and >15 years). Medians with 95% confidence intervals of bias, precision, and accuracy within 30% of the iGFRs, for all three formulas, were compared using the Wilcoxon signed-rank test. Results: For the entire cohort and for all CKD and age groups, medians of bias for the CKD-EPI formula were significantly higher (p < 0.001), and precision significantly lower, than for the SCreat-only and the combined SCreat-CysC Schwartz formulas. We also found that, using the CKD-EPI formula, bias decreased and accuracy increased as the age group increased, with better formula performance above 15 years of age. However, the CKD-EPI formula's accuracy is 58%, compared with 93 and 92% for the SCreat-only and combined Schwartz formulas, in this adolescent group. Conclusions: The performance of the combined CKD-EPI formula improves in adolescence compared with younger ages. Nevertheless, the CKD-EPI formula performs more poorly than the SCreat-only and the combined Schwartz formulas in the pediatric population. © 2013 S. Karger AG, Basel.
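Bias, precision, and accuracy within 30% are the standard agreement statistics for GFR-estimating formulas. A minimal sketch of the point statistics follows; the paper additionally reports 95% confidence intervals and Wilcoxon tests, and the use of the interquartile range as the precision measure here is an illustrative choice.

```python
import numpy as np

def gfr_agreement(igfr, egfr):
    """Agreement statistics for a GFR formula against measured (inulin) GFR:
    bias = median error, precision = spread of errors (IQR, one common
    choice), accuracy = percentage of estimates within 30% of iGFR."""
    igfr, egfr = np.asarray(igfr, float), np.asarray(egfr, float)
    err = egfr - igfr
    bias = np.median(err)
    precision = np.percentile(err, 75) - np.percentile(err, 25)
    p30 = np.mean(np.abs(err) <= 0.30 * igfr) * 100.0
    return bias, precision, p30
```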
Abstract:
Risk theory has been a very active research area over the last decades. The main objectives of the theory are to find adequate stochastic processes which can model the surplus of a (non-life) insurance company and to analyze risk-related quantities such as the ruin time, the ruin probability, the expected discounted penalty function and expected discounted dividend/tax payments. The study of these ruin-related quantities provides crucial information for actuaries and decision makers. This thesis consists of the study of four different insurance risk models which are essentially related. The ruin and related quantities are investigated using different techniques, resulting in explicit or asymptotic expressions for the ruin time, the ruin probability, the expected discounted penalty function and the expected discounted tax payments.
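As a concrete instance of the kind of explicit ruin-probability formula such analyses yield, the classical Cramér-Lundberg model with exponential claim sizes admits a closed form. A minimal sketch follows; this textbook special case illustrates the quantities named above and is not necessarily one of the four models studied in the thesis.

```python
import numpy as np

def ruin_probability(u, lam, mu, c):
    """Classical Cramer-Lundberg model with Poisson(lam) claim arrivals,
    Exp(1/mu) claim sizes (mean mu) and premium rate c:

        psi(u) = exp(-theta * u / ((1 + theta) * mu)) / (1 + theta),

    where theta = c / (lam * mu) - 1 is the relative safety loading.
    """
    theta = c / (lam * mu) - 1.0
    if theta <= 0:
        return 1.0   # without positive loading, ruin is certain
    return np.exp(-theta * u / ((1.0 + theta) * mu)) / (1.0 + theta)
```

The formula makes the role of the safety loading explicit: the ruin probability decays exponentially in the initial surplus u, at a rate governed by theta.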
Abstract:
Determination of fat-free mass (FFM) and fat mass (FM) is of considerable interest in the evaluation of nutritional status. In recent years, bioelectrical impedance analysis (BIA) has emerged as a simple, reproducible method used for the evaluation of FFM and FM, but the lack of reference values reduces its utility to evaluate nutritional status. The aim of this study was to determine reference values for FFM, FM, and %FM by BIA in a white population of healthy subjects, to observe the changes in these values with age, and to develop percentile distributions for these parameters. Whole-body resistance of 1838 healthy white men and 1555 women, aged 15-64 y, was determined by using four skin electrodes on the right hand and foot. FFM and FM were calculated according to formulas validated for the subject groups and analyzed for age decades. This is the first study to present BIA-determined age- and sex-specific percentiles for FFM, FM, and %FM for healthy subjects, aged 15-64 y. Mean FM and %FM increased progressively in men and after age 45 y in women. The results suggest that any weight gain noted with age is due to a gain in FM. In conclusion, the data presented as percentiles can serve as reference to evaluate the normality of body composition of healthy and ill subject groups at a given age.
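The reference values in this study are presented as age- and sex-specific percentiles. A minimal sketch of how such a percentile table can be built from a healthy reference sample; the decade binning and percentile levels are illustrative assumptions.

```python
import numpy as np

def percentile_table(age, value, percentiles=(5, 25, 50, 75, 95)):
    """Age-decade percentile reference values (e.g., for FFM, FM or %FM)
    computed from a reference sample of healthy subjects."""
    age, value = np.asarray(age), np.asarray(value)
    decades = (age // 10) * 10
    table = {}
    for d in np.unique(decades):
        vals = value[decades == d]
        table[f"{d}-{d + 9} y"] = np.percentile(vals, percentiles)
    return table
```

A measured subject's FFM or FM can then be located within the percentile band for his or her age decade to judge whether body composition is within the reference range.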
Abstract:
Nutrition assessment is important during chronic respiratory insufficiency to evaluate the level of malnutrition or obesity, and should include body composition measurements. The appreciation of fat-free and fat reserves in patients with chronic respiratory insufficiency can aid in designing adapted nutritional support, e.g., nutritional support in malnutrition and food restriction in obesity. The purpose of the present study was to cross-validate the fat-free and fat mass obtained by various bioelectrical impedance analysis (BIA) formulas against the fat-free and fat mass measured by dual-energy X-ray absorptiometry (DXA), and to determine the formulas best suited to predict fat-free and fat mass for a group of patients with severe chronic respiratory insufficiency. Seventy-five patients (15 women and 60 men) with chronic obstructive and restrictive respiratory insufficiency, aged 45-86 y, were included in this study. Body composition was calculated according to 13 different BIA formulas for women and 12 for men and compared with DXA. Because the variability, calculated as 2 standard deviations, was ±5.0 kg of fat-free mass for women and ±6.4 kg for men even for the best predictive formula, the use of the various existing BIA formulas was considered not clinically relevant. Therefore, disease-specific formulas for patients with chronic respiratory insufficiency should be developed to improve the prediction of fat-free and fat mass by BIA in these patients.
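The ±2 SD variability quoted above corresponds to the half-width of Bland-Altman-style limits of agreement between a BIA formula and DXA. A minimal sketch; the abstract does not specify the exact computation beyond "2 standard deviations", so the framing below is an assumption.

```python
import numpy as np

def limits_of_agreement(ffm_bia, ffm_dxa):
    """Bland-Altman-style agreement between a BIA formula and DXA:
    mean bias and the +/- 2 SD variability band (e.g., +/- 5.0 kg FFM
    for women with the best formula in this study)."""
    diff = np.asarray(ffm_bia, float) - np.asarray(ffm_dxa, float)
    bias = diff.mean()
    two_sd = 2.0 * diff.std(ddof=1)
    return bias, (bias - two_sd, bias + two_sd)
```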
Abstract:
By the work of H. Cartan, it is well known that one can find elements of arbitrarily high torsion in the integral (co)homology groups of an Eilenberg-MacLane space K(G,n), where G is a non-trivial abelian group and n>1. The main goal of this work is to extend this result to H-spaces having more than one non-trivial homotopy group. In order to have an accurate hold on H. Cartan's result, we start by studying the duality between homology and cohomology of 2-local Eilenberg-MacLane spaces of finite type. This leads us to some improvements of H. Cartan's methods in this particular case. Our main result can be stated as follows. Let X be an H-space with two non-vanishing finite 2-torsion homotopy groups. Then X does not admit any exponent for its reduced integral graded (co)homology group. We construct a wide class of examples for which this result is a simple consequence of a topological feature, namely the existence of a weak retract X K(G,n) for some abelian group G and n>1. We also generalize our main result to more complicated stable two-stage Postnikov systems, using the Eilenberg-Moore spectral sequence and analytic methods involving Betti numbers and their asymptotic behaviour. Finally, we investigate the non-existence of homology exponents for finite Postnikov towers and conjecture that Postnikov pieces do not admit any (co)homology exponent. This work also includes the presentation of the "Eilenberg-MacLane machine", a C++ program designed to compute explicitly all integral homology groups of Eilenberg-MacLane spaces.
It is always difficult for a mathematician to talk about his work, because the objects he studies are abstract: one rarely meets a vector space, an abelian category or a Laplace transform on a street corner! Yet even if mathematical objects are hard for a non-mathematician to grasp, the methods used to study them are essentially the same as in the other sciences. Complex objects are broken down into simpler components; the properties of mathematical objects are listed, and the objects are sorted into families sharing a common feature; different but equivalent formulations of a problem are sought; and so on. My work belongs to the mathematical field of algebraic topology. The ultimate goal of this discipline is to classify all topological spaces by means of algebra. The activity is comparable to that of an ornithologist (the topologist) studying birds (topological spaces) through binoculars (algebra). If he sees a small, tree-dwelling, singing, nest-building bird whose feet have four toes, three pointing forward and one, bearing a strong claw, pointing backward, he will conclude with certainty that it is a passerine; it then remains to determine whether it is a sparrow, a blackbird or a nightingale. Consider a few examples of topological spaces: (a) a hollow cube, (b) a sphere, and (c) a hollow torus (i.e., an inner tube). Where anyone else sees three different figures, the topologist sees only two! From his point of view, the cube and the sphere are not different, since they are homeomorphic: one can be transformed into the other continuously (inflating the cube would yield the sphere). The sphere and the torus, however, are not homeomorphic: knead the sphere any way you like (without tearing it) and you will never obtain the torus. There are infinitely many topological spaces and, contrary to what one might naively believe, deciding whether two of them are homeomorphic is in general very difficult. To attack this problem, topologists had the idea of bringing algebra into their reasoning; this was the birth of homotopy theory. Following a very specific recipe, one associates with every topological space an infinite collection of what algebraists call groups; the groups obtained in this way are called the homotopy groups of the space. Mathematicians first showed that two homeomorphic topological spaces (for example the cube and the sphere) have the same homotopy groups; one then speaks of invariants (the homotopy groups are indeed invariant under homeomorphism). Consequently, two topological spaces that do not have the same homotopy groups can never be homeomorphic. This is an excellent way of classifying topological spaces (think of the ornithologist examining the birds' feet to decide whether he is dealing with a passerine). My work concerns topological spaces that have only a finite number of non-trivial homotopy groups. Such spaces are called finite Postnikov towers. One studies their integral cohomology groups, another family of invariants alongside the homotopy groups.
The size of a cohomology group is measured, in a certain sense, by the notion of an exponent: a cohomology group admitting an exponent is relatively small. One of the main results of this work is a study of the size of the cohomology groups of finite Postnikov towers, namely the following theorem: a 1-connected, 2-local H-space of finite type that has only one or two non-trivial homotopy groups admits no exponent for its reduced integral graded cohomology group. Interpreted qualitatively, this result says that the smaller a space is from the point of view of cohomology (i.e., if it has a cohomology exponent), the more interesting it is from the point of view of homotopy (i.e., it will have more than two non-trivial homotopy groups). It emerges from my work that such spaces are very interesting in the sense that they can have infinitely many non-trivial homotopy groups. Jean-Pierre Serre, Fields medalist in 1954, showed that every sphere of dimension >1 has infinitely many non-trivial homotopy groups. From spaces with a cohomology exponent to spheres, there is but one step to take...
Abstract:
The paper is motivated by the valuation problem of guaranteed minimum death benefits in various equity-linked products. At the time of death, a benefit payment is due. It may depend not only on the price of a stock or stock fund at that time, but also on prior prices. The problem is to calculate the expected discounted value of the benefit payment. Because the distribution of the time of death can be approximated by a combination of exponential distributions, it suffices to solve the problem for an exponentially distributed time of death. The stock price process is assumed to be the exponential of a Brownian motion plus an independent compound Poisson process whose upward and downward jumps are modeled by combinations (or mixtures) of exponential distributions. Results for exponential stopping of a Lévy process are used to derive a series of closed-form formulas for call, put, lookback, and barrier options, dynamic fund protection, and dynamic withdrawal benefit with guarantee. We also discuss how barrier options can be used to model lapses and surrenders.
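In the simplest special case, with the stock following a geometric Brownian motion (no jumps) and an exponentially distributed time of death independent of the stock, the expected discounted call-type benefit can be checked by Monte Carlo. A minimal sketch follows; the paper itself works with a jump-diffusion model and derives closed-form formulas, and all parameter names here are illustrative.

```python
import numpy as np

def gmdb_call_mc(s0, k, mu, sigma, lam, delta, n_paths=100_000, seed=0):
    """Monte Carlo value of E[exp(-delta*T) * (S_T - K)^+] where
    T ~ Exp(lam) is the time of death, independent of the stock
    S_t = s0 * exp((mu - sigma^2/2) t + sigma W_t)."""
    rng = np.random.default_rng(seed)
    t = rng.exponential(1.0 / lam, n_paths)               # time of death
    z = rng.standard_normal(n_paths)
    st = s0 * np.exp((mu - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)
    payoff = np.maximum(st - k, 0.0)                      # call-type death benefit
    return np.mean(np.exp(-delta * t) * payoff)
```

Because exponential stopping makes such expectations analytically tractable, simulations of this kind serve mainly as sanity checks on the closed-form results.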
Abstract:
We consider robust parametric procedures for univariate discrete distributions, focusing on the negative binomial model. The procedures are based on three steps. First, a very robust, but possibly inefficient, estimate of the model parameters is computed. Second, this initial model is used to identify outliers, which are then removed from the sample. Third, a corrected maximum likelihood estimator is computed with the remaining observations. The final estimate inherits the breakdown point (bdp) of the initial one, and its efficiency can be significantly higher. Analogous procedures were proposed in [1], [2], [5] for the continuous case. A comparison of the asymptotic bias of various estimates under point contamination singles out the minimum Neyman's chi-squared disparity estimate as a good choice for the initial step. Various minimum disparity estimators were explored by Lindsay [4], who showed that the minimum Neyman's chi-squared estimate has a 50% bdp under point contamination; in addition, it is asymptotically fully efficient at the model. However, the finite-sample efficiency of this estimate under the uncontaminated negative binomial model is usually much lower than 100%, and the bias can be strong. We show that its performance can then be greatly improved using the three-step procedure outlined above. In addition, we compare the final estimate with the procedure described in
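A minimal sketch of the three-step scheme for the negative binomial follows. The outlier rule and cut-off below are illustrative placeholders (the paper's rule is based on the initial model), and Neyman's chi-squared disparity, sum of (d_k - p_k)^2 / d_k with d_k the empirical and p_k the model probabilities, is computed here only over the observed support.

```python
import numpy as np
from scipy.stats import nbinom
from scipy.optimize import minimize

def three_step_nb(x, cutoff=1e-3):
    """Three-step robust fit of a negative binomial NB(r, p) (sketch).

    Step 1: minimum Neyman's chi-squared disparity estimate.
    Step 2: drop observations implausible under the step-1 model
            (illustrative rule: model pmf below `cutoff`).
    Step 3: maximum likelihood on the retained observations.
    """
    x = np.asarray(x)
    vals, counts = np.unique(x, return_counts=True)
    d = counts / len(x)                      # empirical probabilities

    def neyman(theta):
        r, p = theta
        if r <= 0 or not 0 < p < 1:
            return np.inf
        pk = nbinom.pmf(vals, r, p)
        return np.sum((d - pk) ** 2 / d)     # Neyman's chi-squared disparity

    step1 = minimize(neyman, [1.0, 0.5], method="Nelder-Mead")
    r1, p1 = step1.x

    keep = nbinom.pmf(x, r1, p1) > cutoff    # crude outlier screen (assumption)
    xk = x[keep]

    def negloglik(theta):
        r, p = theta
        if r <= 0 or not 0 < p < 1:
            return np.inf
        return -np.sum(nbinom.logpmf(xk, r, p))

    step3 = minimize(negloglik, step1.x, method="Nelder-Mead")
    return step3.x

# e.g. three_step_nb(np.random.default_rng(0).negative_binomial(2, 0.3, 200))
```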
Abstract:
BACKGROUND & AIMS: The standard liver volume (SLV) is widely used in liver surgery, especially for living donor liver transplantation (LDLT). All the reported formulas for SLV use body surface area or body weight, which can be influenced strongly by the general condition of the patient. METHODS: We analyzed the liver volumes of 180 Japanese donor candidates and 160 Swiss patients with normal livers to develop a new formula. The dataset was randomly divided into two subsets, the test and validation sample, stratified by race. The new formula was validated using 50 LDLT recipients. RESULTS: Without using body weight-related variables, age, thoracic width measured using computed tomography, and race independently predicted the total liver volume (TLV). A new formula: 203.3-(3.61×age)+(58.7×thoracic width)-(463.7×race [1=Asian, 0=Caucasian]), most accurately predicted the TLV in the validation dataset as compared with any other formulas. The graft volume for LDLT was correlated with the postoperative prothrombin time, and the graft volume/SLV ratio calculated using the new formula was significantly better correlated with the postoperative prothrombin time than the graft volume/SLV ratio calculated using the other formulas or the graft volume/body weight ratio. CONCLUSIONS: The new formula derived using the age, thoracic width and race predicted both the TLV in the healthy patient group and the SLV in LDLT recipients more accurately than any other previously reported formulas.