948 results for Multiple delay estimation
Abstract:
Objective: Early treatment in sepsis may improve outcome. The aim of this study was to evaluate how the delay in starting resuscitation influences the severity of sepsis and the treatment needed to achieve hemodynamic stability. Design: Prospective, randomized, controlled experimental study. Setting: Experimental laboratory in a university hospital. Subjects: Thirty-two anesthetized and mechanically ventilated pigs. Interventions: Pigs were randomly assigned (n = 8 per group) to a nonseptic control group or one of three groups in which fecal peritonitis (peritoneal instillation of 2 g/kg autologous feces) was induced, and a 48-hr period of protocolized resuscitation started 6 (ΔT-6 hrs), 12 (ΔT-12 hrs), or 24 (ΔT-24 hrs) hrs later. The impact of delays in resuscitation on disease severity, need for resuscitation, and the development of sepsis-associated organ and mitochondrial dysfunction was evaluated. Measurements and Main Results: Any delay in starting resuscitation was associated with progressive signs of hypovolemia and increased plasma levels of interleukin-6 and tumor necrosis factor-alpha prior to resuscitation. Delaying resuscitation increased cumulative net fluid balances (2.1 ± 0.5 mL/kg/hr, 2.8 ± 0.7 mL/kg/hr, and 3.2 ± 1.5 mL/kg/hr, respectively, for groups ΔT-6 hrs, ΔT-12 hrs, and ΔT-24 hrs; p < .01) and norepinephrine requirements during the 48-hr resuscitation protocol (0.02 ± 0.04 μg/kg/min, 0.06 ± 0.09 μg/kg/min, and 0.13 ± 0.15 μg/kg/min; p = .059), decreased maximal brain mitochondrial complex II respiration (p = .048), and tended to increase mortality (p = .08). Muscle tissue adenosine triphosphate decreased in all groups (p < .01), with the lowest values at the end in groups ΔT-12 hrs and ΔT-24 hrs. Conclusions: Increasing the delay between sepsis initiation and resuscitation increases disease severity, need for resuscitation, and sepsis-associated brain mitochondrial dysfunction. Our results support the concept of a critical window of opportunity in sepsis resuscitation. (Crit Care Med 2012; 40:2841-2849)
Abstract:
In this paper we present a variational technique for the reconstruction of 3D cylindrical surfaces. Roughly speaking, by a cylindrical surface we mean a surface that can be parameterized through its projection onto a cylinder, in terms of two coordinates representing the displacement and angle in a cylindrical coordinate system, respectively. The starting point for our method is a set of different views of a cylindrical surface, together with a precomputed disparity map estimation between pairs of images. The proposed variational technique is based on an energy minimization which balances, on the one hand, the regularity of the cylindrical function given by the distance of the surface points to the cylinder axis and, on the other hand, the distance between the projection of the surface points onto the images and the location expected from the precomputed disparity maps. One interesting advantage of this approach is that we regularize the 3D surface by means of a two-dimensional minimization problem. We show some experimental results for large stereo sequences.
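A minimal sketch of the kind of discretized energy such a method minimizes, assuming the surface is stored as a cylindrical depth map r(θ, z); the names and the simple finite-difference regularizer are illustrative, not the paper's exact formulation:

```python
import numpy as np

def cylindrical_energy(r, data_residual, lam=0.1):
    """Toy discretized energy for a cylindrical surface r(theta, z).

    r             : 2D array, distance of surface points to the cylinder axis
    data_residual : 2D array, per-point reprojection error against the
                    precomputed disparity maps (assumed given here)
    lam           : weight balancing regularity against data fidelity
    """
    # Regularity term: finite differences, periodic in the angular direction.
    d_theta = np.roll(r, -1, axis=0) - r
    d_z = np.diff(r, axis=1)
    smoothness = np.sum(d_theta**2) + np.sum(d_z**2)

    # Data term: distance between projected surface points and the
    # locations predicted by the disparity maps.
    fidelity = np.sum(data_residual**2)

    return lam * smoothness + fidelity
```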
Abstract:
Indoor position estimation has become an attractive research topic due to growing interest in location-aware services. Nevertheless, no satisfying solution has been found that considers both accuracy and system complexity. From the perspective of lightweight mobile devices, these are extremely important characteristics, because both processor power and energy availability are limited. Hence, an indoor localization system with high computational complexity can cause complete battery drain within a few hours. In our research, we use a data mining technique named boosting to develop a localization system based on multiple weighted decision trees to predict the device location, since it offers high accuracy and low computational complexity.
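As a rough sketch of the idea (not the authors' exact system), a boosted ensemble of shallow decision trees can be trained on received-signal-strength fingerprints; the feature layout, labels, and scikit-learn usage below are assumptions:

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

# Hypothetical training data: rows are Wi-Fi RSSI fingerprints from 8
# access points (dBm); labels are discrete location cells.
rng = np.random.default_rng(0)
X = rng.uniform(-90, -30, size=(500, 8))
y = rng.integers(0, 10, size=500)          # 10 location cells (toy labels)

# Boosting combines many weak, shallow trees into a weighted vote:
# cheap to evaluate on a mobile device, yet accurate in aggregate.
model = AdaBoostClassifier(
    estimator=DecisionTreeClassifier(max_depth=3),
    n_estimators=50,
)
model.fit(X, y)
print(model.predict(X[:5]))                # predicted location cells
```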
Abstract:
In this work we propose a new approach for preliminary epidemiological studies on Standardized Mortality Ratios (SMR) collected in many spatial regions. A preliminary study on SMRs aims to formulate hypotheses to be investigated via individual epidemiological studies, which avoid the bias carried by aggregated analyses. Starting from collected disease counts and expected disease counts calculated from reference population disease rates, in each area an SMR is derived as the MLE under the Poisson assumption on each observation. Such estimators have high standard errors in small areas, i.e. where the expected count is low either because of the small population underlying the area or because of the rarity of the disease under study. Disease mapping models and other techniques for screening disease rates across the map, aiming to detect anomalies and possible high-risk areas, have been proposed in the literature under both the classic and the Bayesian paradigm. Our proposal approaches this issue with a decision-oriented method focused on multiple testing control, without however leaving the preliminary-study perspective that an analysis of SMR indicators is asked to fulfil. We implement control of the FDR, a quantity largely used to address multiple comparison problems in the field of microarray data analysis but not usually employed in disease mapping. Controlling the FDR means providing an estimate of the FDR for a set of rejected null hypotheses. The small-areas issue raises difficulties in applying traditional methods for FDR estimation, which are usually based only on knowledge of the p-values (Benjamini and Hochberg, 1995; Storey, 2003). Tests evaluated by a traditional p-value provide weak power in small areas, where the expected number of disease cases is small. Moreover, tests cannot be assumed independent when spatial correlation between SMRs is expected, nor are they identically distributed when the population underlying the map is heterogeneous. The Bayesian paradigm offers a way to overcome the inappropriateness of p-value-based methods. Another peculiarity of the present work is to propose a hierarchical full Bayesian model for FDR estimation in testing many null hypotheses of absence of risk. We use concepts from Bayesian models for disease mapping, referring in particular to the Besag, York and Mollié model (1991), often used in practice for its flexible prior assumption on the distribution of risks across regions. The borrowing of strength between prior and likelihood typical of a hierarchical Bayesian model has the advantage of evaluating a single test (i.e. a test in a single area) by means of all observations in the map under study, rather than just the single observation. This improves test power in small areas and addresses more appropriately the spatial correlation issue, which suggests that relative risks are closer in spatially contiguous regions. The proposed model estimates the FDR by means of the MCMC-estimated posterior probabilities b_i of the null hypothesis (absence of risk) for each area. An estimate of the expected FDR conditional on the data (FDR-hat) can be calculated for any set of b_i's relative to areas declared at high risk (where the null hypothesis is rejected) by averaging the b_i's themselves. The FDR-hat can be used to provide an easy decision rule for selecting high-risk areas, i.e. selecting as many areas as possible such that the FDR-hat does not exceed a prefixed value; we call these FDR-hat based decision (or selection) rules.
The sensitivity and specificity of such a rule depend on the accuracy of the FDR estimate: over-estimation of the FDR causes a loss of power, while under-estimation produces a loss of specificity. Moreover, our model retains the interesting feature of providing an estimate of the relative risk values, as in the Besag, York and Mollié model (1991). A simulation study was set up to evaluate the model's performance in terms of FDR estimation accuracy, sensitivity and specificity of the decision rule, and goodness of estimation of the relative risks. We chose a real map from which we generated several spatial scenarios whose disease counts vary according to the degree of spatial correlation, the area sizes, the number of areas where the null hypothesis is true, and the risk level in the latter areas. In summarizing the simulation results we always consider FDR estimation in sets constituted by all areas whose b_i lies below a threshold t. We show graphs of the FDR-hat and the true FDR (known by simulation) plotted against the threshold t to assess FDR estimation. Varying the threshold, we can learn which FDR values can be accurately estimated by a practitioner willing to apply the model (by the closeness between FDR-hat and the true FDR). By plotting the calculated sensitivity and specificity (both known by simulation) against the FDR-hat we can check the sensitivity and specificity of the corresponding FDR-hat based decision rules. To investigate the over-smoothing of the relative risk estimates we compare box-plots of such estimates in high-risk areas (known by simulation), obtained by both our model and the classic Besag, York and Mollié model. All the summary tools are worked out for all simulated scenarios (54 in total). Results show that the FDR is well estimated (in the worst case we get an over-estimation, hence conservative FDR control) in scenarios with small areas, low risk levels and spatially correlated risks, which are our primary aims. In such scenarios we obtain good estimates of the FDR for all values less than or equal to 0.10. The sensitivity of FDR-hat based decision rules is generally low, but specificity is high; here the use of an FDR-hat = 0.05 or FDR-hat = 0.10 based selection rule can be suggested. In cases where the number of true alternative hypotheses (true high-risk areas) is small, FDR values up to 0.15 are also well estimated, and an FDR-hat = 0.15 based decision rule gains power while maintaining high specificity. On the other hand, in scenarios with non-small areas and non-small risk levels the FDR is under-estimated except for very small values (much lower than 0.05), resulting in a loss of specificity of an FDR-hat = 0.05 based decision rule. In such scenarios FDR-hat = 0.05 or, even worse, FDR-hat = 0.10 based decision rules cannot be suggested, because the true FDR is actually much higher. As regards relative risk estimation, our model achieves almost the same results as the classic Besag, York and Mollié model. For this reason, our model is interesting for its ability to perform both relative risk estimation and FDR control, except in scenarios with non-small areas and large risk levels. A case study is finally presented to show how the method can be used in epidemiology.
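The decision rule described above is straightforward to express in code. A minimal sketch, assuming the MCMC has already produced the posterior null probabilities b_i (variable names are illustrative):

```python
import numpy as np

def fdr_hat(b, selected):
    """Estimated FDR of a selection: the average posterior null
    probability b_i over the areas declared at high risk."""
    return b[selected].mean()

def select_high_risk(b, target=0.05):
    """Select as many areas as possible while keeping FDR-hat <= target.

    b : array of posterior probabilities of the null (absence of risk),
        one per area, e.g. estimated by MCMC.
    Returns the indices of the areas declared at high risk.
    """
    order = np.argsort(b)                              # most plausible first
    running = np.cumsum(b[order]) / np.arange(1, b.size + 1)
    k = int(np.sum(running <= target))                 # largest admissible set
    return order[:k]

# Toy usage with made-up posterior probabilities:
b = np.array([0.01, 0.02, 0.40, 0.03, 0.90, 0.08])
sel = select_high_risk(b, target=0.05)
print(sel, fdr_hat(b, sel))
```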
Abstract:
Images of a scene, static or dynamic, are generally acquired at different epochs from different viewpoints. They potentially gather information about the whole scene and its motion relative to the acquisition device. Data from different visual sources (in the spatial or temporal domain) can be fused together to provide a unique, consistent representation of the whole scene, even recovering the third dimension, permitting a more complete understanding of the scene content. Moreover, the pose of the acquisition device can be obtained by estimating the relative motion parameters linking different views, thus providing localization information for automatic guidance purposes. Image registration is based on the use of pattern recognition techniques to match corresponding parts of different views of the acquired scene. Depending on hypotheses or prior information about the sensor model, the motion model and/or the scene model, this information can be used to estimate global or local geometrical mapping functions between different images or different parts of them. These mapping functions contain the relative motion parameters between the scene and the sensor(s) and can be used to integrate the information coming from the different sources accordingly, to build a wider or even augmented representation of the scene. For their scene reconstruction and pose estimation capabilities, image registration techniques from multiple views are nowadays increasingly stirring up the interest of the scientific and industrial community. Depending on the application domain, accuracy, robustness, and computational load of the algorithms are important issues to be addressed, and generally a trade-off among them has to be reached. Moreover, on-line performance is desirable in order to guarantee the direct interaction of the vision device with human actors or control systems. This thesis follows a general research approach to cope with these issues, almost independently of the scene content, under the constraint of rigid motions. This approach has been motivated by portability to very different domains, a very desirable property to achieve. A general image registration approach suitable for on-line applications has been devised and assessed through two challenging case studies in different application domains. The first case study regards scene reconstruction through on-line mosaicing of optical microscopy cell images acquired with non-automated equipment, while manually moving the microscope holder. By registering the images, the field of view of the microscope can be widened, preserving the resolution while reconstructing the whole cell culture and permitting the microscopist to explore it interactively. In the second case study, the registration of terrestrial satellite images acquired by a camera integral with the satellite is used to estimate its three-dimensional orientation from visual data, for automatic guidance purposes. Critical aspects of these applications are emphasized and the choices adopted are motivated accordingly. Results are discussed in view of promising future developments.
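Under the rigid-motion constraint adopted in the thesis, the core estimation step can be illustrated by a least-squares fit of a rotation and translation to matched point pairs; the Kabsch-style sketch below is a generic building block, not the thesis's specific pipeline:

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rigid motion (R, t) aligning point set P onto Q.

    P, Q : (n, d) arrays of matched points from two views.
    Returns rotation R and translation t with Q ~ P @ R.T + t.
    """
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                        # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.eye(P.shape[1])
    D[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))   # avoid reflections
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t
```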
Abstract:
Introduction. Neutrophil Gelatinase-Associated Lipocalin (NGAL) belongs to the family of lipocalins and is produced by several cell types, including the renal tubular epithelium. In the kidney its production increases during acute damage, and this is reflected by the increase in serum and urine levels. In animal studies and clinical trials, NGAL was found to be a sensitive and specific indicator of acute kidney injury (AKI). Purpose. The aim of this work was to investigate, in a prospective manner, whether urine NGAL can be used as a marker in preeclampsia, kidney transplantation, VLBW infants and diabetic nephropathy. Materials and methods. The study involved 44 consecutive patients who received a renal transplant; 18 women affected by preeclampsia (PE); a total of 55 infants weighing ≤1500 g; and 80 patients with Type 1 diabetes. Results. A positive correlation was found between urinary NGAL and 24-hour proteinuria within the PE group. The detection of higher uNGAL values in cases of severe PE, even in the absence of statistical significance, confirms that these women suffer from initial renal damage. In our population of VLBW infants, we found a positive correlation of uNGAL values at birth with the differences in sCreat and eGFR values from birth to day 21, but no correlation was found between uNGAL values at birth and sCreat and eGFR at day 7. Systolic and diastolic blood pressure decreased with increasing levels of uNGAL. Patients with uNGAL <25 ng/ml had significantly higher systolic blood pressure than patients with uNGAL >50 ng/ml (p<0.005). Our results indicate the ability of NGAL to predict delayed functional recovery of the graft. Conclusions. In acute renal pathology, urinary NGAL is confirmed to be a valuable predictive marker of the progress and status of acute injury.
Abstract:
In this thesis, we investigated the evaporation of sessile microdroplets on different solid substrates. Three major aspects were studied: the influence of surface hydrophilicity and heterogeneity on the evaporation dynamics on an insoluble solid substrate; the influence of external process parameters and intrinsic material properties on the microstructuring of soluble polymer substrates; and the influence of an increased area-to-volume ratio in a microfluidic capillary, when evaporation is hindered.

In the first part, the evaporation dynamics of pure sessile water drops on smooth self-assembled monolayers (SAMs) of thiols or disulfides on gold on mica was studied. With increasing surface hydrophilicity the drop stayed pinned longer. Thus, the total evaporation time of a given initial drop volume was shorter, since the drop surface, through which evaporation occurs, stayed large for longer. Usually, for a single drop, the volume decreased linearly with t^1.5, t being the evaporation time, as expected for a diffusion-controlled evaporation process. However, when we measured the total evaporation time, ttot, for multiple droplets with different initial volumes, V0, we found a scaling of the form V0 = a·ttot^b. The more hydrophilic the substrate was, the more the scaling exponent tended towards increased values, up to 1.6. This can be attributed to an increasing evaporation rate through a thin water layer in the vicinity of the drop. Under the assumption of a constant temperature at the substrate surface, cooling of the droplet, and thus a decreased evaporation rate, could be excluded as a reason for the different scaling exponent by simulations performed by F. Schönfeld at the IMM, Mainz. In contrast, for a hairy surface, made of dialkyldisulfide SAMs with different chain lengths and a 1:1 mixture of hydrophilic and hydrophobic end groups (hydroxy versus methyl), the scaling exponent was found to be ~1.4. It increased to ~1.5 with increasing hydrophilicity. One can only speculate about the reason for this observation: in the case of longer hydrophobic alkyl chains the formation of an air layer between substrate and drop might be favorable. Thus, the heat transport to the substrate might be reduced, leading to stronger cooling and thus a decreased evaporation rate.

In the second part, the microstructuring of polystyrene surfaces by drops of toluene, a good solvent, was investigated. For this, a novel deposition technique was developed, in which the drop is deposited with a syringe. The polymer substrate lies on a motorized table, which picks up the pendant drop by an upward motion until a liquid bridge is formed. A consecutive downward motion of the table after a variable delay, i.e. the contact time between drop and polymer, leads to the deposition of the droplet, which can then evaporate. The resulting microstructure was investigated as a function of the process parameters, i.e. the approach and retraction speeds of the substrate and the delay between them, and of the intrinsic material properties, i.e. the molar mass and the type of the polymer/solvent system. The principal equivalence with microstructuring by the ink-jet technique was demonstrated. For high approach and retraction speeds of 9 mm/s and no delay between them, a concave microtopology was observed. In agreement with the literature, this can be explained by a flow of solvent and dissolved polymer to the rim of the pinned droplet, where the polymer accumulates. This effect is analogous to the well-known formation of ring-like stains after the evaporation of coffee drops (coffee-stain effect). With decreasing retraction speed, down to 10 µm/s, the resulting surface topology changes from concave to convex. This can be explained by the increasing dissolution of polymer into the solvent drop prior to evaporation. If the polymer concentration is high enough, gelation occurs instead of a flow to the rim, and the shape of the convex droplet is retained. With increasing delay time from 0 ms to 1 s, the depth of the concave microwells decreases from 4.6 µm to 3.2 µm. However, a convex surface topology could not be obtained, since for longer delay times the polymer sticks to the tip of the syringe. Thus, by changing the delay time a fine-tuning of the concave structure is accomplished, while by changing the retraction speed a principal change of the microtopology can be achieved. We attribute this to an additional flow inside the liquid bridge, which enhances polymer dissolution. Even when the pendant drop evaporates about 30 µm above the polymer surface without any contact (non-contact mode), concave structures were observed. Rim heights as high as 33 µm could be generated for exposure times of 20 min. The concave structure lay exclusively above the flat polymer surface outside the structure, even after drying. This shows that toluene is taken up permanently. The increase of the rim height, rh, with increasing exposure time to the solvent vapor obeys a diffusion law rh = rh0·t^n, with n in the range 0.46-0.65. This hints at a non-Fickian swelling process. A detailed analysis showed that the rim height of the concave structure is modulated, unlike for drop deposition. This is due to local stress relaxation, initiated by the increasing toluene concentration in the extruded polymer surface. By altering the intrinsic material parameters, i.e. the polymer molar mass and the polymer/solvent combination, several types of microstructures could be formed. With increasing molar mass from 20.9 kDa to 1.44 MDa the resulting microstructure changed from convex, to a structure with a dimple in the center, to concave, and finally to an irregular structure. This observation can be explained if one assumes that the microstructuring is dominated by two opposing effects: a decreasing solubility with increasing polymer molar mass, but an increasing surface tension gradient leading to instabilities of Marangoni type. Thus, a polymer with a low molar mass, close to or below the entanglement limit, is subject to a high dissolution rate, which leads to fast gelation compared to the evaporation rate. This way a coffee-rim-like effect is eliminated early and a convex structure results. For high molar masses, the low dissolution rate and the low polymer diffusion might lead to increased surface tension gradients, and a typical local pile-up of polymer is found. For intermediate polymer masses around 200 kDa, the dissolution and evaporation rates are comparable and the typical concave microtopology is found. This interpretation was supported by a quantitative estimation of the diffusion coefficient and the evaporation rate. For a different polymer/solvent system, polyethylmethacrylate (PEMA)/ethyl acetate (EA), exclusively concave structures were found. Following the statements above, this can be interpreted as a lower dissolution rate. At low molar masses the concentration of PEMA in EA most likely never reaches the gelation point. Thus, a concave instead of a convex structure occurs. At the end of this section, the optical properties of such microstructures for a potential application as microlenses are studied with laser scanning confocal microscopy.

In the third part, the droplet was confined in a glass microcapillary to avoid evaporation. Since here, due to an increased area-to-volume ratio, the surface properties of the liquid and the solid walls become important, the influence of the surface hydrophilicity of the wall on the interfacial tension between two immiscible liquid slugs was investigated. For this, a novel method for measuring the interfacial tension between the two liquids within the capillary was developed. The technique was demonstrated by measuring the interfacial tensions between slugs of pure water and standard solvents. For toluene, n-hexane and chloroform, 36.2, 50.9 and 34.2 mN/m were measured at 20°C, in good agreement with data from the literature. For a slug of hexane in contact with a slug of water containing ethanol in a concentration range between 0 and 70 v/v%, a difference of up to 6 mN/m was found when compared to commercial ring tensiometry. This discrepancy is still under debate.
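Both scaling relations reported above (V0 = a·ttot^b for the total evaporation time and rh = rh0·t^n for the rim growth) are power laws, whose prefactor and exponent are conveniently estimated by a linear fit in log-log space; a minimal sketch with made-up data:

```python
import numpy as np

def power_law_fit(x, y):
    """Fit y = c * x**m by least squares in log-log space.

    Returns (c, m); suitable for scaling laws such as V0 = a * ttot**b
    or rh = rh0 * t**n from the experiments above.
    """
    m, logc = np.polyfit(np.log(x), np.log(y), 1)
    return np.exp(logc), m

# Toy data: total evaporation times for several initial volumes,
# generated with exponent 1.5 plus noise (illustrative values only).
rng = np.random.default_rng(1)
ttot = np.linspace(10, 400, 20)                      # s
V0 = 0.05 * ttot**1.5 * rng.normal(1, 0.03, 20)      # toy prefactor
a, b = power_law_fit(ttot, V0)
print(f"a = {a:.3g}, b = {b:.2f}")                   # b should be near 1.5
```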
Abstract:
Thermal effects are rapidly gaining importance in nanometer heterogeneous integrated systems. Increased power density, coupled with the spatio-temporal variability of the chip workload, causes lateral and vertical temperature non-uniformities (variations) in the chip structure. The assumption of a uniform temperature for a large circuit leads to inaccurate determination of key design parameters. To improve design quality, we need precise estimation of temperature at detailed spatial resolution, which is computationally very intensive. Consequently, thermal analysis of designs needs to be done at multiple levels of granularity. To further investigate the chip/package thermal analysis flow, we exploit the Intel Single-Chip Cloud Computer (SCC) and propose a methodology for calibrating the SCC on-die temperature sensors. We also develop an infrastructure for online monitoring of the SCC temperature sensor readings and SCC power consumption. With the thermal simulation tool in hand, we propose MiMAPT, an approach for analyzing delay, power and temperature in digital integrated circuits. MiMAPT integrates seamlessly into industrial front-end and back-end chip design flows. It accounts for temperature non-uniformities and self-heating while performing analysis. Furthermore, we extend the temperature-variation-aware analysis of designs to 3D MPSoCs with Wide-I/O DRAM. We reduce the DRAM refresh power by considering the lateral and vertical temperature variations in the 3D structure and adapting the per-DRAM-bank refresh period accordingly. We develop an advanced virtual platform which models the performance, power, and thermal behavior of a 3D-integrated MPSoC with Wide-I/O DRAMs in detail. Moving towards real-world multi-core heterogeneous SoC designs, a reconfigurable heterogeneous platform (ZYNQ) is exploited to further study the performance and energy efficiency of various CPU-accelerator data sharing methods in heterogeneous hardware architectures. A complete hardware accelerator featuring clusters of OpenRISC CPUs with dynamic address remapping capability is built and verified on real hardware.
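The per-bank refresh adaptation can be illustrated with a toy policy based on the common rule of thumb that DRAM retention roughly halves per ~10°C of heating; the base values and bank temperatures below are illustrative, not the thesis's calibrated model:

```python
def bank_refresh_period_ms(temp_c, base_period_ms=64.0, base_temp_c=85.0):
    """Toy temperature-aware refresh period for one DRAM bank.

    Assumes retention time roughly halves per 10 deg C above the
    reference point (a rule of thumb, not the thesis's model).
    Cooler banks may safely use longer refresh periods.
    """
    return base_period_ms * 2.0 ** ((base_temp_c - temp_c) / 10.0)

# Example: per-bank temperatures from a 3D thermal simulation (made up).
bank_temps = [68.0, 74.0, 81.0, 89.0]
for i, t in enumerate(bank_temps):
    print(f"bank {i}: {t:.0f} C -> refresh every "
          f"{bank_refresh_period_ms(t):.0f} ms")
```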
Abstract:
A new control scheme is presented in this thesis. Based on the NonLinear Geometric Approach, the proposed Active Control System represents a new way to see reconfigurable controllers for aerospace applications. The presence of the Diagnosis module (providing the estimation of generic signals which, depending on the case, can be faults, disturbances or system parameters), the main feature of the depicted Active Control System, is a characteristic shared by three well-known control systems: Active Fault Tolerant Control, Indirect Adaptive Control and Active Disturbance Rejection Control. The standard NonLinear Geometric Approach (NLGA) has been accurately investigated and then improved to extend its applicability to more complex models. The standard NLGA procedure has been modified to take into account feasible and estimable sets of unknown signals. Furthermore, the application of the Singular Perturbations approximation has led to the solution of Detection and Isolation problems in scenarios too complex to be solved by the standard NLGA. The estimation process has also been improved, where multiple redundant measurements are available, by the introduction of a new algorithm, here called "Least Squares - Sliding Mode". It guarantees optimality in the least-squares sense and finite estimation time in the sliding-mode sense. The Active Control System concept has been formalized in two controllers: a nonlinear backstepping controller and a nonlinear composite controller. Particularly interesting is the integration, in the controller design, of the estimates coming from the Diagnosis module. Stability proofs are provided for both control schemes. Finally, several aerospace applications are presented to show the applicability and effectiveness of the proposed NLGA-based Active Control System.
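The two ingredients named by the algorithm's title can be illustrated in isolation; the toy sketch below is generic, not the thesis's actual "Least Squares - Sliding Mode" algorithm: redundant measurements are fused by least squares, and the result is tracked by a finite-time sliding-mode estimator.

```python
import numpy as np

def ls_fuse(H, y):
    """Least-squares fusion of redundant measurements y = H @ x + noise."""
    return np.linalg.lstsq(H, y, rcond=None)[0]

def sliding_mode_track(z, k=10.0, dt=1e-3):
    """First-order sliding-mode estimator: x_dot = -k * sign(x - z(t)).

    For k larger than the tracked signal's rate of change, the sign
    feedback drives the estimation error to zero in finite time."""
    x, out = 0.0, []
    for zt in z:
        x += -k * np.sign(x - zt) * dt
        out.append(x)
    return np.array(out)

# Three redundant sensors measuring the same scalar signal:
t = np.arange(0.0, 1.0, 1e-3)
signal = np.sin(2 * np.pi * t)
H = np.ones((3, 1))
fused = np.array([ls_fuse(H, signal[i] + 0.05 * np.random.randn(3))[0]
                  for i in range(t.size)])
estimate = sliding_mode_track(fused)
```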
Abstract:
OBJECTIVE: To compare the individual latency distributions of motor evoked potentials (MEP) in patients with multiple sclerosis (MS) with previously reported results in healthy subjects (Firmin et al., 2011). METHODS: We applied the previously reported method for measuring the distribution of MEP latencies in 16 patients with MS. The method is based on transcranial magnetic stimulation and consists of a combination of the triple stimulation technique with a method originally developed to measure conduction velocity distributions in peripheral nerves. RESULTS: MEP latency distributions in MS typically showed two peaks. The individual MEP latency distributions were significantly wider in patients with MS than in healthy subjects. The mean triple stimulation delay extension at the 75% quantile, a proxy for MEP latency distribution width, was 7.3 ms in healthy subjects and 10.7 ms in patients with MS. CONCLUSIONS: In patients with MS, slow portions of the central motor pathway contribute more to the MEP than in healthy subjects. The bimodal distribution found in healthy subjects is preserved in MS. SIGNIFICANCE: Our method for measuring the distribution of MEP latencies is suitable for detecting alterations in the relative contribution of corticospinal tract portions with long MEP latencies to motor conduction.
Abstract:
An optimal multiple testing procedure is identified for linear hypotheses under the general linear model, maximizing the expected number of false null hypotheses rejected at any significance level. The optimal procedure depends on the unknown data-generating distribution, but can be consistently estimated. Drawing information together across many hypotheses, the estimated optimal procedure provides an empirical alternative hypothesis by adapting to underlying patterns of departure from the null. Proposed multiple testing procedures based on the empirical alternative are evaluated through simulations and an application to gene expression microarray data. Compared to a standard multiple testing procedure, it is not unusual for use of an empirical alternative hypothesis to increase by 50% or more the number of true positives identified at a given significance level.
Abstract:
While estimation of the marginal (total) causal effect of a point exposure on an outcome is arguably the most common objective of experimental and observational studies in the health and social sciences, in recent years investigators have also become increasingly interested in mediation analysis. Specifically, upon establishing a non-null total effect of the exposure, investigators routinely wish to make inferences about the direct (indirect) pathway of the effect of the exposure not through (through) a mediator variable that occurs subsequent to the exposure and prior to the outcome. Although powerful semiparametric methodologies have been developed to analyze observational studies and produce doubly robust, highly efficient estimates of the marginal total causal effect, similar methods for mediation analysis have been lacking. This paper therefore develops a general semiparametric framework for obtaining inferences about so-called marginal natural direct and indirect causal effects, while appropriately accounting for a large number of pre-exposure confounding factors for the exposure and the mediator variables. Our analytic framework is particularly appealing because it gives new insights on issues of efficiency and robustness in the context of mediation analysis. In particular, we propose new multiply robust, locally efficient estimators of the marginal natural indirect and direct causal effects, and develop a novel doubly robust sensitivity analysis framework for the assumption of ignorability of the mediator variable.
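For reference, the natural direct and indirect effects discussed in this and the following abstract decompose the total effect; in standard counterfactual notation, with Y_{a,m} the outcome under exposure level a and mediator value m, and M_a the mediator under exposure a:

```latex
\[
\underbrace{E\,[Y_{1,M_1}] - E\,[Y_{0,M_0}]}_{\text{total effect}}
= \underbrace{E\,[Y_{1,M_0}] - E\,[Y_{0,M_0}]}_{\text{natural direct effect}}
+ \underbrace{E\,[Y_{1,M_1}] - E\,[Y_{1,M_0}]}_{\text{natural indirect effect}}
\]
```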
Abstract:
In recent years, researchers in the health and social sciences have become increasingly interested in mediation analysis. Specifically, upon establishing a non-null total effect of an exposure, investigators routinely wish to make inferences about the direct (indirect) pathway of the effect of the exposure not through (through) a mediator variable that occurs subsequent to the exposure and prior to the outcome. Natural direct and indirect effects are of particular interest, as they generally combine to produce the total effect of the exposure and therefore provide insight into the mechanism by which it operates to produce the outcome. A semiparametric theory has recently been proposed for making inferences about marginal mean natural direct and indirect effects in observational studies (Tchetgen Tchetgen and Shpitser, 2011); it delivers multiply robust, locally efficient estimators of the marginal direct and indirect effects, and thus generalizes previous results for total effects to the mediation setting. In this paper we extend the new theory to handle a setting in which a parametric model for the natural direct (indirect) effect within levels of pre-exposure variables is specified and the model for the observed data likelihood is otherwise unrestricted. We show that estimation is generally not feasible in this model because of the curse of dimensionality associated with the required estimation of auxiliary conditional densities or expectations given high-dimensional covariates. We thus consider multiply robust estimation and propose a more general model which assumes that a subset, but not necessarily all, of several working models holds.
Abstract:
BACKGROUND: Estimation of respiratory deadspace is often based on the CO2 expirogram; however, the presence of the CO2 sensor increases equipment deadspace, which in turn influences the breathing pattern and the calculation of lung volume. In addition, it is necessary to correct for the delay between the sensor and flow signals. We propose a new method for estimating effective deadspace using the molar mass (MM) signal from an ultrasonic flowmeter device, which does not require delay correction. We hypothesize that this estimate is correlated with that calculated from the CO2 signal using the Fowler method. METHODS: Breath-by-breath CO2, MM and flow measurements were made in a group of 77 term-born healthy infants. Fowler deadspace (Vd,Fowler) was calculated after correcting for the flow-dependent delay in the CO2 signal. Deadspace estimated from the MM signal (Vd,MM) was defined as the volume passing through the flowhead between the start of expiration and the 10% rise point in MM. RESULTS: A correlation (r = 0.456, P < 0.0001) was found between Vd,MM and Vd,Fowler averaged over all measurements, with a mean difference of -1.4% (95% CI -4.1 to 1.3%). Vd,MM ranged from 6.6 to 11.4 ml between subjects, while Vd,Fowler ranged from 5.9 to 12.0 ml. Mean intra-measurement CV over 5-10 breaths was 7.8 ± 5.6% for Vd,MM and 7.8 ± 3.7% for Vd,Fowler. Mean intra-subject CV was 6.0 ± 4.5% for Vd,MM and 8.3 ± 5.9% for Vd,Fowler. Correcting for the CO2 signal delay resulted in a 12% difference (P = 0.022) in Vd,Fowler. Vd,MM could be obtained more frequently than Vd,Fowler in infants with CLD, albeit with high variability. CONCLUSIONS: Use of the MM signal provides a feasible estimate of Fowler deadspace without introducing additional equipment deadspace. The simple calculation, with no need for delay correction, makes individual adjustment for deadspace in FRC measurements possible. This is especially important given the relatively large range of deadspace seen in this homogeneous group of infants.
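The definition of Vd,MM above translates directly into code. A minimal sketch for a single expiration, assuming synchronized flow and molar-mass samples (the 10%-rise convention on the MM excursion is taken from the abstract; array names and baseline handling are assumptions):

```python
import numpy as np

def vd_mm(flow, mm, dt):
    """Deadspace from the molar mass (MM) signal of one expiration.

    flow : expiratory flow samples (ml/s), from the start of expiration
    mm   : simultaneous molar mass samples (g/mol)
    dt   : sampling interval (s)

    Vd,MM = expired volume from the start of expiration up to the point
    where MM has risen by 10% of its total excursion.
    """
    rise = mm[0] + 0.10 * (mm.max() - mm[0])      # 10% rise point
    idx = np.argmax(mm >= rise)                   # first sample at/after it
    volume = np.cumsum(flow) * dt                 # running expired volume (ml)
    return volume[idx]
```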
Abstract:
Multiple outcomes data are commonly used to characterize treatment effects in medical research, for instance, multiple symptoms to characterize potential remission of a psychiatric disorder. Often a global, i.e. symptom-invariant, treatment effect is evaluated. Such a treatment effect may overgeneralize the effect across the outcomes. On the other hand, individual treatment effects, varying across all outcomes, are complicated to interpret, and their estimation may lose precision relative to a global summary. An effective compromise may be to summarize the treatment effect through patterns of treatment effects, i.e. "differentiated effects." In this paper we propose a two-category model to differentiate treatment effects into two groups. A model-fitting algorithm and a simulation study are presented, and several methods are developed to analyze the heterogeneity present in the treatment effects. The method is illustrated with an analysis of schizophrenia symptom data.
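A toy sketch of the differentiated-effects idea, grouping outcome-specific effect estimates into two categories with a simple 1D two-means split (illustrative only, not the paper's model-fitting algorithm):

```python
import numpy as np

def two_group_effects(effects, n_iter=100):
    """Split outcome-specific treatment effect estimates into two groups.

    Returns (labels, group means): a compromise between one global
    effect and one effect per outcome."""
    e = np.asarray(effects, dtype=float)
    centers = np.array([e.min(), e.max()])          # initial group means
    for _ in range(n_iter):
        labels = np.abs(e[:, None] - centers).argmin(axis=1)
        centers = np.array([e[labels == k].mean() for k in (0, 1)])
    return labels, centers

# Hypothetical per-symptom effect estimates (made-up values):
effects = [0.10, 0.15, 0.12, 0.55, 0.60, 0.58, 0.13]
labels, centers = two_group_effects(effects)
print(labels, centers)
```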