34 results for Remediation time estimation
at Université de Lausanne, Switzerland
Abstract:
Preface: The starting point for this work, and eventually the subject of the whole thesis, was the question of how to estimate the parameters of affine stochastic volatility jump-diffusion models. These models are very important for contingent claim pricing. Their major advantage, the availability of analytical solutions for characteristic functions, has made them the models of choice for many theoretical constructions and practical applications. At the same time, estimating the parameters of stochastic volatility jump-diffusion models is not a straightforward task. The problem stems from the variance process, which is not observable. Several estimation methodologies deal with the estimation of latent variables. One appeared to be particularly interesting: it proposes an estimator that, in contrast to the other methods, requires neither discretization nor simulation of the process, the Continuous Empirical Characteristic Function (ECF) estimator based on the unconditional characteristic function. However, the procedure was derived only for stochastic volatility models without jumps, and it therefore became the subject of my research. This thesis consists of three parts, each written as an independent, self-contained article. At the same time, the questions answered by the second and third parts of this work arise naturally from the issues investigated and the results obtained in the first one. The first chapter is the theoretical foundation of the thesis. It proposes an estimation procedure for stochastic volatility models with jumps in both the asset price and variance processes. The estimation procedure is based on the joint unconditional characteristic function of the stochastic process. The major analytical result of this part, as well as of the whole thesis, is the closed-form expression for the joint unconditional characteristic function of stochastic volatility jump-diffusion models. The empirical part of the chapter suggests that, besides stochastic volatility, jumps in both the mean and the volatility equations are relevant for modelling returns of the S&P500 index, which was chosen as a general representative of the stock asset class. Hence, the next question is which jump process to use to model returns of the S&P500. Within the framework of affine jump-diffusion models, the decision about the jump process boils down to defining the intensity of the compound Poisson process, either a constant or some function of the state variables, and to choosing the distribution of the jump size. While the jump in the variance process is usually assumed to be exponential, there are at least three distributions of the jump size currently used for the asset log-prices: normal, exponential and double exponential. The second part of this thesis shows that normal jumps in the asset log-returns should be used if the S&P500 index is to be modelled by a stochastic volatility jump-diffusion model. This is a surprising result: the exponential distribution has fatter tails, and for this reason either an exponential or a double exponential jump size was expected to provide the best fit of the stochastic volatility jump-diffusion models to the data. The idea of testing the efficiency of the Continuous ECF estimator on simulated data had already appeared when the first estimation results of the first chapter were obtained. In the absence of a benchmark or any grounds for comparison, it is unreasonable to be certain that our parameter estimates coincide with the true parameters of the models.
The conclusion of the second chapter provides one more reason to perform that kind of test. Thus, the third part of this thesis concentrates on the estimation of parameters of stochastic volatility jump-diffusion models on the basis of asset price time series simulated from various "true" parameter sets. The goal is to show that the Continuous ECF estimator based on the joint unconditional characteristic function is capable of finding the true parameters, and the third chapter shows that our estimator indeed has this ability. Once it is clear that the Continuous ECF estimator based on the unconditional characteristic function works, the next question arises naturally: can the computational effort be reduced without affecting the efficiency of the estimator, or can the efficiency of the estimator be improved without dramatically increasing the computational burden? The efficiency of the Continuous ECF estimator depends on the number of dimensions of the joint unconditional characteristic function used in its construction. Theoretically, the more dimensions there are, the more efficient the estimation procedure. In practice, however, this relationship is not so straightforward because of increasing computational difficulties. The second chapter, for example, in addition to the choice of the jump process, discusses the possibility of using the marginal, i.e. one-dimensional, unconditional characteristic function in the estimation instead of the joint, bi-dimensional, unconditional characteristic function. As a result, the preference for one or the other depends on the model to be estimated. Thus, the computational effort can be reduced in some cases without affecting the efficiency of the estimator. Improving the estimator's efficiency by increasing its dimensionality faces more difficulties. The third chapter of this thesis, in addition to what was discussed above, compares the performance of estimators based on bi- and three-dimensional unconditional characteristic functions on simulated data. It shows that the theoretical efficiency of the Continuous ECF estimator based on the three-dimensional unconditional characteristic function is not attainable in practice, at least for the moment, due to limitations of the computing power and optimization toolboxes available to the general public. Thus, the Continuous ECF estimator based on the joint, bi-dimensional, unconditional characteristic function has every reason to exist and to be used for the estimation of parameters of stochastic volatility jump-diffusion models.
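The matching principle behind the Continuous ECF estimator can be illustrated with a short sketch: the empirical characteristic function of the observed series is matched against a closed-form model characteristic function by minimizing a weighted distance over a grid of arguments. The Gaussian model CF, the weighting function and the grid below are illustrative assumptions standing in for the closed-form joint SVJD characteristic function derived in the first chapter.

```python
# A minimal sketch of an (unconditional) empirical characteristic function
# estimator, in the spirit of the Continuous ECF approach described above.
# The SVJD joint characteristic function is replaced here by a simple
# Gaussian CF so the example stays self-contained; "weight", "u_grid" and
# the Gaussian model are illustrative assumptions, not the thesis's model.
import numpy as np
from scipy.optimize import minimize

def empirical_cf(x, u):
    # phi_hat(u) = (1/n) * sum_j exp(i * u * x_j)
    return np.exp(1j * np.outer(u, x)).mean(axis=1)

def model_cf(u, mu, sigma):
    # Closed-form CF of N(mu, sigma^2); stands in for the closed-form
    # joint unconditional CF of an SVJD model.
    return np.exp(1j * u * mu - 0.5 * (sigma * u) ** 2)

def ecf_objective(theta, x, u_grid, weight):
    mu, log_sigma = theta
    diff = empirical_cf(x, u_grid) - model_cf(u_grid, mu, np.exp(log_sigma))
    # Weighted L2 distance between empirical and model CFs,
    # approximating the continuum of moment conditions by a fine grid.
    return np.sum(weight * np.abs(diff) ** 2)

rng = np.random.default_rng(0)
returns = rng.normal(0.05, 0.2, size=5_000)        # simulated "returns"
u_grid = np.linspace(-20, 20, 201)                  # integration grid
weight = np.exp(-0.5 * u_grid ** 2)                 # exponential weighting

res = minimize(ecf_objective, x0=[0.0, np.log(0.1)],
               args=(returns, u_grid, weight), method="Nelder-Mead")
print(res.x[0], np.exp(res.x[1]))                   # close to 0.05 and 0.2
```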
Abstract:
The subtribe Gentianinae comprises ca. 425 species, most of them within the well-studied genus Gentiana and mainly distributed over the Eurasian continent. Phylogenetic relationships between Gentiana and its closest relatives, the climbing gentians (Crawfurdia, Tripterospermum) and the new genus Metagentiana, remain unclear. All three genera were recently found to be polyphyletic, possibly because of poor sampling of Tripterospermum and Crawfurdia. The highest diversity of Gentianinae occurs in the western Himalaya, but the absence of uncontroversial fossil evidence limits our understanding of its biogeography. In the present study, we generated ITS and atpB-rbcL sequences for 19 species of Tripterospermum, 9 of Crawfurdia and 11 of Metagentiana, together representing about 60 percent of the species diversity of these genera. Our results show that only Metagentiana is polyphyletic, being divided into three monophyletic entities. No unambiguous synapomorphies were associated with the three Metagentiana entities. Different combinations of three approximate calibration points were used to generate three divergence time estimation scenarios. Although the dating hypotheses were mostly inconsistent, they concurred in associating the radiation of Gentiana with an orogenic phase of the Himalaya between 15 and 10 million years ago. Our study illustrates the conceptual difficulties in addressing the time frame of diversification in a group lacking fossils of sufficient number and quality.
Abstract:
Il est important pour les entreprises de compresser les informations détaillées dans des sets d'information plus compréhensibles. Au chapitre 1, je résume et structure la littérature sur le sujet « agrégation d'informations » en contrôle de gestion. Je récapitule l'analyse coûts-bénéfices que les comptables internes doivent considérer quand ils décident des niveaux optimaux d'agrégation d'informations. Au-delà de la perspective fondamentale du contenu d'information, les entreprises doivent aussi prendre en considération des perspectives cognitives et comportementales. Je développe ces aspects en faisant la part entre la comptabilité analytique, les budgets et plans, et la mesure de la performance. Au chapitre 2, je me focalise sur un biais spécifique qui se crée lorsque les informations incertaines sont agrégées. Pour les budgets et plans, les entreprises doivent estimer les espérances des coûts et des durées des projets, car l'espérance est la seule mesure de tendance centrale qui est linéaire. À la différence de l'espérance, des mesures comme le mode ou la médiane ne peuvent pas être simplement additionnées. En considérant la forme spécifique des distributions des coûts et des durées, l'addition des modes ou des médianes résultera en une sous-estimation. Par le biais de deux expériences, je remarque que les participants tendent à estimer le mode au lieu de l'espérance, ce qui résulte en une distorsion énorme de l'estimation des coûts et des durées des projets. Je présente également une stratégie afin d'atténuer partiellement ce biais. Au chapitre 3, j'effectue une étude expérimentale pour comparer deux approches d'estimation du temps qui sont utilisées en comptabilité analytique, spécifiquement les « coûts basés sur les activités (ABC) traditionnels » et le « time-driven ABC » (TD-ABC). Au contraire des affirmations soutenues par les défenseurs de l'approche TD-ABC, je constate que cette dernière n'est pas nécessairement appropriée pour les calculs de capacité. Par contre, je démontre que le TD-ABC est plus approprié pour les allocations de coûts que l'approche ABC traditionnelle. - It is essential for organizations to compress detailed sets of information into more comprehensible sets, thereby establishing both sharp data compression and good decision-making. In chapter 1, I review and structure the literature on information aggregation in management accounting research. I outline the cost-benefit trade-off that management accountants need to consider when they decide on the optimal levels of information aggregation. Beyond the fundamental information content perspective, organizations also have to account for cognitive and behavioral perspectives. I elaborate on these aspects, differentiating between research in cost accounting, budgeting and planning, and performance measurement. In chapter 2, I focus on a specific bias that arises when probabilistic information is aggregated. In budgeting and planning, for example, organizations need to estimate mean costs and durations of projects, as the mean is the only measure of central tendency that is linear. Different from the mean, measures such as the mode or median cannot simply be added up. Given the specific shape of cost and duration distributions, estimating mode or median values will result in underestimations of total project costs and durations. In two experiments, I find that participants tend to estimate mode values rather than mean values, resulting in large distortions of estimates for total project costs and durations.
I also provide a strategy that partly mitigates this bias. In the third chapter, I conduct an experimental study to compare two approaches to time estimation for cost accounting, i.e., traditional activity-based costing (ABC) and time-driven ABC (TD-ABC). Contrary to claims made by proponents of TD-ABC, I find that TD-ABC is not necessarily suitable for capacity computations. However, I also provide evidence that TD-ABC seems better suited for cost allocations than traditional ABC.
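The aggregation bias discussed in chapter 2 can be made concrete with a small simulation: for right-skewed cost distributions, the sum of the modes (or medians) of individual work packages understates the expected total project cost, whereas the sum of the means does not. The lognormal cost distributions and the number of work packages below are arbitrary illustrative assumptions.

```python
# A small illustrative simulation of the aggregation bias: adding up modes
# or medians of skewed work-package costs understates the expected total,
# while adding up means does not. All parameters are made-up assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_packages, n_sims = 10, 100_000

# Simulated cost distributions for 10 project work packages.
costs = rng.lognormal(mean=np.log(100.0), sigma=0.8,
                      size=(n_sims, n_packages))

mean_per_package = costs.mean(axis=0)               # linear: sums correctly
median_per_package = np.median(costs, axis=0)
# Mode of a lognormal(mu, sigma): exp(mu - sigma^2)
mode_per_package = np.exp(np.log(100.0) - 0.8 ** 2)

print("true expected total :", costs.sum(axis=1).mean())
print("sum of means        :", mean_per_package.sum())         # matches
print("sum of medians      :", median_per_package.sum())       # too low
print("sum of modes        :", mode_per_package * n_packages)  # far too low
```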
Abstract:
Pulse wave velocity (PWV) is a surrogate of arterial stiffness and represents a non-invasive marker of cardiovascular risk. The non-invasive measurement of PWV requires tracking the arrival time of pressure pulses recorded in vivo, commonly referred to as pulse arrival time (PAT). In the state of the art, PAT is estimated by identifying a characteristic point of the pressure pulse waveform. This paper demonstrates that for ambulatory scenarios, where signal-to-noise ratios are below 10 dB, the repeatability of PAT measurements based on characteristic point identification degrades drastically. Hence, we introduce a novel family of PAT estimators based on parametric modeling of the anacrotic phase of a pressure pulse. In particular, we propose a parametric PAT estimator (TANH) that shows high correlation with the Complior® characteristic point D1 (CC = 0.99), increases noise robustness, and reduces by a factor of five the number of heartbeats required to obtain reliable PAT measurements.
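The family of estimators described above fits a parametric template to the rising edge of the pulse rather than picking a single characteristic point. A minimal sketch of this idea, assuming a hyperbolic-tangent template and taking the fitted inflection time as the arrival marker (both illustrative assumptions rather than the paper's exact formulation), could look as follows.

```python
# A minimal sketch of a parametric pulse-arrival-time (PAT) estimator: the
# rising (anacrotic) edge of a pressure pulse is fitted with a tanh template
# and the arrival time is read from the fitted parameters. Template form and
# arrival marker are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

def tanh_pulse(t, baseline, amplitude, t0, rise):
    # Sigmoidal model of the anacrotic phase; t0 is the inflection time.
    return baseline + 0.5 * amplitude * (1.0 + np.tanh((t - t0) / rise))

def estimate_pat(t, pulse):
    p0 = [pulse.min(), np.ptp(pulse), t[np.argmax(np.gradient(pulse))], 0.02]
    popt, _ = curve_fit(tanh_pulse, t, pulse, p0=p0)
    return popt[2]          # fitted inflection time used as the PAT estimate

# Synthetic noisy pulse upstroke arriving at about 0.25 s.
t = np.linspace(0.0, 0.5, 500)
clean = tanh_pulse(t, 60.0, 40.0, 0.25, 0.03)
noisy = clean + np.random.default_rng(2).normal(0.0, 3.0, t.size)
print(estimate_pat(t, noisy))   # more noise-robust than point picking
```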
Abstract:
Introduction: Estimation of the time since death based on the gastric content is still a controversial subject. Many studies have been carried out, all leaving the same uncertainty: the intra- and inter-individual variability. Aim: Following a homicide case in which a specialized gastroenterologist was called upon to estimate the time of death based on the gastric contents and his experience in clinical practice, we decided to review the scientific literature to see whether that method has become more reliable nowadays. Material and methods: We selected articles from 1979 onwards that describe the estimation of the gastric emptying rate according to several factors, as well as the forensic articles on the estimation of the time of death in relation to the gastric content. Results: Most of the articles cited by the specialized gastroenterologist were studies of living healthy people and the effects of several factors (medication, supine versus upside-down position, body mass index or different types of food). The forensic articles frequently concluded that the estimation of the time since death by analyzing the gastric content can be used, but not as the sole method. Conclusion: Estimation of the time since death by analysis of the gastric contents is a method that can still be used nowadays. However, it cannot be the only method, as the inter- and intra-individual variability remains an important source of bias.
Abstract:
The geochemical compositions of biogenic carbonates are increasingly used for palaeoenvironmental reconstructions. The skeletal δ18O-temperature relationship depends on water salinity, so many recent studies have focused on Mg/Ca and Sr/Ca ratios, because those ratios in water do not change significantly on short time scales. Thus, those elemental ratios are considered to be good palaeotemperature proxies in many biominerals, although their use remains ambiguous in bivalve shells. Here, we present high-resolution Mg/Ca ratios of two modern species of juvenile and adult oyster shells, Crassostrea gigas and Ostrea edulis. These specimens were grown in controlled conditions for over one year at two different locations. In situ monthly Mn-marking of the shells was used for day calibration. The daily Mg/Ca ratios in the shell were measured with an electron microprobe. The high-frequency Mg/Ca variation of all specimens displays good synchronism with lunar cycles, suggesting that tides strongly influence the incorporation of Mg/Ca into the shells. Highly significant correlation coefficients (0.70 < R < 0.83, p < 0.0001) between the Mg/Ca ratios and the seawater temperature are obtained only for juvenile C. gigas samples, while metabolic control of Mg/Ca incorporation and lower shell growth rates preclude the use of the Mg/Ca ratio in adult shells as a palaeothermometer. Data from three juvenile C. gigas shells from the two study sites are selected to establish the relationship T = 3.77 Mg/Ca + 1.88, where T is in °C and Mg/Ca is in mmol/mol.
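The calibration quoted at the end of the abstract can be applied directly; a tiny helper, using made-up Mg/Ca values purely for illustration, is shown below.

```python
# A tiny helper applying the Mg/Ca palaeothermometer calibration quoted
# above (T = 3.77 * Mg/Ca + 1.88, T in degrees C, Mg/Ca in mmol/mol), which
# is reported for juvenile Crassostrea gigas shells. The example ratios are
# hypothetical values used only to show the conversion.
def mgca_to_temperature(mgca_mmol_per_mol: float) -> float:
    """Convert a shell Mg/Ca ratio (mmol/mol) to water temperature (deg C)."""
    return 3.77 * mgca_mmol_per_mol + 1.88

for ratio in (3.0, 4.5, 6.0):          # hypothetical Mg/Ca measurements
    print(ratio, "->", round(mgca_to_temperature(ratio), 1), "deg C")
```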
Abstract:
Captan and folpet are two fungicides widely used in agriculture, but biomonitoring data are mostly limited to measurements of captan metabolite concentrations in spot urine samples of workers, which complicates interpretation of results in terms of internal dose estimation, daily variations according to the tasks performed, and the most plausible routes of exposure. This study aimed at performing repeated biological measurements of exposure to captan and folpet in field workers (i) to better assess internal dose along with the main routes of entry according to tasks and (ii) to establish the most appropriate sampling and analysis strategies. The detailed urinary excretion time courses of specific and non-specific biomarkers of exposure to captan and folpet were established in tree farmers (n = 2) and grape growers (n = 3) over a typical workweek (seven consecutive days), including spraying and harvest activities. The impact of the expression of urinary measurements [excretion rate values adjusted or not for creatinine, or cumulative amounts over given time periods (8, 12, and 24 h)] was evaluated. Absorbed doses and main routes of entry were then estimated from the 24-h cumulative urinary amounts through the use of a kinetic model. The time courses showed that exposure levels were higher during spraying than during harvest activities. Model simulations also suggested a limited absorption in the studied workers and an exposure mostly through the dermal route. They further pointed out the advantage of expressing biomarker values as body-weight-adjusted amounts in repeated 24-h urine collections, as compared to concentrations or excretion rates in spot samples, without the necessity for creatinine corrections.
Abstract:
Functional magnetic resonance imaging (fMRI) was used to measure changes in cerebral activity in patients with schizophrenia after participation in the Cognitive Remediation Program for Schizophrenia and other related disorders (RECOS). As RECOS therapists make use of problem-solving and verbal mediation techniques, known to be beneficial in the rehabilitation of dysexecutive syndromes, we expected an increased activation of frontal areas after remediation. Executive functioning and cerebral activation during a covert verbal fluency task were measured in eight patients with schizophrenia before (T1) and after (T2) 14 weeks of RECOS therapy. The same measures were recorded in eight patients with schizophrenia who did not participate in RECOS at the same intervals of time (TAU group). Increased activation in Broca's area, as well as improvements in performance of executive/frontal tasks, was observed after cognitive training. Metacognitive techniques of verbalization are hypothesized to be the main factor underlying the brain changes observed in the present study.
Abstract:
BACKGROUND: In vitro aggregating brain cell cultures containing all types of brain cells have been shown to be useful for neurotoxicological investigations. The cultures are used for the detection of nervous system-specific effects of compounds by measuring multiple endpoints, including changes in enzyme activities. Concentration-dependent neurotoxicity is determined at several time points. METHODS: A Markov model was set up to describe the dynamics of brain cell populations exposed to potentially neurotoxic compounds. Brain cells were assumed to be either in a healthy or a stressed state, with only stressed cells being susceptible to cell death. Cells could switch between these states or die, with concentration-dependent transition rates. Since cell numbers were not directly measurable, intracellular lactate dehydrogenase (LDH) activity was used as a surrogate. Assuming that changes in cell numbers are proportional to changes in intracellular LDH activity, stochastic enzyme activity models were derived. Maximum likelihood and least squares regression techniques were applied to estimate the transition rates. Likelihood ratio tests were performed to test hypotheses about the transition rates. Simulation studies were used to investigate the performance of the transition rate estimators and to analyze the error rates of the likelihood ratio tests. The stochastic time-concentration activity model was applied to intracellular LDH activity measurements after 7 and 14 days of continuous exposure to propofol. The model describes transitions from healthy to stressed cells and from stressed cells to death. RESULTS: The model predicted that propofol would affect stressed cells more than healthy cells. Increasing the propofol concentration from 10 to 100 μM reduced the mean waiting time for the transition to the stressed state by 50%, from 14 to 7 days, whereas the mean duration to cellular death decreased more dramatically, from 2.7 days to 6.5 hours. CONCLUSION: The proposed stochastic modeling approach can be used to discriminate between different biological hypotheses regarding the effect of a compound on the transition rates. The effects of different compounds on the transition rate estimates can be quantitatively compared. Data can be extrapolated at late measurement time points to investigate whether costly and time-consuming long-term experiments could possibly be eliminated.
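The two-step chain described above (healthy to stressed, stressed to dead) has a simple expectation that makes the reported mean waiting times easy to interpret: each mean waiting time is the reciprocal of the corresponding transition rate, and the expected live-cell fraction is proportional to the expected LDH activity. The exponential concentration dependence and the numerical values in the sketch below are illustrative assumptions, not the fitted model.

```python
# A minimal sketch of the healthy -> stressed -> dead Markov chain described
# above, with concentration-dependent transition rates. The exponential
# dependence of the rates on concentration and all numbers are illustrative
# assumptions; mean waiting times are the reciprocals of the rates, and the
# expected live fraction stands in for intracellular LDH activity.
import numpy as np

def transition_rates(concentration_uM, base=(1/14.0, 1/2.7), slope=(0.01, 0.02)):
    # lambda_1: healthy -> stressed, lambda_2: stressed -> dead (per day).
    lam1 = base[0] * np.exp(slope[0] * concentration_uM)
    lam2 = base[1] * np.exp(slope[1] * concentration_uM)
    return lam1, lam2

def expected_fractions(t_days, lam1, lam2):
    # Closed-form solution of dH/dt = -lam1*H, dS/dt = lam1*H - lam2*S.
    H = np.exp(-lam1 * t_days)
    S = lam1 / (lam2 - lam1) * (np.exp(-lam1 * t_days) - np.exp(-lam2 * t_days))
    return H, S                      # live fraction = H + S

lam1, lam2 = transition_rates(100.0)
print("mean days healthy -> stressed:", 1.0 / lam1)
print("mean days stressed -> dead   :", 1.0 / lam2)
for t in (7.0, 14.0):
    H, S = expected_fractions(t, lam1, lam2)
    print(f"day {t:>4}: expected live fraction {H + S:.2f}")
```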
Abstract:
Introduction: Osteoporosis (OP) is a systemic skeletal disease characterized by low bone mineral density (BMD) and micro-architectural (MA) deterioration. Clinical risk factors (CRF) are often used as an approximation of MA. MA can now be evaluated in daily practice with the Trabecular Bone Score (TBS). TBS is a novel grey-level texture measurement reflecting bone micro-architecture, based on the use of experimental variograms of 2D projection images. TBS is very simple to obtain by reanalyzing a lumbar DXA scan. TBS has proven to have diagnostic and prognostic value, partially independent of CRF and BMD. The aim of the OsteoLaus cohort is to combine in daily practice the CRF and the information given by DXA (BMD, TBS and vertebral fracture assessment (VFA)) to better identify women at high fracture risk. Method: The OsteoLaus cohort (1400 women aged 50 to 80 years living in Lausanne, Switzerland) started in 2010. This study is derived from the COLAUS cohort, which started in Lausanne in 2003. The main goal of COLAUS is to obtain information on the epidemiology and genetic determinants of cardiovascular risk in 6700 men and women. CRF for OP, bone ultrasound of the heel, lumbar spine and hip BMD, VFA by DXA and MA evaluation by TBS are recorded in OsteoLaus. Preliminary results are reported. Results: We included 631 women: mean age 67.4±6.7 years, BMI 26.1±4.6, mean lumbar spine BMD 0.943±0.168 (T-score -1.4 SD), TBS 1.271±0.103. As expected, the correlation between BMD and site-matched TBS is low (r2 = 0.16). The prevalence of VFx grade 2/3, major OP Fx and all OP Fx is 8.4%, 17.0% and 26.0%, respectively. Age- and BMI-adjusted ORs (per SD decrease) are 1.8 (1.2-2.5), 1.6 (1.2-2.1) and 1.3 (1.1-1.6) for BMD for the different categories of fractures, and 2.0 (1.4-3.0), 1.9 (1.4-2.5) and 1.4 (1.1-1.7) for TBS, respectively. Considered alone, a BMD < -2.5 SD or a TBS < 1.200 identifies only 32 to 37% of women with OP Fx. If we combine a BMD < -2.5 SD or a TBS < 1.200, 54 to 60% of women with an osteoporotic Fx are identified. Conclusion: As in the already published studies, these preliminary results confirm the partial independence between BMD and TBS. More importantly, combining TBS with BMD significantly increases the identification of women with prevalent OP Fx who would have been misclassified by BMD alone. For the first time we are able to obtain complementary information about fracture (VFA), density (BMD), and micro- and macro-architecture (TBS & HAS) from a simple, cheap, low-ionizing-radiation device: DXA. Such complementary information is very useful for the patient in daily practice and, moreover, will likely have an impact on cost-effectiveness analyses.
Abstract:
RÉSUMÉ Le but d'un traitement antimicrobien est d'éradiquer une infection bactérienne. Cependant, il est souvent difficile d'en évaluer rapidement l'efficacité en utilisant les techniques standard. L'estimation de la viabilité bactérienne par marqueurs moléculaires permettrait d'accélérer le processus. Ce travail étudie donc la possibilité d'utiliser le RNA ribosomal (rRNA) à cet effet. Des cultures de Streptococcus gordonii sensibles (parent Wt) et tolérantes (mutant Tol1) à l'action bactéricide de la pénicilline ont été exposées à différents antibiotiques. La survie bactérienne au cours du temps a été déterminée en comparant deux méthodes. La méthode de référence par compte viable a été comparée à une méthode moléculaire consistant à amplifier par PCR quantitative en temps réel une partie du génome bactérien. La cible choisie devait refléter la viabilité cellulaire et par conséquent être synthétisée de manière constitutive lors de la vie de la bactérie et être détruite rapidement lors de la mort cellulaire. Le choix s'est porté sur un fragment du gène 16S-rRNA. Ce travail a permis de valider ce choix en corrélant ce marqueur moléculaire à la viabilité bactérienne au cours d'un traitement antibiotique bactéricide. De manière attendue, les S. gordonii sensibles à la pénicilline ont perdu ≥ 4 log10 CFU/ml après 48 heures de traitement par pénicilline, alors que le mutant tolérant Tol1 en a perdu ≤ 1 log10 CFU/ml. De manière intéressante, la quantité de marqueur a augmenté proportionnellement au compte viable durant la phase de croissance bactérienne. Après administration du traitement antibiotique, l'évolution du marqueur dépendait de la capacité de la bactérie à survivre à l'action de l'antibiotique. Stable lors du traitement des souches tolérantes, la quantité de marqueur détectée diminuait de manière proportionnelle au compte viable lors du traitement des souches sensibles. Cette corrélation s'est confirmée lors de l'utilisation d'autres antibiotiques bactéricides. En conclusion, l'amplification par PCR du RNA ribosomal 16S permet d'évaluer rapidement la viabilité bactérienne au cours d'un traitement antibiotique en évitant le recours à la mise en culture, dont les résultats ne sont obtenus qu'après plus de 24 heures. Cette méthode offre donc au clinicien une évaluation rapide de l'efficacité du traitement, particulièrement dans les situations, comme le choc septique, où l'initiation sans délai d'un traitement efficace est une des conditions essentielles du succès thérapeutique. ABSTRACT Assessing bacterial viability by molecular markers might help accelerate the measurement of antibiotic-induced killing. This study investigated whether ribosomal RNA (rRNA) could be suitable for this purpose. Cultures of penicillin-susceptible and penicillin-tolerant (Tol1 mutant) Streptococcus gordonii were exposed to the mechanistically different antibiotics penicillin and levofloxacin. Bacterial survival was assessed by viable counts and compared to quantitative real-time PCR amplification of either the 16S-rRNA genes (rDNA) or the 16S rRNA, following reverse transcription. Penicillin-susceptible S. gordonii lost ≥ 4 log10 CFU/ml of viability over 48 h of penicillin treatment. In comparison, the Tol1 mutant lost ≤ 1 log10 CFU/ml. Amplification of a 427-base fragment of 16S rDNA yielded amplicons that increased proportionally to viable counts during bacterial growth, but did not decrease during drug-induced killing.
In contrast, the same 427-base fragment amplified from 16S rRNA paralleled both bacterial growth and drug-induced killing. It also differentiated between penicillin-induced killing of the parent and the Tol1 mutant (≥ 4 log10 CFU/ml and ≤ 1 log10 CFU/ml, respectively), and detected killing by the mechanistically unrelated levofloxacin. Since large fragments of polynucleotides might be degraded faster than smaller fragments, the experiments were repeated by amplifying a 119-base region internal to the original 427-base fragment. The amount of 119-base amplicons increased proportionally to viability during growth, but remained stable during drug treatment. Thus, 16S rRNA was a marker of antibiotic-induced killing, but the size of the amplified fragment was critical to differentiate between live and dead bacteria.
Abstract:
Time-lapse geophysical data acquired during transient hydrological experiments are being increasingly employed to estimate subsurface hydraulic properties at the field scale. In particular, crosshole ground-penetrating radar (GPR) data, collected while water infiltrates into the subsurface either by natural or artificial means, have been demonstrated in a number of studies to contain valuable information concerning the hydraulic properties of the unsaturated zone. Previous work in this domain has considered a variety of infiltration conditions and different amounts of time-lapse GPR data in the estimation procedure. However, the particular benefits and drawbacks of these different strategies as well as the impact of a variety of key and common assumptions remain unclear. Using a Bayesian Markov-chain-Monte-Carlo stochastic inversion methodology, we examine in this paper the information content of time-lapse zero-offset-profile (ZOP) GPR traveltime data, collected under three different infiltration conditions, for the estimation of van Genuchten-Mualem (VGM) parameters in a layered subsurface medium. Specifically, we systematically analyze synthetic and field GPR data acquired under natural loading and two rates of forced infiltration, and we consider the value of incorporating different amounts of time-lapse measurements into the estimation procedure. Our results confirm that, for all infiltration scenarios considered, the ZOP GPR traveltime data contain important information about subsurface hydraulic properties as a function of depth, with forced infiltration offering the greatest potential for VGM parameter refinement because of the higher stressing of the hydrological system. Considering greater amounts of time-lapse data in the inversion procedure is also found to help refine VGM parameter estimates. Quite importantly, however, inconsistencies observed in the field results point to the strong possibility that posterior uncertainties are being influenced by model structural errors, which in turn underlines the fundamental importance of a systematic analysis of such errors in future related studies.
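For readers less familiar with the forward problem underlying this kind of inversion, the chain from van Genuchten-Mualem parameters to ZOP GPR traveltimes can be sketched as follows: the retention curve maps suction to water content, a petrophysical mixing model maps water content to bulk permittivity, and permittivity fixes the radar velocity along each borehole-to-borehole ray. The CRIM mixing formula and all numerical values below are illustrative assumptions, not the paper's exact petrophysical relationships.

```python
# A schematic sketch of the forward link exploited in the study above:
# van Genuchten-Mualem (VGM) parameters control the water-content profile,
# water content controls the bulk dielectric permittivity, and permittivity
# controls the zero-offset GPR traveltime between the boreholes. The CRIM
# mixing model and all parameter values are illustrative assumptions.
import numpy as np

def vg_water_content(h, theta_r, theta_s, alpha, n):
    # van Genuchten retention function, h = suction head (positive, m).
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

def crim_permittivity(theta, porosity=0.35, eps_s=5.0, eps_w=81.0):
    # Complex refractive index model (CRIM) for bulk relative permittivity.
    sqrt_eps = (1 - porosity) * np.sqrt(eps_s) + theta * np.sqrt(eps_w) \
               + (porosity - theta) * 1.0
    return sqrt_eps ** 2

def zop_traveltime(theta, separation_m=5.0):
    c = 0.2998            # speed of light in vacuum, m/ns
    velocity = c / np.sqrt(crim_permittivity(theta))
    return separation_m / velocity            # one traveltime per depth level

suction = np.linspace(0.1, 3.0, 10)           # hypothetical suction profile (m)
theta = vg_water_content(suction, theta_r=0.05, theta_s=0.35, alpha=2.0, n=1.8)
print(np.round(zop_traveltime(theta), 1))     # traveltimes in nanoseconds
```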
Abstract:
The clinical demand for a device to monitor blood pressure (BP) in ambulatory scenarios with minimal use of inflation cuffs is increasing. Based on the so-called pulse wave velocity (PWV) principle, this paper introduces and evaluates a novel concept of BP monitor that can be fully integrated within a chest sensor. After a preliminary calibration, the sensor provides non-occlusive beat-by-beat estimations of mean arterial pressure (MAP) by measuring the pulse transit time (PTT) of arterial pressure pulses travelling from the ascending aorta towards the subcutaneous vasculature of the chest. In a cohort of 15 healthy male subjects, a total of 462 simultaneous readings consisting of reference MAP and chest PTT were acquired. Each subject was recorded on three different days: D, D+3 and D+14. Overall, the implemented protocol induced MAP values ranging from 80 ± 6 mmHg at baseline to 107 ± 9 mmHg during isometric handgrip maneuvers. Agreement between reference and chest-sensor MAP values was tested using the intraclass correlation coefficient (ICC = 0.78) and Bland-Altman analysis (mean error = 0.7 mmHg, standard deviation = 5.1 mmHg). The cumulative percentage of MAP values provided by the chest sensor falling within ±5 mmHg of the reference MAP readings was 70%, within ±10 mmHg 91%, and within ±15 mmHg 98%. These results indicate that the chest sensor complies with the British Hypertension Society (BHS) requirements for Grade A BP monitors when applied to MAP readings. Grade A performance was maintained even two weeks after the initial subject-dependent calibration had been performed. In conclusion, this paper introduces a sensor and a calibration strategy to perform MAP measurements at the chest. The encouraging performance of the presented technique paves the way towards an ambulatory-compliant, continuous and non-occlusive BP monitoring system.
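The subject-dependent calibration mentioned above pairs a few reference MAP readings with simultaneously measured PTT values and fits a simple model that is then used beat by beat. The inverse-PTT linear model and the numbers in the sketch below are illustrative assumptions; the abstract does not specify the actual calibration function.

```python
# A minimal sketch of the kind of subject-dependent calibration step a
# PTT-based MAP monitor relies on: fit a simple model to a few simultaneous
# (PTT, MAP) reference pairs, then convert each new pulse transit time into
# a MAP estimate. Model form and values are illustrative assumptions.
import numpy as np

def fit_calibration(ptt_ms, map_mmhg):
    # Least-squares fit of MAP ~ a * (1 / PTT) + b.
    A = np.column_stack([1.0 / np.asarray(ptt_ms), np.ones(len(ptt_ms))])
    (a, b), *_ = np.linalg.lstsq(A, np.asarray(map_mmhg), rcond=None)
    return a, b

def estimate_map(ptt_ms, a, b):
    return a / np.asarray(ptt_ms) + b

# Hypothetical calibration pairs recorded at rest and during handgrip.
ptt_ref = [95.0, 90.0, 85.0, 78.0, 72.0]        # ms
map_ref = [82.0, 86.0, 92.0, 99.0, 107.0]       # mmHg
a, b = fit_calibration(ptt_ref, map_ref)

print(np.round(estimate_map([92.0, 75.0], a, b), 1))  # beat-by-beat MAP
```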
Abstract:
Biochemical systems are commonly modelled by systems of ordinary differential equations (ODEs). A particular class of such models, called S-systems, has recently gained popularity in biochemical system modelling. The parameters of an S-system are usually estimated from time-course profiles. However, finding these estimates is a difficult computational problem. Moreover, although several methods have recently been proposed to solve this problem for ideal profiles, relatively little progress has been reported for noisy profiles. We describe a special feature of a Newton-flow optimisation problem associated with S-system parameter estimation. This enables us to significantly reduce the search space, and it also lends itself to parameter estimation for noisy data. We illustrate the applicability of our method by applying it to noisy time-course data synthetically produced from previously published 4- and 30-dimensional S-systems. In addition, we propose an extension of our method that allows the detection of network topologies for small S-systems. We introduce a new method for estimating S-system parameters from time-course profiles. We show that the performance of this method compares favorably with competing methods for ideal profiles, and that it also allows the determination of parameters for noisy profiles.
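For readers unfamiliar with the model class, an S-system specifies each rate as a difference of two power-law terms, dX_i/dt = alpha_i * prod_j X_j^(g_ij) - beta_i * prod_j X_j^(h_ij), and parameter estimation means recovering (alpha, beta, g, h) from sampled trajectories. The two-variable toy system and the plain least-squares fit below are illustrative stand-ins for the Newton-flow approach described in the abstract.

```python
# A minimal sketch of S-system parameter estimation from a time-course
# profile: simulate a 2-dimensional S-system, add noise, and recover the
# parameters by ordinary nonlinear least squares. The toy system, the
# starting guess and the plain least-squares fit are illustrative
# assumptions, not the Newton-flow method of the paper.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def s_system_rhs(t, x, alpha, beta, g, h):
    x = np.maximum(x, 1e-9)                       # keep powers well defined
    prod_g = np.prod(x ** g, axis=1)              # production terms
    prod_h = np.prod(x ** h, axis=1)              # degradation terms
    return alpha * prod_g - beta * prod_h

def simulate(theta, t_eval, x0):
    alpha, beta = theta[:2], theta[2:4]
    g = theta[4:8].reshape(2, 2)
    h = theta[8:12].reshape(2, 2)
    sol = solve_ivp(s_system_rhs, (t_eval[0], t_eval[-1]), x0,
                    t_eval=t_eval, args=(alpha, beta, g, h), rtol=1e-6)
    return sol.y.T

# "True" toy parameters used to generate a noisy time-course profile.
theta_true = np.array([2.0, 1.5, 1.0, 1.0,   0.0, -0.8, 0.5, 0.0,
                       0.5, 0.0, 0.0, 0.5])
t_eval = np.linspace(0.0, 5.0, 25)
x0 = np.array([0.5, 1.0])
data = simulate(theta_true, t_eval, x0)
data = data + np.random.default_rng(3).normal(0.0, 0.01, data.shape)

fit = least_squares(lambda th: (simulate(th, t_eval, x0) - data).ravel(),
                    x0=theta_true * 0.8, max_nfev=100)  # start near the truth
print(np.round(fit.x, 2))
```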