904 results for: Linear differential systems, time window, spectral approximation, waveform relaxation
Abstract:
Mixed integer programming is today one of the most widely used techniques for dealing with hard optimization problems. On the one hand, many practical optimization problems arising from real-world applications (e.g., scheduling, project planning, transportation, telecommunications, economics and finance, timetabling) can be easily and effectively formulated as Mixed Integer Linear Programs (MIPs). On the other hand, more than 50 years of intensive research have dramatically improved the capability of the current generation of MIP solvers to tackle hard problems in practice. However, many questions are still open and not fully understood, and the mixed integer programming community remains very active in trying to answer them. As a consequence, a huge number of papers continue to appear, and new intriguing questions arise every year. When dealing with MIPs, we have to distinguish between two different scenarios. The first arises when we are asked to handle a general MIP and cannot assume any special structure for the given problem. In this case, a Linear Programming (LP) relaxation and some integrality requirements are all we have for tackling the problem, and we are "forced" to use general purpose techniques. The second arises when mixed integer programming is used to address a problem with some special structure. In this context, polyhedral analysis and other theoretical and practical considerations are typically exploited to devise special purpose techniques. This thesis tries to give some insights into both of the situations mentioned above. The first part of the work is focused on general purpose cutting planes, which are probably the key ingredient behind the success of the current generation of MIP solvers.
Chapter 1 presents a quick overview of the main ingredients of a branch-and-cut algorithm, while Chapter 2 recalls some results from the literature on disjunctive cuts and their connections with Gomory mixed integer cuts. Chapter 3 presents a theoretical and computational investigation of disjunctive cuts. In particular, we analyze the connections between different normalization conditions (i.e., conditions to truncate the cone associated with disjunctive cutting planes) and other crucial aspects such as cut rank, cut density and cut strength. We give a theoretical characterization of weak rays of the disjunctive cone that lead to dominated cuts, and propose a practical method to possibly strengthen the cuts arising from such weak extremal solutions. Further, we point out how redundant constraints can affect the quality of the generated disjunctive cuts, and discuss possible ways to cope with them. Finally, Chapter 4 presents some preliminary ideas in the context of multiple-row cuts. Very recently, a series of papers has drawn attention to the possibility of generating cuts using more than one row of the simplex tableau at a time. Several interesting theoretical results have been presented in this direction, often revisiting and recalling important results discovered more than 40 years ago. However, it is not at all clear how these results can be exploited in practice. As stated, the chapter is still a work in progress and simply presents a possible way of generating two-row cuts, arising from lattice-free triangles, from the simplex tableau, together with some preliminary computational results. The second part of the thesis is instead focused on the heuristic and exact exploitation of integer programming techniques for hard combinatorial optimization problems in the context of routing applications. Chapters 5 and 6 present an integer linear programming local search algorithm for Vehicle Routing Problems (VRPs).
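The single-row cut generation that Chapters 2-4 build on can be illustrated with the classical Gomory fractional cut, read directly off a simplex-tableau row. The sketch below is illustrative only: the row values are hypothetical and the function is not taken from the thesis.

```python
import math

def gomory_fractional_cut(tableau_row, rhs):
    """Coefficients of the Gomory fractional cut  sum_j f_j * x_j >= f_0
    read off a tableau row with fractional right-hand side:
    f_j = frac(a_j) for each nonbasic coefficient, f_0 = frac(b)."""
    frac = lambda v: v - math.floor(v)
    return [frac(a) for a in tableau_row], frac(rhs)

# Hypothetical row: nonbasic coefficients (0.25, -1.75), right-hand side 3.5.
coeffs, f0 = gomory_fractional_cut([0.25, -1.75], 3.5)
# coeffs == [0.25, 0.25], f0 == 0.5, i.e. the cut 0.25*x1 + 0.25*x2 >= 0.5
```

Disjunctive cuts generalize this construction by optimizing over a cone of valid inequalities derived from a disjunction, which is where the normalization conditions studied in Chapter 3 enter.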
The overall procedure follows a general destroy-and-repair paradigm (i.e., the current solution is first randomly destroyed and then repaired in an attempt to find a new improved solution) in which a class of exponential neighborhoods is iteratively explored by heuristically solving an integer programming formulation through a general purpose MIP solver. Chapters 7 and 8 deal with exact branch-and-cut methods. Chapter 7 presents an extended formulation for the Traveling Salesman Problem with Time Windows (TSPTW), a generalization of the well-known TSP in which each node must be visited within a given time window. The polyhedral approaches proposed for this problem in the literature typically follow the one that has proven extremely effective in the classical TSP context. Here we present a (quite) general idea based on a relaxed discretization of time windows. This idea leads to a stronger formulation and to stronger valid inequalities, which are then separated within the classical branch-and-cut framework. Finally, Chapter 8 addresses branch-and-cut in the context of Generalized Minimum Spanning Tree Problems (GMSTPs), a class of NP-hard generalizations of the classical minimum spanning tree problem. In this chapter, we show how some basic ideas (in particular, the usage of general purpose cutting planes) can be useful to improve on the branch-and-cut methods proposed in the literature.
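The destroy-and-repair loop can be sketched as follows. This is a toy single-route version with a greedy cheapest-insertion repair standing in for the MIP-based neighborhood exploration described in the thesis; all names and parameters are illustrative.

```python
import random

def tour_length(tour, dist):
    """Length of a closed tour under a distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def destroy_and_repair(tour, dist, iters=200, destroy_frac=0.3, seed=0):
    """Toy destroy-and-repair loop: remove a random subset of nodes
    ("destroy") and greedily reinsert each at its cheapest position
    ("repair"); keep only improving solutions."""
    rng = random.Random(seed)
    best, best_len = tour[:], tour_length(tour, dist)
    for _ in range(iters):
        cur = best[:]
        removed = rng.sample(cur, max(1, int(destroy_frac * len(cur))))
        for v in removed:
            cur.remove(v)
        for v in removed:  # cheapest-insertion repair (stand-in for the MIP step)
            pos = min(range(len(cur) + 1),
                      key=lambda i: tour_length(cur[:i] + [v] + cur[i:], dist))
            cur.insert(pos, v)
        cur_len = tour_length(cur, dist)
        if cur_len < best_len:
            best, best_len = cur, cur_len
    return best, best_len
```

On a small instance, passing any starting tour and a symmetric distance matrix returns a tour no longer than the input, since only improving solutions are accepted.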
Abstract:
In this work, aqueous suspensions of charge-stabilized colloidal particles were investigated with respect to their behavior under the influence of electric fields. In particular, the electrophoretic mobility µ was studied over a wide range of particle concentrations in order to compare the individual behavior of single particles with the hitherto little-studied collective behavior of particle ensembles (specifically, of fluid-like or crystalline-ordered ensembles). For this purpose, a super-heterodyne Doppler velocimetric light-scattering experiment with integral and local data acquisition was designed, allowing the velocity of the particles in electric fields to be studied. The experiment was first successfully tested in the regime of non-ordered and fluid-ordered suspensions. With this instrument, the electrophoretic behavior of crystalline-ordered suspensions could then be investigated for the first time. A complex flow behavior was observed and documented in detail, including effects not previously reported in this context, such as block flow, shear-band formation, shear melting, and elastic resonances. On the other hand, this behavior made it necessary to develop a new evaluation routine for µ in the crystalline state, for which the heterodyne light-scattering theory had to be extended to the super-heterodyne case with shear. This was first carried out for non-ordered systems. This approximate description was sufficient to interpret the light-scattering behavior of sheared crystalline systems under the given experimental conditions. As a further important result, a general mobility-concentration curve could thus be obtained. It shows the already known increase at low particle concentrations and a plateau at intermediate concentrations; at high concentrations the mobility decreases again. For the interpretation of this behavior in terms of particle charge, only theories for non-interacting particles are currently available. Applying these, one finds a surprisingly good agreement between the electrophoretically determined particle charge Z*µ and numerically determined effective particle charges Z*PBC.
Abstract:
The main goal of this thesis is to understand and link together some of the early works by Michel Rumin and Pierre Julg. The work is centered around the so-called Rumin complex, which is a construction in subRiemannian geometry. A Carnot manifold is a manifold endowed with a horizontal distribution. If, in addition, a metric is given, one gets a subRiemannian manifold. Such data arise in different contexts, such as the formulation of the second principle of thermodynamics; optimal control; the propagation of singularities for sums of squares of vector fields; real hypersurfaces in complex manifolds; ideal boundaries of rank-one symmetric spaces; the asymptotic geometry of nilpotent groups; and the modeling of human vision. Differential forms on a Carnot manifold have weights, which produces a filtered complex. In view of applications to nilpotent groups, Rumin has defined a substitute for the de Rham complex, adapted to this filtration. The presence of a filtered complex also suggests the use of the formal machinery of spectral sequences in the study of cohomology. The goal was indeed to understand the link between Rumin's operator and the differentials which appear in the various spectral sequences we have worked with: the weight spectral sequence; a special spectral sequence introduced by Julg and called by him Forman's spectral sequence; and Forman's spectral sequence (which turns out to be unrelated to the previous one). We will see that in general Rumin's operator depends on choices; however, in some special cases it does not, because it has an alternative interpretation as a differential in a natural spectral sequence. After defining Carnot groups and analysing their main properties, we will introduce the concept of weights of forms, which produces a splitting of the exterior differential operator d. We shall see how the Rumin complex arises from this splitting and proceed to carry out the complete computations in some key examples.
From the third chapter onwards we will focus on Julg's paper, describing his new filtration and its relationship with the weight spectral sequence. We will study the connection between the spectral sequences and Rumin's complex in the n-dimensional Heisenberg group and the 7-dimensional quaternionic Heisenberg group and then generalize the result to Carnot groups using the weight filtration. Finally, we shall explain why Julg required the independence of choices in some special Rumin operators, introducing the Szego map and describing its main properties.
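The weight splitting mentioned above can be summarized schematically, in standard notation for Carnot groups rather than verbatim from the thesis: forms decompose by weight, and the exterior differential splits into pieces that raise the weight by a fixed amount,

```latex
\Omega^\bullet \;=\; \bigoplus_{w} \Omega^\bullet_{w},
\qquad
d \;=\; d_0 + d_1 + d_2 + \cdots,
\qquad
d_i \colon \Omega^{k}_{w} \longrightarrow \Omega^{k+1}_{w+i}.
```

Since $d^2 = 0$ forces $d_0^2 = 0$, one can form $E_0 = \ker d_0 / \operatorname{im} d_0$; Rumin's operator is the differential induced on $E_0$, while $d_0$ itself is algebraic (weight-preserving, of order zero).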
Abstract:
Detection, localization and tracking of non-collaborative objects moving inside an area is of great interest to many surveillance applications. An ultra-wideband (UWB) multistatic radar is a good infrastructure for such anti-intruder systems, due to the high range resolution provided by UWB impulse radio and the spatial diversity achieved with a multistatic configuration. Detection of targets, which are typically human beings, is a challenging task due to reflections from unwanted objects in the area, shadowing, antenna cross-talk, low transmit power, and the blind zones arising from intrinsic peculiarities of UWB multistatic radars. Hence, we propose more effective detection, localization, and clutter removal techniques for these systems. However, the majority of the thesis effort is devoted to the tracking phase, which is essential for improving localization accuracy, predicting the target position and filling in missed detections. Since UWB radars are not linear Gaussian systems, widely used tracking filters such as the Kalman filter are not expected to provide satisfactory performance. Thus, we propose Bayesian filtering as an appropriate candidate for UWB radars. In particular, we develop tracking algorithms based on particle filtering, the most common approximation of Bayesian filtering, for both single and multiple target scenarios. We also propose some effective detection and tracking algorithms based on image processing tools. We evaluate the performance of our proposed approaches by numerical simulations. Moreover, we provide experimental results from channel measurements for tracking a person walking in an indoor area in the presence of significant clutter. We discuss the existing practical issues and address them by proposing more robust algorithms.
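A bootstrap particle filter, the approximation of Bayesian filtering mentioned above, can be sketched for a one-dimensional track. The random-walk motion model and Gaussian range likelihood below are simplifying assumptions for illustration, not the measurement model of a UWB multistatic radar:

```python
import math
import random

def particle_filter_step(particles, weights, z, motion_std=1.0, meas_std=2.0,
                         rng=random):
    """One bootstrap-particle-filter step for a 1-D position track."""
    # 1) predict: propagate each particle through a random-walk motion model
    particles = [p + rng.gauss(0.0, motion_std) for p in particles]
    # 2) update: reweight by a Gaussian likelihood of the measurement z
    weights = [w * math.exp(-0.5 * ((z - p) / meas_std) ** 2)
               for p, w in zip(particles, weights)]
    s = sum(weights)
    if s == 0.0:  # all particles far off the measurement: reset to uniform
        weights = [1.0 / len(particles)] * len(particles)
    else:
        weights = [w / s for w in weights]
    # 3) resample: draw particles proportionally to their weights
    particles = rng.choices(particles, weights=weights, k=len(particles))
    weights = [1.0 / len(particles)] * len(particles)
    return particles, weights
```

Iterating this step on a stream of range measurements concentrates the particle cloud around the target position; the posterior mean serves as the position estimate.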
Abstract:
In this thesis I study differential equations of Feynman integrals. A Feynman integral depends on a dimension parameter D and, for integer dimension, can be represented as a projective integral; this is the so-called Feynman parameter representation. Depending on the dimension, such an integral may diverge. As a function of D one obtains a meromorphic function on all of C. A divergent integral can thus be replaced by a Laurent series, and its coefficients become the center of interest. This procedure is known as dimensional regularization. All terms of such a Laurent series of a Feynman integral are periods in the sense of Kontsevich and Zagier. I describe a new method for computing differential equations of Feynman integrals. Usually, the so-called integration-by-parts (IBP) identities are used for this purpose. The new method uses the theory of Picard-Fuchs differential equations. In the case of projective or quasi-projective varieties, the computation of such a differential equation is based on the so-called Griffiths-Dwork reduction. First, I describe the method for fixed integer dimension. After a suitable shift of the dimension one directly obtains a period and hence a Picard-Fuchs differential equation. This equation is inhomogeneous, since the integration domain has a boundary and therefore only represents a relative cycle. With the help of dimensional recurrence relations, which go back to Tarasov, the solution in the original dimension can then be determined in a second step. I also describe a method, based on the Griffiths-Dwork reduction, to compute the differential equation directly for arbitrary dimension. This method is generally applicable and avoids changes of dimension. 
The success of the method depends on the ability to solve large systems of linear equations. I give examples of integrals of graphs with two and three loops. Tarasov gives a basis of integrals determined by graphs with two loops and two external edges. I determine differential equations for the integrals of this basis. As the most important example, I compute the differential equation of the so-called sunrise graph with two loops in the general case of arbitrary masses. For special values of D this is an inhomogeneous Picard-Fuchs equation of a family of elliptic curves. The sunrise graph is particularly interesting because an analytic solution could only be found with this method, and because it is the simplest graph whose master integrals are not given by polylogarithms. I also give an example of a graph with three loops; here the Picard-Fuchs equation of a family of K3 surfaces appears.
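Schematically, and in standard notation rather than the thesis's own, dimensional regularization replaces a divergent Feynman integral by a Laurent series in D, and the method outlined above produces an inhomogeneous Picard-Fuchs equation for the integral as a function of a kinematic variable t:

```latex
I(D, t) \;=\; \sum_{k \ge -k_0} c_k(t)\,(D - 4)^k,
\qquad
p_r(t)\,\frac{d^r}{dt^r} I(D_0, t) + \cdots + p_1(t)\,\frac{d}{dt} I(D_0, t) + p_0(t)\, I(D_0, t) \;=\; q(t).
```

The inhomogeneity $q(t)$ is nonzero because the integration domain has a boundary and hence defines only a relative cycle.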
Abstract:
Cultural landscapes, as the expression of an intensive interaction between humans and their surrounding natural environment over many centuries, are a traditional research object of geography. Human-nature interactions change the natural environment as humans cultivate and modify landscapes. In viticulture, these interactions involve strong feedback loops: changes in the natural environment act back on the winegrowers living and working in the cultural landscape and influence their further actions, which in turn affects the development of the entire viticultural landscape. Cultural landscape is therefore conceptualized as a heterogeneous web of interacting social and natural elements, in whose development social and natural elements participate simultaneously and reciprocally. Fundamental to this work is the conviction that cultural landscapes continuously reorganize themselves through human-nature interactions and never reach a state of equilibrium, but develop and change constantly. Complexity theory provides the appropriate theoretical foundation for this. It focuses on the development and change of systems and investigates the functioning of systemic interdependencies in order to understand the overall behavior of non-linear dynamic systems. On the basis of complexity theory, an analytical framework is developed that makes it possible to capture the socio-economic and spatial-structural processes of change in cultural landscape development as a mutually influencing systemic context. The reconstruction of development phases, the analysis of spatial-structural patterns and actor constellations, and the identification of bifurcation points in the system's history are of central importance. 
By investigating both the physical-spatial and the socio-economic dimension of cultural landscape development in the viticulture of the Upper Middle Rhine Valley, this work aims to contribute to the geographical study of human-nature interactions at the interface of physical and human geography. The framework is applied to viticulture in the Upper Middle Rhine Valley. For many decades the wine-growing region has been subject to a sharp decline in the number of wineries and in vineyard area. These declining developments since 1950 did not proceed linearly, but differentiated the system into different development paths. Farm structures and the framework conditions of viticulture changed fundamentally, leaving visible traces in the cultural landscape. Reconstructing and analyzing these changes, and identifying the external and internal factors that were influential in the various phases of development, is intended to generate a deep understanding of the self-organized behavior of the system and to point out options for future interventions in the system's development.
Abstract:
Alpine snowbeds are habitats where the major limiting factors for plant growth are herbivory and a small time window for growth due to late snowmelt. Despite these limitations, snowbed vegetation usually forms a dense carpet of palatable plants due to favourable abiotic conditions for plant growth within the short growing season. These environmental characteristics make snowbeds particularly interesting to study the interplay of facilitation and competition. We hypothesised an interplay between resource competition and facilitation against herbivory. Further, we investigated whether these predicted neighbour effects were species-specific and/or dependent on ontogeny, and whether the balance of positive and negative plant–plant interactions shifted along a snowmelt gradient. We determined the neighbour effects by means of neighbour removal experiments along the snowmelt gradient, and linear mixed model analyses. The results showed that the effects of neighbour removal were weak but generally consistent among species and snowmelt dates, and depended on whether biomass production or survival was considered. Higher total biomass and increased fruiting in removal plots indicated that plants competed for nutrients, water, and light, thereby supporting the hypothesis of prevailing competition for resources in snowbeds. However, the presence of neighbours reduced herbivory and thereby also facilitated survival. For plant growth the facilitative effects against herbivores in snowbeds counterbalanced competition for resources, leading to a weak negative net effect. Overall the neighbour effects were not species-specific and did not change with snowmelt date. 
Our finding of counterbalancing effects of competition and facilitation within a plant community is of special theoretical value for species distribution models and can explain the success of models that give primary importance to abiotic factors and tend to overlook interrelations between biotic and abiotic effects on plants.
Abstract:
Negative biases in implicit self-evaluation are thought to be detrimental to subjective well-being and have been linked to various psychological disorders, including depression. An understanding of the neural processes underlying implicit self-evaluation in healthy subjects could provide a basis for the investigation of negative biases in depressed patients, the development of differential psychotherapeutic interventions, and the estimation of relapse risk in remitted patients. We thus studied the brain processes linked to implicit self-evaluation in 25 healthy subjects using event-related potential (ERP) recording during a self-relevant Implicit Association Test (sIAT). Consistent with a positive implicit self-evaluation in healthy subjects, they responded significantly faster to the congruent (self-positive mapping) than to the incongruent sIAT condition (self-negative mapping). Our main finding was a topographical ERP difference in a time window between 600 and 700 ms, whereas no significant differences between congruent and incongruent conditions were observed in earlier time windows. This suggests that biases in implicit self-evaluation are reflected only indirectly, in the additional recruitment of control processes needed to override the positive implicit self-evaluation of healthy subjects in the incongruent sIAT condition. Brain activations linked to these control processes can thus serve as an indirect measure for estimating biases in implicit self-evaluation. The sIAT paradigm, combined with ERP, could therefore permit the tracking of the neural processes underlying implicit self-evaluation in depressed patients during psychotherapy.
Abstract:
In this paper we propose methods for smooth hazard estimation of an interval-censored time variable. These methods allow one to model the transformed hazard in terms of either smooth (smoothing splines) or linear functions of time and other relevant time-varying predictor variables. We illustrate the use of this method on a dataset of hemophiliacs where the outcome, time to seroconversion for HIV, is interval-censored and left-truncated.
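For a concrete special case, taking the transformed (log) hazard to be linear in time, h(t) = exp(a + b t), the interval-censored, left-truncated log-likelihood has a closed form. The sketch below is a minimal illustration under that assumption, not the smoothing-spline estimator of the paper:

```python
import math

def cum_hazard(t, a, b):
    """Cumulative hazard for the log-linear hazard h(t) = exp(a + b*t)."""
    if abs(b) < 1e-12:
        return math.exp(a) * t
    return math.exp(a) / b * (math.exp(b * t) - 1.0)

def interval_censored_loglik(intervals, a, b, truncation=0.0):
    """Log-likelihood of interval-censored observations (L, R] under the
    log-linear hazard, conditioned on survival past `truncation` (left
    truncation).  R may be math.inf for right-censored observations."""
    S = lambda t: math.exp(-cum_hazard(t, a, b)) if t != math.inf else 0.0
    ll = 0.0
    for L, R in intervals:
        # an interval-censored event contributes P(L < T <= R | T > truncation)
        ll += math.log(S(L) - S(R)) - math.log(S(truncation))
    return ll
```

Maximizing this log-likelihood over (a, b) gives the parametric analogue of the paper's transformed-hazard fit; the spline version replaces a + b*t with a penalized smooth function.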
Abstract:
We combined two techniques, radiolabeled aerosol inhalation delivery and induced sputum, to examine in vivo the time course of particle uptake by airway macrophages in 10 healthy volunteers. On three separate visits, induced sputum was obtained 40, 100, and 160 min after inhalation of radiolabeled sulfur colloid (SC) aerosol (Tc-99m SC, 0.2-microm colloid size delivered in 6-microm droplets). On a fourth (control) visit with no SC inhalation, induced sputum was obtained and SC particles were incubated (37 degrees C) in vitro with sputum cells for 40, 100, and 160 min (matching the times associated with in vivo sampling). Total and differential cell counts were recorded for each sputum sample. Compared with 40 min (6 +/- 3%), uptake in vivo was significantly elevated at 100 min (31 +/- 5%) and 160 min (27 +/- 4%); both were strongly associated with the number of airway macrophages (R = 0.8 and 0.7, respectively); and the number and proportion of macrophages at 40 min were significantly (P < 0.05) elevated compared with control (1,248 +/- 256 versus 555 +/- 114 cells/mg; 76 +/- 6% versus 60 +/- 5%). Uptake in vitro increased linearly over time and was maximal at 160 min (40 min, 12 +/- 2%; 100 min, 16 +/- 4%; 160 min, 24 +/- 6%). These data suggest that airway surface macrophages in healthy subjects rapidly engulf insoluble particles. Further, macrophage recruitment and phagocytosis-modifying agents are in vivo factors that likely affect particle uptake and its time course.
Abstract:
BACKGROUND: The Anesthetic Conserving Device (AnaConDa) uncouples delivery of a volatile anesthetic (VA) from fresh gas flow (FGF) using a continuous infusion of liquid volatile into a modified heat-moisture exchanger capable of adsorbing VA during expiration and releasing adsorbed VA during inspiration. It combines the simplicity and responsiveness of high FGF with low agent expenditures. We performed in vitro characterization of the device before developing a population pharmacokinetic model for sevoflurane administration with the AnaConDa, and retrospectively testing its performance (internal validation). MATERIALS AND METHODS: Eighteen females and 20 males, aged 31-87, BMI 20-38, were included. The end-tidal concentrations were varied and recorded together with the VA infusion rates into the device, ventilation and demographic data. The concentration-time course of sevoflurane was described using linear differential equations, and the most suitable structural model and typical parameter values were identified. The individual pharmacokinetic parameters were obtained and tested for covariate relationships. Prediction errors were calculated. RESULTS: In vitro studies assessed the contribution of the device to the pharmacokinetic model. In vivo, the sevoflurane concentration-time courses on the patient side of the AnaConDa were adequately described with a two-compartment model. The population median absolute prediction error was 27% (interquartile range 13-45%). CONCLUSION: The predictive performance of the two-compartment model was similar to that of models accepted for TCI administration of intravenous anesthetics, supporting open-loop administration of sevoflurane with the AnaConDa. Further studies will focus on prospective testing and external validation of the model implemented in a target-controlled infusion device.
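A two-compartment model of the kind used for the in vivo description can be sketched with a forward-Euler integration of the linear differential equations. The rate constants and volume below are arbitrary placeholders, not the population estimates of the study:

```python
def simulate_two_compartment(dose_rate, duration, dt=0.01,
                             V1=10.0, k10=0.1, k12=0.05, k21=0.03):
    """Forward-Euler simulation of a two-compartment mammillary model:
    continuous infusion dose_rate into the central compartment (volume V1),
    elimination rate k10, inter-compartment transfer rates k12 and k21.
    Returns the central-compartment concentration at each time step.
    All parameter values are illustrative placeholders."""
    A1 = A2 = 0.0  # drug amounts in the central and peripheral compartments
    conc = []
    for _ in range(int(duration / dt)):
        dA1 = dose_rate - (k10 + k12) * A1 + k21 * A2
        dA2 = k12 * A1 - k21 * A2
        A1 += dA1 * dt
        A2 += dA2 * dt
        conc.append(A1 / V1)
    return conc
```

Under a constant infusion the central concentration rises toward the steady state dose_rate / (k10 * V1); fitting the two rate constants and volumes to observed end-tidal concentration-time courses is what the population analysis does.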
Abstract:
Rationale: Focal onset epileptic seizures are due to abnormal interactions between distributed brain areas. By estimating the cross-correlation matrix of multi-site intra-cerebral EEG recordings (iEEG), one can quantify these interactions. To assess the topology of the underlying functional network, the binary connectivity matrix has to be derived from the cross-correlation matrix by use of a threshold. Classically, a unique threshold is used, which constrains the topology [1]. Our method aims to set the threshold in a data-driven way by separating genuine from random cross-correlation. We compare our approach to the fixed-threshold method and study the dynamics of the functional topology. Methods: We investigate the iEEG of patients suffering from focal onset seizures who underwent evaluation for the possibility of surgery. The equal-time cross-correlation matrices are evaluated using a sliding time window. We then compare three approaches for assessing the corresponding binary networks. For each time window: * Our parameter-free method derives from the cross-correlation strength matrix (CCS) [2]. It aims at disentangling genuine from random correlations (due to the finite length and varying frequency content of the signals); in practice, a threshold is evaluated for each pair of channels independently, in a data-driven way. * The fixed mean degree (FMD) approach uses a unique threshold on the whole connectivity matrix so as to ensure a user-defined mean degree. * The varying mean degree (VMD) approach uses the mean degree of the CCS network to set a unique threshold for the entire connectivity matrix. * Finally, the connectivity (c), the connectedness (given by k, the number of disconnected sub-networks), and the mean global and local efficiencies (Eg and El, respectively) are computed from the FMD, CCS and VMD networks and from their corresponding random and lattice networks. Results: Compared to FMD and VMD, CCS networks present: * topologies that are different in terms of c, k, Eg and El; 
* from the pre-ictal to the ictal and then post-ictal period, topological-feature time courses that are more stable within a period and more contrasted from one period to the next. For CCS, pre-ictal connectivity is low, increases to a high level during the seizure, then decreases at seizure offset. k shows a "U-curve" underlining the synchronization of all electrodes during the seizure. The Eg and El time courses fluctuate between the corresponding random- and lattice-network values in a reproducible manner. Conclusions: The definition of a data-driven threshold provides new insights into the topology of epileptic functional networks.
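The fixed-threshold construction that the CCS method is compared against can be sketched as follows: compute the equal-time (Pearson) cross-correlation between every pair of channels in the window and keep edges whose absolute correlation exceeds a single threshold. (In FMD the threshold is instead chosen to hit a user-defined mean degree, and in CCS it is set per channel pair; the helper names below are illustrative.)

```python
import math

def pearson(x, y):
    """Pearson correlation of two equal-length signals (assumed non-constant)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return 0.0 if sx == 0.0 or sy == 0.0 else cov / (sx * sy)

def binary_network(signals, threshold):
    """Equal-time cross-correlation matrix of the channels, thresholded to
    a binary adjacency matrix (single fixed-threshold variant)."""
    n = len(signals)
    adj = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if abs(pearson(signals[i], signals[j])) >= threshold:
                adj[i][j] = adj[j][i] = 1
    return adj

def mean_degree(adj):
    """Mean node degree of a binary adjacency matrix."""
    return sum(map(sum, adj)) / len(adj)
```

Graph measures such as the number of connected components (k) and global/local efficiency are then computed on the resulting adjacency matrix for each sliding window.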
Abstract:
BACKGROUND AND PURPOSE: An inverse relationship between onset-to-door time (ODT) and door-to-needle time (DNT) in stroke thrombolysis has been reported from various registries. We analyzed this relationship and other determinants of DNT in dedicated stroke centers. METHODS: Prospectively collected data of consecutive ischemic stroke patients from 10 centers who received IV thrombolysis within 4.5 hours from symptom onset were merged (n=7106). DNT was analyzed as a function of demographic and prehospital variables using regression analyses, and change over time was considered. RESULTS: In 6348 eligible patients with known treatment delays, median DNT was 42 minutes and kept decreasing steeply every year (P<0.001). A median DNT of 55 minutes was observed in patients with ODT ≤30 minutes, whereas it declined for patients presenting within the last 30 minutes of the 3-hour time window (median, 33 minutes) and of the 4.5-hour time window (20 minutes). For ODT within the first 30 minutes of the extended time window (181-210 minutes), DNT increased to 42 minutes. DNT was stable (40-45 minutes) for ODT of 30 to 150 minutes. We found a weak inverse overall correlation between ODT and DNT (R(2)=-0.12; P<0.001), but it was strong in patients treated between 3 and 4.5 hours (R(2)=-0.75; P<0.001). ODT was independently inversely associated with DNT (P<0.001) in regression analysis. Octogenarians and women tended to have longer DNT. CONCLUSIONS: DNT has been decreasing steeply over recent years in dedicated stroke centers; however, significant oscillations of in-hospital treatment delays occurred at both ends of the time window. This suggests that further improvements can be achieved, particularly in the elderly.
Abstract:
This work investigates the performance of cardiorespiratory analysis in detecting periodic breathing (PB) from chest wall recordings in mountaineers climbing to extreme altitude. The breathing patterns of 34 mountaineers were monitored unobtrusively by inductance plethysmography, ECG and pulse oximetry, using a portable recorder, during climbs at altitudes between 4497 and 7546 m on Mt. Muztagh Ata. The minute ventilation (VE) and heart rate (HR) signals were studied to identify visually scored PB, applying time-varying spectral, coherence and entropy analysis. In 411 climbing periods of 30-120 min duration, high values of the mean power (MP(VE)) and slope (MSlope(VE)) of the modulation frequency band of VE accurately identified PB, with areas under the ROC curve of 88% and 89%, respectively. Prolonged stay at altitude was associated with an increase in PB. During PB episodes, higher peak power of ventilatory (MP(VE)) and cardiac (MP(LF)(HR)) oscillations and higher cardiorespiratory coherence (MP(LF)(Coher)), but reduced ventilation entropy (SampEn(VE)), were observed. Therefore, the characterization of cardiorespiratory dynamics by the analysis of VE and HR signals accurately identifies PB and the effects of altitude acclimatization, providing promising tools for investigating the physiological effects of environmental exposures and diseases.
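The ventilation entropy SampEn(VE) refers to sample entropy: the negative log of the conditional probability that sequences matching for m points (within tolerance r) also match for m+1 points. A simplified pure-Python variant (slightly nonstandard in its template counting, and O(n²)) is sketched below:

```python
import math

def sample_entropy(series, m=2, r=0.2):
    """Simplified sample entropy SampEn(m, r).  Low values indicate a
    regular (e.g. periodic-breathing) signal; high values an irregular one."""
    def count_matches(mm):
        # count template pairs whose Chebyshev distance is within r
        templates = [series[i:i + mm] for i in range(len(series) - mm + 1)]
        count = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    count += 1
        return count
    B, A = count_matches(m), count_matches(m + 1)
    return -math.log(A / B) if A > 0 and B > 0 else float("inf")
```

A strictly periodic series yields a value near zero, consistent with the reduced SampEn(VE) observed during periodic-breathing episodes.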
Abstract:
Linear- and unimodal-based inference models for mean summer temperatures (partial least squares, weighted averaging, and weighted averaging partial least squares models) were applied to a high-resolution pollen and cladoceran stratigraphy from Gerzensee, Switzerland. The time-window of investigation included the Allerød, the Younger Dryas, and the Preboreal. Characteristic major and minor oscillations in the oxygen-isotope stratigraphy, such as the Gerzensee oscillation, the onset and end of the Younger Dryas stadial, and the Preboreal oscillation, were identified by isotope analysis of bulk-sediment carbonates of the same core and were used as independent indicators for hemispheric or global scale climatic change. In general, the pollen-inferred mean summer temperature reconstruction using all three inference models follows the oxygen-isotope curve more closely than the cladoceran curve. The cladoceran-inferred reconstruction suggests generally warmer summers than the pollen-based reconstructions, which may be an effect of terrestrial vegetation not being in equilibrium with climate due to migrational lags during the Late Glacial and early Holocene. Allerød summer temperatures range between 11 and 12°C based on pollen, whereas the cladoceran-inferred temperatures lie between 11 and 13°C. Pollen and cladocera-inferred reconstructions both suggest a drop to 9–10°C at the beginning of the Younger Dryas. Although the Allerød–Younger Dryas transition lasted 150–160 years in the oxygen-isotope stratigraphy, the pollen-inferred cooling took 180–190 years and the cladoceran-inferred cooling lasted 250–260 years. The pollen-inferred summer temperature rise to 11.5–12°C at the transition from the Younger Dryas to the Preboreal preceded the oxygen-isotope signal by several decades, whereas the cladoceran-inferred warming lagged. 
Major discrepancies between the pollen- and cladoceran-inference models are observed for the Preboreal, where the cladoceran-inference model suggests mean summer temperatures of up to 14–15°C. Both pollen- and cladoceran-inferred reconstructions suggest a cooling that may be related to the Gerzensee oscillation, but there is no evidence for a cooling synchronous with the Preboreal oscillation as recorded in the oxygen-isotope record. For the Gerzensee oscillation the inferred cooling was ca. 1 and 0.5°C based on pollen and cladocera, respectively, which lies well within the inherent prediction errors of the inference models.
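The weighted-averaging (WA) inference underlying these reconstructions can be sketched in two steps: estimate each taxon's temperature optimum as an abundance-weighted mean over the training set, then reconstruct a fossil sample's temperature as the abundance-weighted mean of those optima. The deshrinking step of a full WA model is omitted here, and the data shapes are illustrative:

```python
def wa_optima(training_abundances, training_temps):
    """Weighted-averaging optimum of each taxon: the abundance-weighted
    mean of the environmental variable over the training-set samples.
    training_abundances[i][k] is the abundance of taxon k at site i."""
    n_taxa = len(training_abundances[0])
    optima = []
    for k in range(n_taxa):
        num = sum(site[k] * t for site, t in zip(training_abundances, training_temps))
        den = sum(site[k] for site in training_abundances)
        optima.append(num / den)
    return optima

def wa_reconstruct(fossil_abundances, optima):
    """Reconstructed value for one fossil sample: the abundance-weighted
    mean of the taxon optima."""
    num = sum(y * u for y, u in zip(fossil_abundances, optima))
    den = sum(fossil_abundances)
    return num / den
```

Applied down-core, this yields an inferred temperature per stratigraphic sample, which is what the pollen- and cladoceran-based curves above compare against the oxygen-isotope record.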