951 results for Upper bound estimate
A Phase Space Box-counting based Method for Arrhythmia Prediction from Electrocardiogram Time Series
Abstract:
Arrhythmia is a class of cardiovascular disease responsible for a large number of deaths and posing a potentially irremediable danger. It is a life-threatening condition originating from the disorganized propagation of electrical signals in the heart, resulting in desynchronization among its chambers. Fundamentally, synchronization means that the phase relationship of electrical activity between the chambers remains coherent, maintaining a constant phase difference over time. If desynchronization occurs due to arrhythmia, the coherent phase relationship breaks down, resulting in a chaotic rhythm that affects the heart's regular pumping mechanism. This phenomenon was explored using phase space reconstruction, a standard technique for analyzing time series generated by nonlinear dynamical systems. In this project a novel index is presented for predicting the onset of ventricular arrhythmias. Continuously captured long-term ECG recordings were analyzed up to the onset of arrhythmia by the phase space reconstruction method, yielding 2-dimensional images that were then analyzed by the box-counting method. The method was tested on ECG data of three kinds, normal rhythm (NR), ventricular tachycardia (VT) and ventricular fibrillation (VF), extracted from the PhysioNet ECG database. Statistical measures, namely the mean (μ), standard deviation (σ) and coefficient of variation (σ/μ) of the box counts in the phase space diagrams, are derived for a sliding window of 10 beats of the ECG signal. From these statistical analyses, a threshold was derived as an upper bound on the coefficient of variation (CV) of the box counts of ECG phase portraits, capable of reliably predicting the impending arrhythmia well before its actual occurrence. As future work, it is planned to validate this prediction tool on a wider population of patients affected by different kinds of arrhythmia, such as atrial fibrillation and bundle branch block, and to set different thresholds for them, in order to confirm its clinical applicability.
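As an illustration of the kind of computation involved, the sketch below derives the coefficient of variation of box counts over a sliding 10-beat window. It is a minimal reconstruction, not the thesis's actual pipeline: the simple 2-D delay embedding, the grid size, the embedding delay, and all function names are illustrative assumptions; only the window of 10 beats and the σ/μ statistic come from the abstract.

```python
import numpy as np

def box_count(x, y, grid=64):
    # Count occupied cells when the 2-D phase portrait is overlaid
    # with a grid x grid partition of its bounding box.
    hist, _, _ = np.histogram2d(x, y, bins=grid)
    return np.count_nonzero(hist)

def cv_of_box_counts(ecg, beat_bounds, delay=8, window=10):
    # Coefficient of variation (sigma/mu) of per-beat box counts over
    # a sliding window of `window` beats (10 in the thesis).
    counts = []
    for start, end in beat_bounds:
        seg = np.asarray(ecg[start:end], dtype=float)
        # 2-D delay embedding of the beat: points (s(t), s(t + delay)).
        x, y = seg[:-delay], seg[delay:]
        counts.append(box_count(x, y))
    counts = np.asarray(counts, dtype=float)
    cvs = []
    for i in range(len(counts) - window + 1):
        w = counts[i:i + window]
        cvs.append(w.std() / w.mean())
    return np.asarray(cvs)

# A prediction alarm would then be raised once the CV exceeds the
# derived threshold, e.g. np.any(cv_of_box_counts(...) > cv_threshold).
```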
Abstract:
We present new algorithms to approximate the discrete volume of a polyhedral geometry using boxes defined by the US standard SAE J1100. This problem is NP-hard and has its main application in the car design process. The algorithms produce maximum weighted independent sets on a so-called conflict graph for a discretisation of the geometry. We present a framework to eliminate a large portion of the vertices of a graph without affecting the quality of the optimal solution. Using this framework we are also able to define the conflict graph without the use of a discretisation. To solve the maximum weighted independent set problem we designed an enumeration scheme that uses the restrictions of the SAE J1100 standard for an efficient upper bound computation. We evaluate the packing algorithms according to their solution quality compared to manually derived results, and we compare our enumeration scheme to several other exact algorithms in terms of runtime. Grid-based packings tend either to be loose or to have intersections between boxes. We therefore present an algorithm which can compute box packings with arbitrary placements and fixed orientations. In this algorithm we make use of approximate Minkowski sums, computed by uniting many axis-oriented equal boxes. We developed an algorithm which computes the union of equal axis-oriented boxes efficiently and maintains the Minkowski sums throughout the packing process. Finally, we extend these algorithms to pack arbitrary objects in fixed orientations.
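The enumeration scheme itself relies on the SAE J1100 restrictions, but the general pattern of enumerating independent sets while pruning by an upper bound can be sketched as follows. This toy version is an illustrative assumption: it uses a plain residual-weight bound in place of the standard-specific one described in the abstract, and all identifiers are hypothetical.

```python
def max_weight_independent_set(vertices, weights, adjacent):
    # Branch-and-bound enumeration of a maximum weighted independent set.
    # adjacent[v]: set of neighbours of v in the conflict graph.
    # weights[v]: weight of vertex v (e.g. the volume of its box).
    order = sorted(vertices, key=lambda v: -weights[v])
    best = [0.0, frozenset()]  # best value and set found so far

    def recurse(i, chosen, blocked, value):
        # Upper bound: current value plus all still-selectable weight.
        # (The thesis derives a tighter bound from the SAE J1100 rules.)
        bound = value + sum(weights[v] for v in order[i:] if v not in blocked)
        if bound <= best[0]:
            return  # prune: this branch cannot beat the incumbent
        if i == len(order):
            best[0], best[1] = value, frozenset(chosen)
            return
        v = order[i]
        if v not in blocked:
            # Branch 1: take v, which blocks all of its neighbours.
            recurse(i + 1, chosen | {v}, blocked | adjacent[v],
                    value + weights[v])
        # Branch 2: skip v.
        recurse(i + 1, chosen, blocked, value)

    recurse(0, frozenset(), frozenset(), 0.0)
    return best[1], best[0]
```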
Abstract:
The subject of the presented thesis is the accurate measurement of time dilation, aiming at a quantitative test of special relativity. By means of laser spectroscopy, the relativistic Doppler shifts of a clock transition in the metastable triplet spectrum of ^7Li^+ are measured simultaneously with and against the direction of motion of the ions. By employing saturation or optical double resonance spectroscopy, the Doppler broadening caused by the ions' velocity distribution is eliminated. From these shifts both the time dilation and the ion velocity can be extracted with high accuracy, allowing for a test of the predictions of special relativity. A diode laser and a frequency-doubled titanium-sapphire laser were set up for antiparallel and parallel excitation of the ions, respectively. To achieve the robust control of the laser frequencies required for the beam times, a redundant system of frequency standards consisting of a rubidium spectrometer, an iodine spectrometer, and a frequency comb was developed. At the experimental section of the ESR, an automated laser beam guiding system for exact control of polarisation, beam profile, and overlap with the ion beam, as well as a fluorescence detection system, were set up. During the first experiments, the production, acceleration and lifetime of the metastable ions at the GSI heavy ion facility were investigated for the first time. The characterisation of the ion beam made it possible for the first time to measure its velocity directly via the Doppler effect, which resulted in a new, improved calibration of the electron cooler. In the following step the first sub-Doppler spectroscopy signals from an ion beam at 33.8 % of the speed of light could be recorded. The unprecedented accuracy of these experiments made it possible to derive a new upper bound for possible higher-order deviations from special relativity. Moreover, future measurements with the experimental setup developed in this thesis have the potential to improve the sensitivity to low-order deviations by at least one order of magnitude compared to previous experiments, and will thus further contribute to the test of the standard model.
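The principle behind such two-beam measurements (standard for Ives-Stilwell-type experiments; the abstract itself does not spell out the formulas, so the notation here is an assumption) is that the parallel and antiparallel laser frequencies resonant with a transition of rest frequency \(\nu_0\) satisfy

```latex
\nu_p = \nu_0\,\gamma\,(1+\beta), \qquad
\nu_a = \nu_0\,\gamma\,(1-\beta)
\quad\Longrightarrow\quad
\frac{\nu_p\,\nu_a}{\nu_0^{2}} = \gamma^{2}\,(1-\beta^{2}) = 1 ,
```

so any measured deviation of \(\nu_p \nu_a / \nu_0^2\) from unity bounds departures from special-relativistic time dilation, independently of the exact ion velocity \(\beta\).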
Abstract:
This dissertation studies the geometric static problem of under-constrained cable-driven parallel robots (CDPRs) supported by n cables, with n ≤ 6. The task consists of determining the overall robot configuration when a set of n variables is assigned. When variables relating to the platform posture are assigned, an inverse geometric static problem (IGP) must be solved; whereas, when cable lengths are given, a direct geometric static problem (DGP) must be considered. Both problems are challenging, as the robot continues to preserve some degrees of freedom even after n variables are assigned, with the final configuration determined by the applied forces. Hence, kinematics and statics are coupled and must be resolved simultaneously. In this dissertation, a general methodology is presented for modelling the aforementioned scenario with a set of algebraic equations. An elimination procedure is provided, aimed at solving the governing equations analytically and obtaining a least-degree univariate polynomial in the corresponding ideal for any value of n. Although an analytical procedure based on elimination is important from a mathematical point of view, providing an upper bound on the number of solutions in the complex field, it is not practical to compute these solutions as it would be very time-consuming. Thus, for the efficient computation of the solution set, a numerical procedure based on homotopy continuation is implemented. A continuation algorithm is also applied to find a set of robot parameters with the maximum number of real assembly modes for a given DGP. Finally, the end-effector pose depends on the applied load and may change due to external disturbances. An investigation into equilibrium stability is therefore performed.
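As a sketch of the numerical approach (standard homotopy continuation, stated here in generic form rather than in the dissertation's specific formulation), the governing polynomial system \(F(x)=0\) is connected to a start system \(G(x)=0\) with known roots through a homotopy

```latex
H(x,t) \;=\; (1-t)\,\gamma\,G(x) \;+\; t\,F(x), \qquad t \in [0,1],
```

where \(\gamma\) is a generic complex constant (the "gamma trick" that avoids singular paths with probability one). The solution paths \(x(t)\) defined by \(H(x(t),t)=0\) are tracked from \(t=0\) to \(t=1\) by predictor-corrector steps, and the number of paths to track is bounded above by the root count that the elimination procedure supplies.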
Abstract:
Gas separation membranes of high CO2 permeability and selectivity have great potential in both natural gas sweetening and carbon dioxide capture. Many modified PIM membranes exhibit permselectivities above the Robeson upper bound. The major problem to be solved before these polymers can be commercialized is their aging over time. In highly glassy polymeric membranes such as PIM-1 and its modifications, solubility selectivity contributes more to permselectivity than diffusivity selectivity does. In this thesis work, therefore, the pure- and mixed-gas sorption behavior of carbon dioxide and methane in three PIM-based membranes (PIM-1, TZPIM-1 and AO-PIM-1) and in a polynonene membrane is rigorously studied. Sorption experiments were performed at different temperatures and molar fractions. The measured sorption isotherms show that solubility decreases with increasing temperature for both gases in all polymers. Solubility also decreases in the mixed-gas experiments, owing to the presence of the other gas in the system, a competitive sorption effect. The variation of solubility is more visible in methane sorption than in carbon dioxide sorption, which makes the mixed-gas solubility selectivity higher than the pure-gas solubility selectivity. Modeling of the system using the NELF and dual-mode sorption models estimates the experimental results correctly. Sorption measurements in heat-treated and untreated membranes show that the sorption isotherms do not vary with the application of heat treatment for either carbon dioxide or methane. However, the diffusivity coefficients and permeabilities of the pure gases decrease as a result of heat treatment, and both decrease further with increasing heat-treatment temperature. Diffusivity coefficients calculated from transient sorption experiments and from steady-state permeability experiments are also compared in this thesis work; the results reveal that the transient diffusivity coefficient is higher than the steady-state one.
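For reference, the dual-mode sorption model mentioned above (quoted here in its standard textbook form as background, not from the thesis) expresses the concentration \(C\) of a penetrant in a glassy polymer at pressure \(p\) as

```latex
C \;=\; k_D\,p \;+\; \frac{C'_H\,b\,p}{1 + b\,p},
```

where \(k_D\) is the Henry's-law dissolution constant, \(C'_H\) the Langmuir capacity of the unrelaxed free volume, and \(b\) the Langmuir affinity constant. In the usual mixed-gas extension the Langmuir denominator becomes \(1 + \sum_j b_j p_j\), so each penetrant competes for the same unrelaxed free volume, which is the competitive sorption effect observed in the experiments.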
Abstract:
Sovereign ratings have only recently regained attention in the academic debate. This is somewhat surprising given that their influence is well known and that rating decisions have often been criticized in the past (for example during the Asian crisis in the 1990s). Sovereign ratings do not only assess the creditworthiness of governments: they are also included in the calculation of ratings for sub-sovereign issuers, whose ratings are usually restricted to the upper bound set by the sovereign rating (the sovereign ceiling). Earlier studies have also shown that the downgrade of a sovereign often leads to contagion effects on neighboring countries. This study focuses first on misleading incentives in the rating industry, before chapter three summarizes the literature on the influence and determinants of sovereign ratings. The fourth chapter explores empirically how ratings respond to changes in sovereign debt across specific country groups. The fifth part focuses on individual rating decisions of four selected rating agencies and investigates whether the timing of decisions gives reason to suspect herding behavior. The final chapter presents a reform proposal for the future regulation of the rating industry in light of the aforementioned flaws.
Abstract:
Random access (RA) protocols are normally used in satellite networks for initial terminal access and are particularly effective since no coordination is required. Contention resolution diversity slotted Aloha (CRDSA), irregular repetition slotted Aloha (IRSA) and coded slotted Aloha (CSA) have been shown to be more efficient than classic RA schemes such as slotted Aloha, and can also be exploited when short-packet transmissions take place over a shared medium. In particular, they rely on burst repetition and on successive interference cancellation (SIC) applied at the receiver. The SIC process can be well described using a bipartite graph representation, exploiting tools developed for analyzing iterative decoding. The scope of my Master's Thesis has been to describe the performance of such RA protocols when Rayleigh fading is taken into account. In this context, a packet can be decoded correctly even in the presence of collisions, and when SIC is applied this may result in multi-packet reception. The SIC procedure under Rayleigh fading has been analyzed analytically in the asymptotic case (infinite frame length), supporting the analysis of both throughput and packet loss rate, and an upper bound on the achievable performance has been obtained analytically. It can be shown that under particular channel conditions the throughput of the system can be greater than one packet per slot, which is the theoretical limit for the collision channel case.
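On the plain collision channel (no fading or capture), the asymptotic SIC analysis reduces to a density-evolution recursion on the bipartite graph. The sketch below illustrates that baseline recursion for IRSA with a fixed repetition distribution, as background for the fading analysis described above; the particular degree distribution and load value are illustrative assumptions taken from the IRSA literature, not from the thesis.

```python
import numpy as np

def irsa_plr(Lambda, G, iters=1000):
    # Asymptotic density evolution for IRSA on the collision channel.
    # Lambda: dict {degree: probability} of burst repetitions per user.
    # G: channel load in packets per slot.
    # Returns the asymptotic packet loss probability.
    degrees = np.array(sorted(Lambda))
    probs = np.array([Lambda[d] for d in degrees])
    avg_deg = float(np.sum(degrees * probs))
    # Edge-perspective user-node degree distribution: lambda_d ~ d * Lambda_d.
    edge_probs = degrees * probs / avg_deg

    x = 1.0  # P(edge unresolved, seen from the user side)
    for _ in range(iters):
        # Slot side: a replica is revealed once all other replicas in its
        # slot are cancelled; slot degrees are Poisson(G * avg_deg).
        y = 1.0 - np.exp(-G * avg_deg * x)
        # User side: an edge stays unknown if all of the user's other
        # replicas are still unknown.
        x = float(np.sum(edge_probs * y ** (degrees - 1)))
    # A user is lost if none of its replicas is ever resolved.
    return float(np.sum(probs * y ** degrees))

# Example: the distribution 0.5 x^2 + 0.28 x^3 + 0.22 x^8 from the IRSA
# literature, evaluated below its threshold load.
plr = irsa_plr({2: 0.5, 3: 0.28, 8: 0.22}, G=0.8)
```

Under Rayleigh fading the slot-side update is modified by the capture probabilities, which is what allows the throughput to exceed one packet per slot.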
Abstract:
In 1983, M. van den Berg made his Fundamental Gap Conjecture about the difference between the first two Dirichlet eigenvalues (the fundamental gap) of any convex domain in the Euclidean plane. Recently, progress has been made in the case where the domains are polygons and, in particular, triangles. We examine the conjecture for triangles in hyperbolic geometry, though we seek an upper bound for the fundamental gap rather than a lower bound.
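For context (standard background, not taken from the abstract itself): for a domain \(\Omega\) with Dirichlet eigenvalues \(\lambda_1 < \lambda_2 \le \dots\), the fundamental gap is \(\Gamma(\Omega) = \lambda_2(\Omega) - \lambda_1(\Omega)\), and van den Berg's conjecture, since proved for convex Euclidean domains by Andrews and Clutterbuck, states

```latex
\Gamma(\Omega) \;=\; \lambda_2(\Omega) - \lambda_1(\Omega) \;\ge\; \frac{3\pi^2}{D^2},
```

where \(D\) is the diameter of \(\Omega\). The hyperbolic triangle problem studied here asks instead for an upper bound on \(\Gamma\) in terms of the triangle's geometry.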
Abstract:
Steel tubular cast-in-place pilings are used throughout the country for many different project types. These piles are closed-end pipes of varying wall thickness and outer diameter that are driven to depth, after which the core is filled with concrete. They are typically used for smaller bridges or secondary structures. The piling is mostly designed by a resistance-based method, which is a function of the soil properties through which the pile is driven; however, the structural capacity of the member is considered the upper bound on its loading. This structural capacity is given by AASHTO LRFD (2010) through two methods, based on a composite or a non-composite section. Many state agencies and corporations use the non-composite equation because it requires much less computation and is known to be conservative. However, following current trends, more and more structural elements are being investigated to better understand the mechanics of the members, which could lead to more efficient and safer designs. In this project, a set of these pilings is investigated. How the cross section reacts to several different loading conditions, along with a more detailed observation of the material properties, is considered as part of this research. The evaluation consisted of testing stub sections of pile with varying sizes (10-¾”, 12-¾”), wall thicknesses (0.375”, 0.5”), and testing methods (whole compression, composite compression, push-through, core sampling). These stub sections were chosen as they represent a bracing length similar to that provided by many different soils. In addition, a finite element model was developed using ANSYS to predict the strains from the testing of the pile cross sections. This model was able to simulate the strains for most of the loading conditions and sizes that were tested. The bond between the steel shell and the concrete core, along with the concrete strength through the depth of the cross section, were among the material properties of these sections that were investigated.
Abstract:
Intermediaries permeate modern economic exchange. Most classical models of intermediated exchange are driven by information asymmetry and inventory management. These two factors are of reduced significance in modern economies, which makes it necessary to develop models that correspond more closely to modern financial marketplaces. The goal of this dissertation is to propose and examine such models in a game-theoretical context. The proposed models are driven by asymmetries in the goals of different market participants; hedging pressure, one of the most critical aspects of the behavior of commercial entities, plays a crucial role. The first market model shows that no equilibrium solution can exist in a market consisting of a commercial buyer, a commercial seller and a non-commercial intermediary. This indicates a clear economic need for non-commercial trading intermediaries: a direct trade from seller to buyer does not result in an equilibrium solution. The second market model has two distinct intermediaries between buyer and seller: a spread trader/market maker and a risk-neutral intermediary. In this model a unique, natural equilibrium solution is identified in which the supply-demand surplus is traded by the risk-neutral intermediary, whilst the market maker trades the remainder from seller to buyer. Since the market maker's payoff for trading at the identified equilibrium price is zero, this second model does not provide any motivation for the market maker to enter the market. The third market model introduces an explicit transaction fee that enables the market maker to secure a positive payoff. Under certain assumptions on this transaction fee the equilibrium solution of the previous model applies and now also provides a financial motivation for the market maker to enter the market. If the transaction fee violates an upper bound that depends on supply, demand and the risk aversion of buyer and seller, the market will be in disequilibrium.
Abstract:
A range of societal issues has been caused by fossil fuel consumption in the transportation sector in the United States (U.S.), including health-related air pollution, climate change, dependence on imported oil, and other oil-related national security concerns. Biofuels produced from various types of lignocellulosic biomass, such as wood, forest residues, and agricultural residues, have the potential to replace a substantial portion of total fossil fuel consumption. This research focuses on locating biofuel facilities and designing the biofuel supply chain to minimize the overall cost. For this purpose an integrated methodology was proposed, combining GIS technology with simulation and optimization modeling. As a precursor to the simulation and optimization modeling, the GIS-based methodology was used to preselect potential facility locations for biofuel production from forest biomass. Candidate locations were selected based on a set of evaluation criteria, including: county boundaries, a railroad transportation network, a state/federal road transportation network, water body (rivers, lakes, etc.) dispersion, city and village dispersion, a population census, biomass production, and no co-location with co-fired power plants. The resulting candidate sites served as inputs to the simulation and optimization models, which were built around key supply activities including biomass harvesting/forwarding, transportation, and storage. The built onsite storage served the spring breakup period, when road restrictions were in place and truck transportation on certain roads was limited. Both models were evaluated using multiple performance indicators, including cost (consisting of the delivered feedstock cost and inventory holding cost), energy consumption, and GHG emissions. The impacts of energy consumption and GHG emissions were expressed in monetary terms to remain consistent with cost. Compared with the optimization model, the simulation model represents a more dynamic look at a 20-year operation by considering the impacts associated with building inventory at the biorefinery to address the limited availability of biomass feedstock during the spring breakup period. The number of trucks required per day was estimated, and the inventory level was tracked all year round. Through the exchange of information across the different procedures (harvesting, transportation, and biomass feedstock processing), a smooth flow of biomass from harvesting areas to a biofuel facility was implemented. The optimization model was developed to address issues related to locating multiple biofuel facilities simultaneously; the size of each potential biofuel facility is set up with an upper bound of 50 MGY and a lower bound of 30 MGY. The optimization model is a static, Mathematical Programming Language (MPL)-based application which allows for sensitivity analysis by changing inputs to evaluate different scenarios. It was found that annual biofuel demand and biomass availability impact the optimal biofuel facility locations and sizes.
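A minimal sketch of such a facility-location formulation with the stated 30-50 MGY size bounds is given below. It is illustrative only: the thesis uses MPL rather than Python/PuLP, and the set names, cost figures, and data are hypothetical placeholders.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

sites = ["s1", "s2", "s3"]          # preselected candidate locations (GIS step)
zones = ["z1", "z2"]                # biomass supply zones
supply = {"z1": 40.0, "z2": 35.0}   # feedstock availability, MGY-equivalents
demand = 60.0                       # annual biofuel demand, MGY
fixed_cost = {"s1": 90.0, "s2": 80.0, "s3": 100.0}        # annualized, M$
haul_cost = {(z, s): 1.0 for z in zones for s in sites}   # $/unit, placeholder

m = LpProblem("biofuel_facility_location", LpMinimize)
open_ = {s: LpVariable(f"open_{s}", cat=LpBinary) for s in sites}
size = {s: LpVariable(f"size_{s}", lowBound=0) for s in sites}
flow = {(z, s): LpVariable(f"flow_{z}_{s}", lowBound=0)
        for z in zones for s in sites}

# Facility size must lie in [30, 50] MGY when open, and be 0 otherwise.
for s in sites:
    m += size[s] <= 50 * open_[s]
    m += size[s] >= 30 * open_[s]
    m += lpSum(flow[z, s] for z in zones) == size[s]  # feedstock balance
for z in zones:
    m += lpSum(flow[z, s] for s in sites) <= supply[z]  # zone availability
m += lpSum(size[s] for s in sites) >= demand            # meet annual demand

# Objective: fixed facility costs plus transportation costs.
m += lpSum(fixed_cost[s] * open_[s] for s in sites) + \
     lpSum(haul_cost[z, s] * flow[z, s] for z in zones for s in sites)

m.solve()
```

Sensitivity analysis of the kind described in the abstract then amounts to re-solving this model with perturbed demand or supply inputs.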
Abstract:
We consider the 2d XY Model with topological lattice actions, which are invariant against small deformations of the field configuration. These actions constrain the angle between neighbouring spins by an upper bound, or they explicitly suppress vortices (and anti-vortices). Although topological actions do not have a classical limit, they still lead to the universal behaviour of the Berezinskii-Kosterlitz-Thouless (BKT) phase transition — at least up to moderate vortex suppression. In the massive phase, the analytically known Step Scaling Function (SSF) is reproduced in numerical simulations. However, deviations from the expected universal behaviour of the lattice artifacts are observed. In the massless phase, the BKT value of the critical exponent ηc is confirmed. Hence, even though for some topological actions vortices cost zero energy, they still drive the standard BKT transition. In addition we identify a vortex-free transition point, which deviates from the BKT behaviour.
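The simplest such topological action (the angle-constraint version described above; the notation here is ours, written as background rather than quoted from the paper) assigns zero action to any configuration whose nearest-neighbour relative angles stay below a bound \(\delta\), and infinite action otherwise:

```latex
S[\theta] \;=\;
\begin{cases}
0 & \text{if } |\,\theta_x - \theta_y\,| < \delta \ \ \text{for all neighbour pairs } \langle xy\rangle,\\
+\infty & \text{otherwise,}
\end{cases}
```

with relative angles taken in \((-\pi,\pi]\). All allowed configurations then carry equal weight, which is why such actions have no classical (small-coupling) limit.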
Abstract:
Storing and recalling spiking sequences is a general problem the brain needs to solve. It is, however, unclear what type of biologically plausible learning rule is suited to learn a wide class of spatiotemporal activity patterns in a robust way. Here we consider a recurrent network of stochastic spiking neurons composed of both visible and hidden neurons. We derive a generic learning rule that is matched to the neural dynamics by minimizing an upper bound on the Kullback–Leibler divergence from the target distribution to the model distribution. The derived learning rule is consistent with spike-timing dependent plasticity in that a presynaptic spike preceding a postsynaptic spike elicits potentiation while otherwise depression emerges. Furthermore, the learning rule for synapses that target visible neurons can be matched to the recently proposed voltage-triplet rule. The learning rule for synapses that target hidden neurons is modulated by a global factor, which shares properties with astrocytes and gives rise to testable predictions.
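The bound minimized is of the standard variational type (spelled out here as background; the notation is ours, not necessarily the paper's): for visible activity \(v\), hidden activity \(h\), target distribution \(p^*(v)\) and model \(p_\theta(v,h)\),

```latex
D_{\mathrm{KL}}\!\left(p^*(v)\,\middle\|\,p_\theta(v)\right)
\;\le\;
D_{\mathrm{KL}}\!\left(p^*(v)\,q(h\mid v)\,\middle\|\,p_\theta(v,h)\right)
```

for any auxiliary posterior \(q(h\mid v)\), since the difference between the two sides is the expected KL divergence from \(q(h\mid v)\) to \(p_\theta(h\mid v)\), which is non-negative. Minimizing the right-hand side with respect to the synaptic weights yields a tractable learning rule of the STDP-like form described above.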
Abstract:
We introduce a version of operational set theory, OST−, without a choice operation, which has a machinery for Δ0 separation based on truth functions and the separation operator, and a new kind of applicative set theory, so-called weak explicit set theory WEST, based on Gödel operations. We show that both theories and Kripke–Platek set theory KP with infinity are pairwise Π1-equivalent. We also show analogous assertions for subtheories with ∈-induction restricted in various ways and for supertheories extended by powerset, beta, limit and Mahlo operations. Whereas the upper bound is given by a refinement of inductive definition in KP, the lower bound is obtained by combining, in a specific way, realisability, (intuitionistic) forcing and negative interpretations. Thus, despite the interpretability between classical theories, we make "a detour via intuitionistic theories". The combined interpretation, seen as a model construction in the sense of Visser's miniature model theory, is a new method of construction for classical theories, and could be called the third kind of model construction ever used that is non-trivial on the level of the logical connectives, after generic extension à la Cohen and Krivine's classical realisability models.
Abstract:
In this paper we continue Feferman’s unfolding program initiated in (Feferman, vol. 6 of Lecture Notes in Logic, 1996) which uses the concept of the unfolding U(S) of a schematic system S in order to describe those operations, predicates and principles concerning them, which are implicit in the acceptance of S. The program has been carried through for a schematic system of non-finitist arithmetic NFA in Feferman and Strahm (Ann Pure Appl Log, 104(1–3):75–96, 2000) and for a system FA (with and without Bar rule) in Feferman and Strahm (Rev Symb Log, 3(4):665–689, 2010). The present contribution elucidates the concept of unfolding for a basic schematic system FEA of feasible arithmetic. Apart from the operational unfolding U0(FEA) of FEA, we study two full unfolding notions, namely the predicate unfolding U(FEA) and a more general truth unfolding UT(FEA) of FEA, the latter making use of a truth predicate added to the language of the operational unfolding. The main results obtained are that the provably convergent functions on binary words for all three unfolding systems are precisely those being computable in polynomial time. The upper bound computations make essential use of a specific theory of truth TPT over combinatory logic, which has recently been introduced in Eberhard and Strahm (Bull Symb Log, 18(3):474–475, 2012) and Eberhard (A feasible theory of truth over combinatory logic, 2014) and whose involved proof-theoretic analysis is due to Eberhard (A feasible theory of truth over combinatory logic, 2014). The results of this paper were first announced in (Eberhard and Strahm, Bull Symb Log 18(3):474–475, 2012).