916 results for Failure time analysis
Abstract:
Online paper web analysis relies on traversing scanners that criss-cross on top of a rapidly moving paper web. The sensors embedded in the scanners measure many important quality variables of paper, such as basis weight, caliper and porosity. Most of these quantities vary considerably, and the measurements are noisy at many different scales. The zigzagging nature of scanning makes it difficult to separate machine direction (MD) and cross direction (CD) variability from one another. To improve the 2D resolution of the quality variables above, the paper quality control team at the Department of Mathematics and Physics at LUT has implemented efficient Kalman filtering based methods that currently use 2D Fourier series. Fourier series are global and therefore resolve local spatial detail on the paper web rather poorly. The aim of this thesis is to study alternative wavelet based representations as candidates to replace the Fourier basis for a higher resolution spatial reconstruction of these quality variables. The accuracy of wavelet compressed 2D web fields will be compared with corresponding truncated Fourier series based fields.
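To make the comparison concrete, here is a minimal sketch (not the thesis code; the synthetic field, the "db4" wavelet and the 5% coefficient budget are all assumptions) that reconstructs a 2D field containing a localised feature from a hard-thresholded Fourier representation and from a hard-thresholded wavelet representation with the same coefficient budget, then prints the reconstruction error of each. It requires numpy and PyWavelets.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
n = 128
x, y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
field = np.sin(4 * np.pi * x) + 0.3 * np.sin(10 * np.pi * y)        # smooth MD/CD variation
field += np.exp(-((x - 0.7) ** 2 + (y - 0.3) ** 2) / 0.001)          # localised streak/defect
field += 0.05 * rng.standard_normal((n, n))                          # measurement noise
keep = 0.05                                                          # retain 5 % of coefficients

# Truncated Fourier representation: zero all but the largest-magnitude coefficients.
F = np.fft.fft2(field)
f_thresh = np.quantile(np.abs(F), 1 - keep)
field_fourier = np.real(np.fft.ifft2(np.where(np.abs(F) >= f_thresh, F, 0)))

# Wavelet representation with the same budget of retained coefficients.
coeffs = pywt.wavedec2(field, "db4", level=4)
arr, slices = pywt.coeffs_to_array(coeffs)
w_thresh = np.quantile(np.abs(arr), 1 - keep)
arr = np.where(np.abs(arr) >= w_thresh, arr, 0)
field_wavelet = pywt.waverec2(pywt.array_to_coeffs(arr, slices, output_format="wavedec2"), "db4")[:n, :n]

for name, rec in [("Fourier", field_fourier), ("wavelet", field_wavelet)]:
    print(f"{name:8s} RMSE: {np.sqrt(np.mean((rec - field) ** 2)):.4f}")
```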
Abstract:
BACKGROUND & AIMS: The prognostic value of the different causes of renal failure in cirrhosis is not well established. This study investigated the predictive value of the cause of renal failure in cirrhosis. METHODS: Five hundred sixty-two consecutive patients with cirrhosis and renal failure (defined as serum creatinine >1.5 mg/dL on 2 successive determinations within 48 hours) hospitalized over a 6-year period in a single institution were included in a prospective study. The cause of renal failure was classified into 4 groups: renal failure associated with bacterial infections, renal failure associated with volume depletion, hepatorenal syndrome (HRS), and parenchymal nephropathy. The primary end point was survival at 3 months. RESULTS: Four hundred sixty-three patients (82.4%) had renal failure that could be classified into 1 of the 4 groups. The most frequent was renal failure associated with infections (213 cases; 46%), followed by hypovolemia-associated renal failure (149; 32%), HRS (60; 13%), and parenchymal nephropathy (41; 9%). The remaining patients had a combination of causes or miscellaneous conditions. Prognosis differed markedly according to the cause of renal failure, the 3-month probability of survival being 73% for parenchymal nephropathy, 46% for hypovolemia-associated renal failure, 31% for renal failure associated with infections, and 15% for HRS (P < .0005). In a multivariate analysis adjusted for potentially confounding variables, cause of renal failure was independently associated with prognosis, together with MELD score, serum sodium, and hepatic encephalopathy at the time of diagnosis of renal failure. CONCLUSIONS: A simple classification of patients with cirrhosis according to the cause of renal failure is useful in the assessment of prognosis and may help in decision making in liver transplantation.
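For readers who want to reproduce this kind of cause-stratified analysis on their own data, the sketch below is purely illustrative (the survival times and censoring scheme are synthetic, not the study cohort): it estimates 3-month survival separately for each cause of renal failure with a Kaplan-Meier estimator, using numpy, pandas and lifelines.

```python
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(1)
# Hypothetical mean survival (days) per cause, chosen only for illustration.
causes = {"parenchymal nephropathy": 300, "hypovolemia": 120, "infection": 70, "HRS": 47}
rows = []
for cause, scale in causes.items():
    t = rng.exponential(scale, size=100)                  # synthetic survival times (days)
    observed = t < 180                                     # administrative censoring at 6 months
    rows.append(pd.DataFrame({"cause": cause, "time": np.minimum(t, 180), "event": observed}))
data = pd.concat(rows, ignore_index=True)

kmf = KaplanMeierFitter()
for cause, grp in data.groupby("cause"):
    kmf.fit(grp["time"], event_observed=grp["event"], label=cause)
    p90 = kmf.survival_function_at_times(90).iloc[0]       # 3-month survival probability
    print(f"{cause:25s} 3-month survival ≈ {p90:.2f}")
```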
Abstract:
BACKGROUND AND OBJECTIVES: Sudden cardiac death (SCD) is a major burden in modern medicine. Aldosterone antagonists (AAs) are reported to be effective in reducing mortality in patients with heart failure (HF) or after myocardial infarction (MI). Our study aimed to assess the efficacy of AAs on mortality, including SCD, hospital admission and several common adverse effects. METHODS: We searched Embase, PubMed, Web of Science, the Cochrane Library and clinicaltrial.gov for randomized controlled trials (RCTs) assigning AAs to patients with HF or post MI through May 2015. The comparator was standard medication, placebo, or both. Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were followed. Event rates were compared using a random effects model. Prospective RCTs of AAs with durations of at least 8 weeks were selected if they included at least one of the following outcomes: SCD, all-cause/cardiovascular mortality, all-cause/cardiovascular hospitalization and common side effects (hyperkalemia, renal function degradation and gynecomastia). RESULTS: Data from 19,333 patients enrolled in 25 trials were included. In patients with HF, this treatment significantly reduced the risk of SCD by 19% (RR 0.81; 95% CI, 0.67-0.98; p = 0.03), all-cause mortality by 19% (RR 0.81; 95% CI, 0.74-0.88, p < 0.00001) and cardiovascular death by 21% (RR 0.79; 95% CI, 0.70-0.89, p < 0.00001). In post-MI patients, the corresponding risk reductions were 20% (RR 0.80; 95% CI, 0.66-0.98; p = 0.03), 15% (RR 0.85; 95% CI, 0.76-0.95, p = 0.003) and 17% (RR 0.83; 95% CI, 0.74-0.94, p = 0.003), respectively. Across both subgroups combined, the relative risks decreased by 19% (RR 0.81; 95% CI, 0.71-0.92; p = 0.002) for SCD, 18% (RR 0.82; 95% CI, 0.77-0.88, p < 0.0001) for all-cause mortality and 20% (RR 0.80; 95% CI, 0.74-0.87, p < 0.0001) for cardiovascular mortality in patients treated with AAs. Hospitalizations were likewise significantly reduced, while common adverse effects were significantly increased. CONCLUSION: Aldosterone antagonists appear to be effective in reducing SCD and other mortality events, compared with placebo or standard medication, in patients with HF and/or after MI.
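The pooled relative risks quoted above come from a random-effects model; the following sketch (with made-up trial counts, not the meta-analysis data) shows the standard DerSimonian-Laird pooling of log relative risks that produces summaries of the form "RR 0.81; 95% CI, 0.67-0.98".

```python
import numpy as np

# Each row: events/total in the AA arm and events/total in the control arm of one hypothetical trial.
trials = np.array([
    [30, 500, 40, 500],
    [55, 800, 70, 790],
    [12, 300, 18, 310],
], dtype=float)

e1, n1, e0, n0 = trials.T
log_rr = np.log((e1 / n1) / (e0 / n0))
var = 1 / e1 - 1 / n1 + 1 / e0 - 1 / n0                 # approximate variance of log RR
w = 1 / var                                              # fixed-effect weights

# DerSimonian-Laird between-trial variance tau^2.
q = np.sum(w * (log_rr - np.sum(w * log_rr) / np.sum(w)) ** 2)
tau2 = max(0.0, (q - (len(trials) - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

w_re = 1 / (var + tau2)                                  # random-effects weights
pooled = np.sum(w_re * log_rr) / np.sum(w_re)
se = np.sqrt(1 / np.sum(w_re))
lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
print(f"pooled RR {np.exp(pooled):.2f} (95% CI {np.exp(lo):.2f}-{np.exp(hi):.2f})")
```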
Abstract:
This thesis develops a comprehensive and flexible statistical framework for the analysis and detection of space, time and space-time clusters of environmental point data. The developed clustering methods were applied to both simulated datasets and real-world environmental phenomena; however, only the cases of forest fires in the Canton of Ticino (Switzerland) and in Portugal are expounded in this document. Normally, environmental phenomena can be modelled as stochastic point processes where each event, e.g. the forest fire ignition point, is characterised by its spatial location and occurrence in time. Additionally, information such as burned area, ignition causes, land use, topographic, climatic and meteorological features, etc., can also be used to characterise the studied phenomenon. Thereby, space-time pattern characterisation represents a powerful tool to understand the distribution and behaviour of the events and their correlation with underlying processes, for instance socio-economic, environmental and meteorological factors. Consequently, we propose a methodology based on the adaptation and application of statistical and fractal point process measures for both global (e.g. the Morisita index, the box-counting fractal method, the multifractal formalism and Ripley's K-function) and local (e.g. scan statistics) analysis. Many measures describing the space-time distribution of environmental phenomena have been proposed in a wide variety of disciplines; nevertheless, most of these measures are of global character and do not consider the complex spatial constraints, high variability and multivariate nature of the events. Therefore, we propose a statistical framework that takes into account the complexities of the geographical space where the phenomena take place by introducing the Validity Domain concept and carrying out clustering analyses on data with differently constrained geographical spaces, hence assessing the relative degree of clustering of the real distribution. Moreover, exclusively for the forest fire case, this research proposes two new methodologies for defining and mapping the Wildland-Urban Interface (WUI), described as the interaction zone between burnable vegetation and anthropogenic infrastructures, and for predicting fire ignition susceptibility. In this regard, the main objective of this thesis was to carry out basic statistical/geospatial research with a strong applied component, to analyse and describe complex phenomena as well as to overcome unsolved methodological problems in the characterisation of space-time patterns, in particular of forest fire occurrences. Thus, this thesis provides a response to the increasing demand for environmental monitoring and management tools for the assessment of natural and anthropogenic hazards and risks, sustainable development, retrospective success analysis, etc. The major contributions of this work were presented at national and international conferences and published in 5 scientific journals. National and international collaborations were also established and successfully accomplished.
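As an example of the global clustering measures mentioned above, the sketch below computes a naive Ripley's K estimate (no edge correction, unit-square window, simulated parent-offspring clustered pattern; all of these are simplifying assumptions for illustration, not the thesis implementation, which additionally accounts for the validity domain) and compares it with the value expected under complete spatial randomness.

```python
import numpy as np

rng = np.random.default_rng(2)
parents = rng.uniform(0, 1, size=(20, 2))
points = np.concatenate([p + 0.02 * rng.standard_normal((15, 2)) for p in parents])
points = points[(points >= 0).all(axis=1) & (points <= 1).all(axis=1)]   # keep points in unit square
n, area = len(points), 1.0
lam = n / area                                           # intensity estimate

d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)     # pairwise distances
for r in np.linspace(0.01, 0.15, 15):
    k_hat = (np.sum(d <= r) - n) / (lam * n)             # naive K estimate, self-pairs excluded
    k_csr = np.pi * r ** 2                                # expected K under complete spatial randomness
    print(f"r={r:.3f}  K_hat={k_hat:.4f}  K_CSR={k_csr:.4f}")
```

Values of K_hat well above K_CSR at small r indicate clustering, as expected for the parent-offspring pattern simulated here.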
Abstract:
Raw measurement data does not always immediately convey useful information, but applying mathematical and statistical analysis tools to the data can improve the situation. Data analysis can offer benefits such as extracting meaningful insight from the dataset, basing critical decisions on the findings, and ruling out human bias through proper statistical treatment. In this thesis we analyze data from an industrial mineral processing plant with the aim of studying the possibility of forecasting the quality of the final product, given by one variable, with a model based on the other variables. For the study, tools such as Qlucore Omics Explorer (QOE) and sparse Bayesian (SB) regression are used. Linear regression is then used to build a model based on the subset of variables that have the most significant weights in the SB model. The results obtained from QOE show that the variable representing the desired final product does not correlate with the other variables. For SB and linear regression, the results show that both models built on 1-day averaged data seriously underestimate the variance of the true data, whereas the two models built on 1-month averaged data are reliable and able to explain a larger proportion of the variability in the available data, making them suitable for prediction purposes. However, it is concluded that no single model can fit the whole available dataset well; therefore, it is proposed as future work to build piecewise nonlinear regression models if the same dataset is used, or for the plant to provide another dataset, collected in a more systematic fashion than the present data, for further analysis.
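A hedged sketch of this two-stage workflow (random data, not the plant's measurements, and scikit-learn's ARDRegression standing in for the thesis' SB model): a sparse Bayesian regression ranks the predictors, and ordinary least squares is then fitted on the variables with the largest weights.

```python
import numpy as np
from sklearn.linear_model import ARDRegression, LinearRegression

rng = np.random.default_rng(3)
X = rng.standard_normal((200, 10))                       # ten hypothetical process variables
y = 2.0 * X[:, 1] - 1.5 * X[:, 4] + 0.5 * rng.standard_normal(200)   # quality variable

ard = ARDRegression().fit(X, y)                          # sparse Bayesian stand-in
top = np.argsort(np.abs(ard.coef_))[-3:]                 # keep the three largest-weight variables
print("selected variable indices:", sorted(top.tolist()))

ols = LinearRegression().fit(X[:, top], y)               # reduced linear model on selected variables
print("R^2 of the reduced linear model:", round(ols.score(X[:, top], y), 3))
```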
Abstract:
Identification of the order of an autoregressive moving average (ARMA) model by the usual graphical method is subjective. Hence, there is a need to develop a technique to identify the order without employing graphical investigation of the series autocorrelations. To avoid subjectivity, this thesis focuses on determining the order of the ARMA model using Reversible Jump Markov Chain Monte Carlo (RJMCMC). RJMCMC selects the model from a set of candidate models according to goodness of fit, the standard errors of the estimates, and the frequency of accepted proposals. Together with a deep analysis of the classical Box-Jenkins modeling methodology, the integration with MCMC algorithms is examined through parameter estimation and model fitting of ARMA models. This helps to verify how well the MCMC algorithms can treat ARMA models, by comparing the results with the graphical method. The MCMC approach is found to produce better results than the classical time series approach.
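This is not the thesis' RJMCMC sampler, but a compact illustration of the same goal of non-graphical order identification: scanning candidate (p, q) pairs and comparing an information criterion on a simulated ARMA(2,1) series with statsmodels.

```python
import numpy as np
from statsmodels.tsa.arima_process import ArmaProcess
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(4)
# Simulate an ARMA(2,1) series (AR coefficients 0.6, -0.2; MA coefficient 0.4).
series = ArmaProcess(ar=[1, -0.6, 0.2], ma=[1, 0.4]).generate_sample(400, distrvs=rng.standard_normal)

best = None
for p in range(4):
    for q in range(4):
        aic = ARIMA(series, order=(p, 0, q)).fit().aic   # fit each candidate and record its AIC
        if best is None or aic < best[0]:
            best = (aic, p, q)
print("selected (p, q):", best[1:], "AIC:", round(best[0], 1))
```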
Abstract:
Comprehensive two-dimensional gas chromatography coupled with time-of-flight mass spectrometry was used for the identification of forty doping agents. The improvement in specificity was remarkable, allowing the resolution of analytes that could not be separated by one-dimensional chromatographic systems. The limits of detection observed for the different classes of prohibited substances were clearly below the values required by the World Anti-Doping Agency. In addition, time-of-flight mass spectrometry provides a full spectrum for every analyte without interference from the matrix, resulting in improved selectivity. These results could support the implementation of an exhaustive monitoring approach for hundreds of doping agents in a single injection.
Abstract:
Because electricity cannot be stored, it must be produced at the same time that it is consumed; as a result, prices are determined on an hourly basis, which makes their analysis more challenging. Moreover, the seasonal fluctuations in demand and supply lead to a seasonal behavior of electricity spot prices. The purpose of this thesis is to identify and remove all causal effects from electricity spot prices and to be left with pure prices for modeling purposes. To achieve this, we use Qlucore Omics Explorer (QOE) for visualization and exploration of the data set, and a time series decomposition method to estimate and extract the deterministic components from the series. To obtain the target series, we use regression on the background variables (water reservoir and temperature). The result is three price series (Swedish, Norwegian and system prices) with no apparent pattern.
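A sketch with synthetic numbers (invented hourly prices and temperatures, not Nord Pool data): a deterministic daily seasonal component is removed with a classical decomposition, and the deseasonalised price is then regressed on a background variable, leaving the residual as the "pure" price series for further modeling.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.seasonal import seasonal_decompose

rng = np.random.default_rng(5)
idx = pd.date_range("2012-01-01", periods=24 * 365, freq="h")
temperature = 5 + 10 * np.sin(2 * np.pi * np.asarray(idx.dayofyear) / 365)     # synthetic temperature
price = (30 + 5 * np.sin(2 * np.pi * np.asarray(idx.hour) / 24)                # daily price pattern
         - 0.8 * temperature + rng.normal(0, 2, len(idx)))                     # temperature effect + noise
price = pd.Series(price, index=idx)

decomp = seasonal_decompose(price, period=24, model="additive")                # estimate daily seasonality
deseasonalised = price - decomp.seasonal

X = sm.add_constant(pd.Series(temperature, index=idx, name="temperature"))
residual = sm.OLS(deseasonalised, X).fit().resid                               # "pure" price series
print(residual.describe())
```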
Abstract:
Owing to the functional requirements of structural details, brackets with and without a scallop are frequently used in bridges, decks, ships and offshore structures. Scallops are designed to serve as passageways for fluids and to reduce weld length and plate distortions. Moreover, scallops are used to avoid the intersection of two or more welds, because a weld intersection involves an inevitable inherent initial crack (except for a fully penetrated weld) and the formation of a multi-axial stress state. Welding all around the scallop corner increases the possibility of brittle fracture even when the bracket is not loaded by a primary load. Omitting the scallop will leave an initial crack in the corner if the bracket is joined with fillet welds, and if the two weld runs crossed, a 3D residual stress state would result. Therefore, the presence or absence of a scallop calls for a 3D finite element analysis (FEA) of the fatigue resistance of both types of brackets using the effective notch stress (ENS) approach. FEMAP 10.1 with NX NASTRAN was used for the 3D FEA. The first and main objective of this research was to investigate and compare the fatigue resistance of brackets with and without a scallop. The secondary goal was the fatigue design of scallops in case they cannot be avoided for some reason. The fatigue resistance of both types of brackets was determined with the ENS approach using a 1 mm fictitiously rounded radius, based on the IIW recommendation. Identical geometrical, boundary and loading conditions were used for the determination and comparison of the fatigue resistance of both types of brackets using linear 3D FEA. Moreover, the size effect of the bracket length was also studied using 2D shell-element FEA. In the case of brackets with a scallop, the flange plate weld toe at the corner of the scallop was found to exhibit the highest effective notch stress, making the flange plate weld toe critical for fatigue failure, whereas the weld root and weld toe at the weld intersections were the most highly stressed locations for brackets without a scallop. Thus the weld toe for brackets with a scallop, and the weld root and weld toe for brackets without a scallop, were found to be the critical areas for fatigue failure. With identical parameters applied to both types of brackets, brackets without a scallop had the highest effective notch stress except when a fully penetrated weld was used. Furthermore, the fatigue resistance of brackets without a scallop was strongly affected by the lack-of-penetration length, and the effective notch stress was found to decrease as the weld penetration was increased. Although the very presence of a scallop reduces the stiffness and at the same time induces a stress concentration, based on the 3D FEA it can be concluded that using a scallop provided better fatigue resistance when both types of brackets were fillet welded. However, brackets without a scallop had the highest fatigue resistance when a full penetration weld was used. This thesis also showed that the weld toe for brackets with a scallop was the only highly stressed area, unlike brackets without a scallop, in which both the weld toe and weld root were critical locations for fatigue failure, when different types of boundary conditions were used. Weld throat thickness, plate thickness, scallop radius, lack-of-penetration length, boundary conditions and weld quality affected the fatigue resistance of both types of brackets. As a result, the bracket design procedure, and especially the welding quality and post-weld treatment techniques, significantly affect the fatigue resistance of both types of brackets.
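For orientation, the basic S-N evaluation used with the effective notch stress approach can be sketched as follows (stress ranges assumed for illustration, not taken from the thesis models): the fictitiously rounded 1 mm notch stress range is compared against the IIW FAT 225 curve with slope m = 3 to estimate cycles to failure.

```python
def cycles_to_failure(delta_sigma_mpa, fat=225.0, m=3.0, n_ref=2e6):
    """Cycles to failure for a given effective notch stress range (MPa), FAT-curve form N = N_ref (FAT / Δσ)^m."""
    return n_ref * (fat / delta_sigma_mpa) ** m

for stress_range in (200.0, 300.0, 450.0):               # hypothetical notch stress ranges
    print(f"Δσ = {stress_range:5.0f} MPa  →  N ≈ {cycles_to_failure(stress_range):,.0f} cycles")
```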
Abstract:
Multilevel converters provide an attractive solution to bring the benefits of speed-controlled rotational movement to high-power applications. Therefore, multilevel inverters have attracted wide interest both in the academic community and in industry for the past two decades. In this doctoral thesis, modulation methods suitable especially for series-connected H-bridge multilevel inverters are discussed. A concept of duty cycle modulation is presented and a modification of it is proposed. These methods are compared with other well-known modulation schemes, such as space-vector pulse width modulation and carrier-based modulation schemes. The advantage of the modified duty-cycle modulation is its algorithmic simplicity. A similar mathematical formulation for the original duty cycle modulation is proposed. The modified duty cycle modulation is shown to produce well-formed phase-to-neutral voltages that have a lower total harmonic distortion than the space-vector pulse width modulation and the duty cycle modulation. The space-vector-based solution and the duty cycle modulation, on the other hand, result in better-quality line-to-line voltage and current waveforms. The voltages of the DC links in the modules of the series-connected H-bridge inverter are shown to fluctuate while they are under load. The fluctuation causes inaccuracies in the voltage production, which may result in a failure of the flux estimator in the controller. An extension for upper-level modulation schemes, which changes the switching instants of the inverter so that the output voltage meets the reference voltage accurately regardless of the DC link voltages, is proposed. The method is shown to reduce the error to a very low level when a sufficient switching frequency is used. An appropriate way to organize the switching instants of the multilevel inverter is to make only one-level steps at a time. This places restrictions on the dynamical features of the modulation schemes: the produced voltage vector cannot be rotated several tens of degrees in a single switching period without violating the above-mentioned one-level-step rule. The dynamical capabilities of multilevel inverters are analyzed in this doctoral thesis, and it is shown that multilevel inverters are capable of operating even in dynamically demanding metal industry applications. In addition to the discussion on modulation schemes, an overvoltage in multilevel converter drives caused by cable reflection is addressed. The voltage reflection phenomenon in drives with long feeder cables causes premature insulation deterioration and also affects the common-mode voltage, which is one of the main reasons for bearing currents. Bearing currents, in turn, cause fluting in the bearings, which results in premature bearing failure. The reflection phenomenon is traditionally prevented by filtering, but in this thesis, a modulation-based filterless method to mitigate the overvoltage in multilevel drives is proposed. Moreover, the mitigation method can be implemented as an extension to upper-level modulation schemes. The method exploits the oscillations caused by two consecutive voltage edges so that the sum of the oscillations results in a mitigated peak of the overvoltage. The applicability of the method is verified by simulations together with experiments with a full-scale prototype.
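As a rough illustration of the one-level-step idea (this is a generic nearest-level sketch, not the duty cycle modulation or its proposed modification), the snippet below maps a sinusoidal reference for a three-cell series-connected H-bridge onto its seven discrete output levels and verifies that, with a sufficiently fine time grid, the level never changes by more than one step at a time.

```python
import numpy as np

cells = 3                                                # H-bridge cells per phase → 2*cells + 1 levels
t = np.linspace(0, 0.02, 2000)                           # one 50 Hz fundamental period
reference = cells * np.sin(2 * np.pi * 50 * t)           # reference scaled to ±cells
levels = np.clip(np.round(reference), -cells, cells)     # nearest-level output

steps = np.abs(np.diff(levels))
print("distinct output levels:", sorted(set(levels.tolist())))
print("largest level change per instant:", int(steps.max()))   # 1 → only one-level steps occur
```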
Abstract:
The amount of installed wind power has been growing exponentially during the past ten years. As wind turbines have become a significant source of electrical energy, the interactions between the turbines and the electric power network need to be studied more thoroughly than before. In particular, the behavior of the turbines in fault situations is of prime importance; simply disconnecting all wind turbines from the network during a voltage drop is no longer acceptable, since this would contribute to a total network collapse. These requirements have contributed to the increased role of simulations in the study and design of the electric drive train of a wind turbine. When planning a wind power investment, the selection of the site and the turbine is crucial for the economic feasibility of the installation. Economic feasibility, on the other hand, is the factor that determines whether or not investment in wind power will continue, contributing to green electricity production and the reduction of emissions. In the selection of the installation site and the turbine (siting and site matching), the properties of the electric drive train of the planned turbine have so far generally not been taken into account. Additionally, although the loss minimization of some of the individual components of the drive train has been studied, the drive train as a whole has received less attention. Furthermore, as a wind turbine will typically operate at a power level lower than the nominal most of the time, efficiency analysis at the nominal operating point is not sufficient. This doctoral dissertation attempts to combine the two aforementioned areas of interest by studying the applicability of time domain simulations to the analysis of the economic feasibility of a wind turbine. The utilization of a general-purpose time domain simulator, otherwise applied to the study of network interactions and control systems, in the economic analysis of the wind energy conversion system is studied. The main benefits of the simulation-based method over traditional methods based on analytic calculation of losses include the ability to reuse and recombine existing models, the ability to analyze interactions between the components and subsystems in the electric drive train (which is impossible when different subsystems are considered as independent blocks, as is commonly done in the analytical calculation of efficiencies), the ability to analyze in a rather straightforward manner the effect of selections other than physical components, for example control algorithms, and the ability to verify assumptions about the effects of a particular design change on the efficiency of the whole system. Based on the work, it can be concluded that differences between two configurations can be seen in the economic performance with only minor modifications to the simulation models used in the network interaction and control method studies. This eliminates the need to develop analytic expressions for losses and enables the study of the system as a whole instead of modeling it as a series connection of independent blocks with no loss interdependencies. Three example cases (site matching, component selection, control principle selection) are provided to illustrate the usage of the approach and analyze its performance.
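A hedged sketch of the kind of site-matching figure such an analysis ultimately produces (the power curve, partial-load efficiency and Weibull parameters below are all invented for illustration, not simulator output): annual energy yield is estimated by weighting the electrical power, including a partial-load drive-train efficiency, with the site's wind-speed distribution.

```python
import numpy as np

wind = np.linspace(0.0, 25.0, 251)                       # wind speed grid (m/s)
k, a = 2.0, 7.5                                          # assumed Weibull shape and scale for the site
pdf = (k / a) * (wind / a) ** (k - 1) * np.exp(-(wind / a) ** k)

rated_power = 2.0e6                                      # assumed 2 MW turbine
aero = np.clip((wind / 12.0) ** 3, 0.0, 1.0)             # crude cubic power curve, rated at 12 m/s
aero[wind < 3.0] = 0.0                                   # cut-in speed
eta = 0.90 + 0.06 * aero                                 # drive-train efficiency drops at partial load

power = rated_power * aero * eta                         # electrical power vs wind speed (W)
dv = wind[1] - wind[0]
aep_mwh = np.sum(power * pdf) * dv * 8760 / 1e6          # expected annual energy (MWh)
print(f"estimated annual energy production ≈ {aep_mwh:,.0f} MWh")
```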
Abstract:
Modern machine structures are often fabricated by welding. From a fatigue point of view, structural details, and especially welded details, are the most prone to fatigue damage and failure. Design against fatigue requires information on the fatigue resistance of a structure's critical details and the stress loads that act on each detail. Even though dynamic simulation of flexible bodies is already a current method for analyzing structures, obtaining the stress history of a structural detail during dynamic simulation is a challenging task, especially when the detail has a complex geometry. In particular, analyzing the stress history of every structural detail within a single finite element model can be overwhelming, since the number of nodal degrees of freedom needed in the model may require an impractical amount of computational effort. The purpose of computer simulation is to reduce the number of prototypes and speed up the product development process. Also, to take operator influence into account, real-time models, i.e. simplified and computationally efficient models, are required. This, in turn, requires stress computation to be efficient if it is to be performed during dynamic simulation. The research looks back at the theoretical background of multibody dynamic simulation and the finite element method to find suitable parts to form a new approach for efficient stress calculation. This study proposes that the problem of stress calculation during dynamic simulation can be greatly simplified by combining the floating frame of reference formulation with modal superposition and a sub-modeling approach. In practice, the proposed approach can be used to efficiently generate the relevant fatigue assessment stress history for a structural detail during or after dynamic simulation. In this work, numerical examples are presented to demonstrate the proposed approach in practice. The results show that the approach is applicable and can be used as proposed.
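A minimal sketch of the modal-superposition part of such an approach (random modal stress shapes and artificial modal coordinates, not a real FE model or the thesis implementation): the stress history at a detail is recovered as a linear combination of precomputed modal stress shapes weighted by the modal coordinates produced by the flexible multibody simulation.

```python
import numpy as np

rng = np.random.default_rng(6)
n_modes, n_steps, n_stress = 6, 1000, 3                          # modes, time steps, stress components
modal_stress = rng.standard_normal((n_stress, n_modes))          # hypothetical modal stress shapes S

t = np.linspace(0.0, 10.0, n_steps)
modal_coords = np.sin(np.outer(np.arange(1, n_modes + 1), t)) * 1e-3   # q(t) from the multibody simulation

stress_history = modal_stress @ modal_coords                     # sigma(t) = S q(t) for all steps at once
print("stress history shape:", stress_history.shape)             # (3 components, 1000 time steps)
print("peak stress magnitude:", float(np.max(np.linalg.norm(stress_history, axis=0))))
```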
Abstract:
Megillat ha-Megalleh by Abraham bar Hijja (Spain, 12th century) is best known as a collection of messianic calculations. But the book as a whole contains, alongside the calculations, varied material such as philosophy, biblical interpretation and astrology. In the face of the growing Christian influence on the Jews, Bar Hijja argues that the Jewish religion, and in particular the Jews' expectation of the messianic age, is still valid. Drawing on the Jewish tradition, Arabic scientific and other sources, and redefined Christian ideas, Bar Hijja develops a view of history as a deterministic course of events that consists of good and bad times but will eventually culminate in a messianic age for the Jews. The book also contains an extensive astrological commentary on history, which among other things describes the emergence and history of the Christian and Muslim world powers and their relations with the Jews. The book is clearly intended to convince Jews to remain Jews, arguing with the help of both Jewish and non-Jewish material that, regardless of their situation in exile, they have the future they have been waiting for. In the medieval context this appears not only as a religious question but also as a political effort to secure the future of the Jewish community.
Abstract:
By coupling the Boundary Element Method (BEM) and the Finite Element Method (FEM), an algorithm that combines the advantages of both numerical procedures is developed. The main aim of the work concerns the time domain analysis of general three-dimensional wave propagation problems in elastic media. In addition, mathematical and numerical aspects of the related BE-, FE- and BE/FE-formulations are discussed. The coupling algorithm allows investigations of elastodynamic problems with a BE- and an FE-subdomain. In order to assess the performance of the coupling algorithm, two problems are solved and their results compared with other numerical solutions.
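A toy illustration of the coupling idea under strong simplifying assumptions (two 1D spring subdomains, both represented by stiffness matrices, statics instead of time-domain elastodynamics, so this is not the BE/FE algorithm itself): the subdomains are joined by enforcing displacement compatibility and force equilibrium at the shared interface node, which is the same principle the coupling algorithm applies to a BE and an FE subdomain.

```python
import numpy as np

k1, k2 = 100.0, 50.0                                     # assumed subdomain stiffnesses
# Subdomain A: nodes 0 (fixed) and 1 (interface); subdomain B: nodes 1 (interface) and 2 (loaded).
K_A = k1 * np.array([[1.0, -1.0], [-1.0, 1.0]])
K_B = k2 * np.array([[1.0, -1.0], [-1.0, 1.0]])

# Assemble on the global DOFs [u0, u1, u2]; sharing node 1 enforces compatibility,
# summing its contributions enforces interface equilibrium.
K = np.zeros((3, 3))
K[np.ix_([0, 1], [0, 1])] += K_A
K[np.ix_([1, 2], [1, 2])] += K_B
F = np.array([0.0, 0.0, 10.0])                           # external load at node 2

free = [1, 2]                                            # node 0 is fixed
u = np.zeros(3)
u[free] = np.linalg.solve(K[np.ix_(free, free)], F[free])
print("interface and tip displacements:", u[1], u[2])    # series springs: F/k1 and F/k1 + F/k2
```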
Abstract:
This article discusses three possible ways to derive time domain boundary integral representations for elastodynamics. The discussion points out difficulties that may be encountered when using these formulations in practical applications, offers recommendations for selecting the most convenient integral representation for elastodynamic problems, and opens the possibility of deriving simplified schemes. The proper way to take into account initial conditions applied to the body is another interesting topic addressed; it illustrates the main differences between the discussed boundary integral representations, their singularities and possible numerical problems. The correct way to use collocation points outside the analyzed domain is carefully described. Some applications are shown at the end of the paper in order to demonstrate the capabilities of the technique when properly used.
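Written schematically in standard notation (the textbook form, not necessarily the exact expressions compared in the article), the time-domain representations discussed here reduce to a Somigliana-type identity, where * denotes Riemann convolution in time and u*_{ij}, t*_{ij} are the elastodynamic fundamental solutions for displacements and tractions:

```latex
c_{ij}(\xi)\, u_j(\xi, t)
  = \int_{\Gamma} u^{*}_{ij}(x, t; \xi) * t_j(x, t)\, \mathrm{d}\Gamma(x)
  - \int_{\Gamma} t^{*}_{ij}(x, t; \xi) * u_j(x, t)\, \mathrm{d}\Gamma(x)
  + \text{(body-force and initial-condition terms)}
```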