903 results for Tests for Continuous Lifetime Data
Abstract:
This paper describes an automatic device for in situ and continuous monitoring of the ageing process occurring in natural and synthetic resins widely used in art and in the conservation and restoration of cultural artefacts. The results of tests carried out under accelerated ageing conditions are also presented. This easy-to-assemble palm-top device essentially consists of oscillators based on quartz crystal resonators coated with films of the organic materials whose response to environmental stress is to be addressed. The device contains a microcontroller which selects the oscillators at pre-defined time intervals and records and stores their oscillation frequencies. The ageing of the coatings, caused by the environmental stress and resulting in a shift in the oscillation frequency of the modified crystals, can be straightforwardly monitored in this way. The kinetics of this process reflects the level of damage risk associated with a specific microenvironment. In this case, natural and artificial resins broadly employed in art and in the restoration of artistic and archaeological artefacts (dammar and Paraloid B72) were applied onto the crystals. The environmental stress was represented by visible and UV radiation, since the chosen materials are known to be photochemically active to different extents. In the case of dammar, the results obtained are consistent with previous data obtained using bench-top equipment by impedance analysis through discrete measurements, and confirm that the ageing of this material is reflected in the gravimetric response of the modified quartz crystals. As for Paraloid B72, the outcome of the assays indicates that the resin is resistant to visible light but very sensitive to UV irradiation. The use of a continuous monitoring system, apart from being obviously more practical, is essential to identify short-term (i.e. reversible) events, like water vapour adsorption/desorption processes, and to highlight ageing trends or sudden changes in such trends. (C) 2007 Elsevier B.V. All rights reserved.
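The gravimetric response mentioned in the abstract is conventionally described by the Sauerbrey equation, which converts a resonance frequency shift into an areal mass change. The abstract does not state the crystal parameters, so the 10 MHz fundamental frequency and the -50 Hz shift below are assumptions for illustration:

```python
import math

# Standard quartz material constants (CGS units)
RHO_Q = 2.648          # density of quartz, g/cm^3
MU_Q = 2.947e11        # shear modulus of AT-cut quartz, g/(cm*s^2)

def sauerbrey_mass_change(delta_f_hz, f0_hz):
    """Areal mass change (g/cm^2) implied by a frequency shift.

    Sauerbrey: delta_f = -2 f0^2 (dm/A) / sqrt(rho_q * mu_q)
    =>         dm/A   = -delta_f * sqrt(rho_q * mu_q) / (2 f0^2)
    """
    return -delta_f_hz * math.sqrt(RHO_Q * MU_Q) / (2.0 * f0_hz ** 2)

# Example: an assumed -50 Hz shift on an assumed 10 MHz crystal
dm = sauerbrey_mass_change(-50.0, 10e6)   # ~0.22 micrograms per cm^2
```

A negative frequency shift thus maps directly to a positive deposited (or gained) mass, which is why the ageing-induced frequency drift can be read as a gravimetric signal.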
Abstract:
This project is based on Artificial Intelligence (AI) and Digital Image Processing for automatic condition monitoring of sleepers in the railway track. Rail inspection is a very important task in railway maintenance, both for traffic safety and for preventing dangerous situations. Monitoring railway track infrastructure is an important aspect in which periodical inspection of the rail rolling plane is required. To date, inspection of the railroad has been performed manually by trained personnel: a human operator walks along the railway track searching for sleeper anomalies. This way of monitoring is no longer acceptable because of its slowness and subjectivity. Hence, it is desirable to automate such intuitive human skills in order to develop more robust and reliable testing methods. Images of wooden sleepers were used as the data for this project. The aim of this project is to present a vision-based technique for inspecting railway sleepers (wooden planks under the railway track) by automatic interpretation of Non-Destructive Testing (NDT) data, using AI techniques to determine the results of the inspection.
Abstract:
Continuous casting is a casting process that produces steel slabs in a continuous manner, with steel being poured at the top of the caster and a steel strand emerging from the mould below. Molten steel is transferred from the AOD converter to the caster using a ladle. The ladle is designed to be strong and insulated, but complete insulation is never achieved: some of the heat is lost to the refractories by convection and conduction, and heat losses by radiation also occur. It is important to know the temperature of the melt during the process. For this reason, an online model was previously developed to simulate the steel and ladle wall temperatures during the ladle cycle. The model was developed as an ODE-based model using a grey-box modeling technique. The model's performance was acceptable, but it needed to be presented in a user-friendly way. The aim of this thesis work was to design a GUI that presents the steel and ladle wall temperatures calculated by the model and also allows the user to make adjustments to the model. This thesis work also discusses the sensitivity analysis of the different parameters involved and their effects on the different temperature estimates.
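The abstract does not reproduce the thesis's grey-box model, but the kind of ODE it describes — lumped convective plus radiative heat loss from the melt — can be sketched as follows. Every parameter value here (heat transfer coefficient, emissivity, ladle area, steel mass and heat capacity) is an assumption for illustration only, not the thesis's calibration:

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)

def simulate_melt_temperature(t_melt0=1873.0, t_amb=300.0, minutes=60,
                              h=15.0, area=12.0, eps=0.4,
                              mass=100e3, cp=820.0, dt=1.0):
    """Forward-Euler integration of dT/dt = -(Q_conv + Q_rad) / (m * cp).

    All parameters are illustrative assumptions. Returns the temperature
    trajectory (K), one entry per time step.
    """
    temps = [t_melt0]
    t = t_melt0
    for _ in range(int(minutes * 60 / dt)):
        q_conv = h * area * (t - t_amb)                   # convection, W
        q_rad = eps * SIGMA * area * (t**4 - t_amb**4)    # radiation, W
        t -= dt * (q_conv + q_rad) / (mass * cp)
        temps.append(t)
    return temps

temps = simulate_melt_temperature()   # melt cools over one ladle hour
```

With these assumed values the melt loses on the order of 100 K over an hour, with radiation dominating at the initial temperature — the qualitative behaviour the online model tracks.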
Abstract:
Concentrated solar power systems are expected to be sited in desert locations where the direct normal irradiation is above 1800 kWh/m2 per year. These systems include large solar collector assemblies, which account for a significant share of the investment cost. Solar reflectors are the main components of these solar collector assemblies, and dust/sand storms may affect their reflectance properties, either by soiling or by surface abrasion. While soiling can be reverted by cleaning, surface abrasion is a non-reversible degradation. The aim of this project was to study the accelerated aging of second-surface silvered thick-glass solar reflectors under simulated sandstorm conditions and to develop a multi-parametric model relating the specular reflectance loss to dust/sand storm parameters: wind velocity, dust concentration and time of exposure. This project focused on the degradation caused by surface abrasion. Sandstorm conditions were simulated in a prototype environmental test chamber. Material samples (6 cm x 6 cm) were exposed to Arizona coarse test dust. The dust stream impacted the material samples at a perpendicular angle. Both wind velocity and dust concentration were maintained at a stable level during each accelerated aging test. The total exposure time in the test chamber was limited to 1 hour. Each accelerated aging test was interrupted every 4 minutes to measure the specular reflectance of the material sample after cleaning. The accelerated aging test campaign had to be aborted prematurely due to contamination of the dust concentration sensor, so a robust multi-parametric degradation model could not be derived. The experimental data showed that the specular reflectance decreased either linearly or exponentially with exposure time, so that a degradation rate could be defined as a single modeling parameter.
A correlation should be derived to relate this degradation rate to control parameters such as wind velocity and dust/sand concentration. The sandstorm chamber design would have to be updated before performing further accelerated aging test campaigns. The design upgrade should improve both the reliability of the test equipment and the repeatability of accelerated aging tests. An outdoor exposure test campaign should be launched in deserts to learn more about the intensity, frequency and duration of dust/sand storms. This campaign would also serve to correlate the results of outdoor exposure tests with accelerated exposure tests, in order to develop a robust service lifetime prediction model for different types of solar reflector materials.
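If the reflectance loss is taken as linear in exposure time, as the abstract suggests, the degradation rate reduces to a single least-squares slope. A minimal sketch with made-up measurements (the real campaign measured every 4 minutes after cleaning; the values below are illustrative, not the project's data):

```python
import numpy as np

def degradation_rate(t_minutes, reflectance_loss):
    """Through-the-origin least-squares slope: loss ~ rate * t."""
    t = np.asarray(t_minutes, float)
    loss = np.asarray(reflectance_loss, float)
    return float(t @ loss / (t @ t))

# Hypothetical reflectance-loss measurements at 4-minute intervals,
# generated as a 0.002/min linear trend plus small measurement noise
t = np.array([4.0, 8.0, 12.0, 16.0, 20.0])
loss = 0.002 * t + np.array([1e-4, -2e-4, 0.0, 1.5e-4, -1e-4])
rate = degradation_rate(t, loss)   # recovers roughly 0.002 per minute
```

The multi-parametric model the project aimed for would then express this single `rate` as a function of wind velocity and dust concentration.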
Abstract:
This paper generalizes the HEGY-type test to detect seasonal unit roots in data at any frequency, based on the seasonal unit root tests in univariate time series by Hylleberg, Engle, Granger and Yoo (1990). We first introduce the seasonal unit roots, and then derive the mechanism of the HEGY-type test for data with any frequency. Thereafter we provide the asymptotic distributions of our test statistics when different test regressions are employed. We find that the F-statistics for testing conjugate unit roots have the same asymptotic distributions. We then compute the finite-sample and asymptotic critical values for daily and hourly data by a Monte Carlo method. The power and size properties of our test for hourly data are investigated, and we find that including lag augmentations in the auxiliary regression without lag elimination yields the smallest size distortion, and that tests with seasonal dummies included in the auxiliary regression have more power than tests without seasonal dummies. Finally, we apply our test to hourly wind power production data in Sweden and show that there are no seasonal unit roots in the series.
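For concreteness, the quarterly special case of the HEGY auxiliary regression — which the paper generalizes to daily and hourly frequencies — can be sketched with plain least squares. The critical values of the resulting t-statistics are nonstandard and must be simulated, as the paper does; this sketch only builds the regression:

```python
import numpy as np

def hegy_quarterly_tstats(y):
    """OLS estimates and t-statistics for the quarterly HEGY regression
    d4 y_t = pi1*y1_{t-1} + pi2*y2_{t-1} + pi3*y3_{t-2} + pi4*y3_{t-1} + e_t
    """
    y = np.asarray(y, float)
    n = len(y)
    y1 = np.full(n, np.nan)
    y2 = np.full(n, np.nan)
    y3 = np.full(n, np.nan)
    y1[3:] = y[3:] + y[2:-1] + y[1:-2] + y[:-3]     # (1+L)(1+L^2) y
    y2[3:] = -(y[3:] - y[2:-1] + y[1:-2] - y[:-3])  # -(1-L)(1+L^2) y
    y3[2:] = -(y[2:] - y[:-2])                      # -(1-L^2) y
    t = np.arange(4, n)
    d4 = y[t] - y[t - 4]                            # (1-L^4) y
    X = np.column_stack([y1[t - 1], y2[t - 1], y3[t - 2], y3[t - 1]])
    beta, *_ = np.linalg.lstsq(X, d4, rcond=None)
    resid = d4 - X @ beta
    s2 = resid @ resid / (len(t) - X.shape[1])
    tstats = beta / np.sqrt(s2 * np.diag(np.linalg.inv(X.T @ X)))
    return beta, tstats

# Under the seasonal random walk (1 - L^4) y_t = e_t, all pi's are zero;
# finite-sample critical values for the t- and F-tests must be simulated.
rng = np.random.default_rng(0)
y = np.zeros(400)
e = rng.standard_normal(400)
for i in range(4, 400):
    y[i] = y[i - 4] + e[i]
beta, tstats = hegy_quarterly_tstats(y)
```

The paper's generalization constructs the analogous filtered regressors for the many conjugate-root frequencies present in daily and hourly data.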
Abstract:
Background. Continuous subcutaneous insulin infusion (CSII) treatment among children with type 1 diabetes is increasing in Sweden. However, studies evaluating glycaemic control in children using CSII show inconsistent results. Omitting bolus insulin doses when using CSII may reduce glycaemic control among adolescents. The distribution of responsibility for diabetes self-management between children and parents is often unclear and needs clarification. There is much published support for continued parental involvement and shared diabetes management during adolescence. Guided Self-Determination (GSD) is an empowerment-based, person-centred reflection and problem-solving method intended to guide the patient to become self-sufficient and develop life skills for managing difficulties in diabetes self-management. This method has been adapted for adolescents and parents as Guided Self-Determination-Young (GSD-Y). This study aims to evaluate the effect of a GSD-Y intervention in groups of adolescents starting on insulin pumps, and their parents, on diabetes-related family conflicts, perceived health and quality of life (QoL), and metabolic control. Here, we describe the protocol and plans for study enrolment. Methods. This study is designed as a randomized, controlled, prospective, multicentre study. Eighty patients aged 12-18 years who are planning to start CSII will be included. All adolescents and their parents will receive standard insulin pump training. The education intervention will be conducted when CSII is started and at four appointments during the first 4 months after starting CSII. The primary outcome is haemoglobin A1c levels. Secondary outcomes are perceived health and QoL, frequency of blood glucose self-monitoring and bolus doses, and use of carbohydrate counting.
The following instruments will be used to evaluate perceived health and QoL: Disabkids, 'Check your health', the Diabetes Family Conflict Scale and the Swedish Diabetes Empowerment Scale. Outcomes will be evaluated within and between groups by comparing data at baseline, and at 6 and 12 months after starting treatment. Results and discussion. In this study, we will assess the effect of starting an insulin pump together with the model of Guided Self-Determination to determine whether this approach leads to retention of improved glycaemic control, QoL, responsibility distribution and reduced diabetes-related conflicts in the family. Trial registration: Current controlled trials: ISRCTN22444034
Abstract:
Parkinson’s disease (PD) is a neurological disorder of increasing prevalence in an aging society. The motor and non-motor symptoms of PD advance with disease progression and occur with varying frequency and duration. In order to assess the full extent of a patient’s condition, repeated assessments are necessary to adjust the medical prescription. In clinical studies, symptoms are assessed using the unified Parkinson’s disease rating scale (UPDRS). On the one hand, subjective rating using the UPDRS relies on clinical expertise. On the other hand, it requires the physical presence of patients in clinics, which implies high logistical costs. Another limitation of clinical assessment is that observation in hospital may not accurately represent a patient’s situation at home. For such reasons, the practical frequency of tracking PD symptoms may under-represent the true time scale of PD fluctuations and may result in an overall inaccurate assessment. Current technologies for at-home PD treatment are based on data-driven approaches for which the interpretation and reproduction of results are problematic. The overall objective of this thesis is to develop and evaluate unobtrusive computer methods for enabling remote monitoring of patients with PD. It investigates novel signal and image processing techniques, based on both first-principles and data-driven models, for the extraction of clinically useful information from audio recordings of speech (texts read aloud) and video recordings of gait and finger-tapping motor examinations. The aim is to map between PD symptom severities estimated using the novel computer methods and the clinical ratings based on UPDRS part III (motor examination). A web-based test battery system consisting of self-assessment of symptoms and motor function tests was previously constructed for a touch-screen mobile device.
A comprehensive speech framework has been developed for this device to analyze text-dependent running speech by: (1) extracting novel signal features that are able to represent PD deficits in each individual component of the speech system, (2) mapping between clinical ratings and feature estimates of speech symptom severity, and (3) classifying between UPDRS part III severity levels using speech features and statistical machine learning tools. A novel speech processing method called cepstral separation difference showed stronger ability to classify between speech symptom severities compared to existing features of PD speech. In the case of finger tapping, recorded videos of the rapid finger-tapping examination were processed using a novel computer-vision (CV) algorithm that extracts symptom information from video-based tapping signals using motion analysis of the index finger, incorporating a face detection module for signal calibration. This algorithm was able to discriminate between UPDRS part III severity levels of finger tapping with high classification rates. Further analysis was performed on novel CV-based gait features constructed using a standard human model to discriminate between a healthy gait and a Parkinsonian gait. The findings of this study suggest that symptom severity levels in PD can be discriminated with high accuracy by combining first-principles (features) and data-driven (classification) approaches. On the one hand, the processing of audio and video recordings allows remote monitoring of speech, gait and finger-tapping examinations by clinical staff. On the other hand, the first-principles approach eases the understanding of symptom estimates for clinicians. We have demonstrated that the selected features of speech, gait and finger tapping were able to discriminate between symptom severity levels, as well as between healthy controls and PD patients, with high classification rates. The findings support the suitability of these methods as decision support tools in the context of PD assessment.
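As an illustration of the kind of feature such a framework extracts, a tapping rate can be computed from a 1-D index-finger displacement signal sampled from video. This is a generic sketch, not the thesis's CV algorithm; the 30 fps sampling rate and the synthetic 3 Hz signal are assumptions:

```python
import numpy as np

def tapping_rate(signal, fps):
    """Taps per second via upward zero-crossings of the centred signal."""
    x = np.asarray(signal, float) - np.mean(signal)
    crossings = np.sum((x[:-1] < 0) & (x[1:] >= 0))  # one per tap cycle
    duration_s = len(x) / fps
    return crossings / duration_s

fps = 30
t = np.arange(0, 10, 1 / fps)                 # a 10-second recording
sig = np.sin(2 * np.pi * 3 * t + 0.5)         # synthetic ~3 taps/second
rate = tapping_rate(sig, fps)
```

In practice the displacement signal would come from tracking the index finger across video frames (with the face-detection step providing a calibration reference), and rate, amplitude and their decay over time would all feed the severity classifier.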
Abstract:
Background: In Chile, mothers and newborns are separated after caesarean sections. The caesarean section rate in Chile is approximately 40%. Once separated, newborns will miss out on the benefits of early contact unless a suitable model of early newborn contact after caesarean section is initiated. Aim: To describe mothers' experiences and perceptions of a continuous parental model of newborn care after caesarean section during mother-infant separation. Methods: A questionnaire with 4 open-ended questions was used to gather data on the experiences and perceptions of 95 mothers in the obstetric service of Sótero Del Rio Hospital in Chile between 2009 and 2012. Data were analyzed using qualitative content analysis. Results: One theme, family-friendly practice after caesarean section, and four categories emerged. Mothers described the benefits of this model of caring. The father's presence was important to mother and baby. Mothers were reassured that the baby was not left alone with staff. It was important for the mothers to see that the father could love the baby as much as the mother did. This model of care helped create ties between the father and newborn during the period of mother-infant separation, and later with the mother. Conclusions: Family-friendly practice after caesarean section was an important health care intervention for the whole family. This model could be applied in the Chilean context in the case of complicated births and all caesarean sections. Clinical Implications: In the Chilean context, there is the potential to increase the number of parents who get to hold their baby immediately after birth and for as long as they like. When mother and infant are separated after birth, parents can be informed about the benefits of this caring model. Further research using randomized controlled trials may support biological advantages.
Abstract:
The accurate measurement of a vehicle’s velocity is an essential feature in adaptive vehicle-activated sign systems. Since the velocities of the vehicles are acquired from a continuous-wave Doppler radar, data collection becomes challenging. Data accuracy is sensitive to the calibration of the radar on the road. However, clear methodologies for in-field calibration have not been carefully established, and the signs are often installed by subjective judgment, which results in measurement errors. This paper develops a calibration method based on mining the collected data and matching individual vehicles travelling between two radars. The data were prepared in two ways: by cleaning and by reconstruction. The results showed that the proposed correction factor derived from the cleaned data corresponded well with the experimental factor obtained on site. In addition, the proposed factor showed superior performance to the one derived from the reconstructed data.
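Once individual vehicles have been matched between a reference radar and the radar under calibration, one simple way to express a correction factor is a through-the-origin least-squares slope over the matched speed pairs. The matching and cleaning pipeline is the paper's contribution and is not reproduced here; the speeds below are hypothetical:

```python
import numpy as np

def correction_factor(v_reference, v_measured):
    """Slope k minimizing sum (v_ref - k * v_mea)^2, i.e. the factor by
    which the miscalibrated radar's readings should be multiplied."""
    v_ref = np.asarray(v_reference, float)
    v_mea = np.asarray(v_measured, float)
    return float(v_ref @ v_mea / (v_mea @ v_mea))

# Hypothetical matched speeds (km/h): the test radar reads 5% low
v_ref = np.array([52.0, 61.0, 48.5, 70.2, 55.1])
v_mea = v_ref * 0.95
k = correction_factor(v_ref, v_mea)   # recovers 1/0.95
```

A factor estimated this way from cleaned matched pairs is the kind of quantity the paper compares against the experimental on-site factor.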
Abstract:
Basic information theory is used to analyse the amount of confidential information which may be leaked by programs written in a very simple imperative language. In particular, a detailed analysis is given of the possible leakage due to equality tests and if statements. The analysis is presented as a set of syntax-directed inference rules and can readily be automated.
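A minimal example of the quantity such an analysis bounds: the expected Shannon leakage of a single equality test on a secret uniform over n values. (This is a standard information-theoretic computation, not the paper's syntax-directed inference rules.)

```python
import math

def equality_test_leakage(n):
    """H(S) - E[H(S | outcome)] in bits for `secret == guess`,
    with the secret uniform over n values.

    outcome True  (prob 1/n):      secret fully revealed, entropy 0
    outcome False (prob (n-1)/n):  secret uniform over n - 1 values
    """
    p_match = 1.0 / n
    residual = (1.0 - p_match) * math.log2(n - 1)
    return math.log2(n) - residual

leak = equality_test_leakage(2 ** 16)   # tiny: one guess reveals little
```

A test against a secret with only two possible values leaks the full bit, while a test against a 16-bit secret leaks well under a thousandth of a bit on average — the asymmetry that makes single equality tests usually, but not always, safe.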
Abstract:
This paper investigates the presence of long memory in financial time series using four test statistics: V/S, KPSS, KS and modified R/S. There has been a large amount of study on long memory behavior in economic and financial time series; however, there is still no consensus. We argue in this paper that spurious short memory may be found due to the incorrect use of a data-dependent bandwidth to estimate the long-run variance. We propose a partially adaptive lag truncation procedure that is robust against the presence of long memory under the alternative hypothesis and revisit several economic and financial time series using the proposed bandwidth choice. Our results indicate the existence of spurious short memory in real exchange rates when Andrews' formula is employed, but long memory is detected when the proposed lag truncation procedure is used. Using stock market data, we also find short memory in returns and long memory in volatility.
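The classical rescaled-range statistic underlying the modified R/S test mentioned above (the modified version replaces the plain standard deviation with a long-run variance estimate, which is exactly where the bandwidth choice enters) can be sketched as:

```python
import numpy as np

def rescaled_range(x):
    """Classical R/S: range of the partial sums of deviations from the
    mean, scaled by the plain standard deviation."""
    x = np.asarray(x, float)
    z = x - x.mean()
    cum = np.cumsum(z)                  # partial-sum (bridge) process
    r = cum.max() - cum.min()           # its range
    s = x.std(ddof=0)
    return r / s

rng = np.random.default_rng(1)
stat = rescaled_range(rng.standard_normal(1000))   # grows like sqrt(n)
```

Under short memory R/S grows like n^0.5; long memory pushes the growth exponent above 0.5, which is what the four statistics probe in different ways.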
Abstract:
Motivated by the development of a graphical representation of networks with a large number of vertices, useful for collaborative filtering applications, this work proposes the use of cohesion surfaces over a multidimensionally scaled thematic base. To that end, it combines classical multidimensional scaling and Procrustes analysis in an iterative algorithm that produces partial solutions, which are then combined into a global solution. Applied to an example of book-loan transactions at the Biblioteca Karl A. Boedecker, the proposed algorithm produces interpretable and thematically coherent outputs and exhibits lower stress than the classical scaling solution.
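The classical (Torgerson) multidimensional scaling step the algorithm builds on can be sketched as an eigendecomposition of the double-centered squared-distance matrix; the iterative Procrustes combination of partial solutions, which is the work's contribution, is not reproduced here:

```python
import numpy as np

def classical_mds(d, k=2):
    """Embed a pairwise-distance matrix d into k dimensions."""
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    b = -0.5 * j @ (d ** 2) @ j                  # double-centered Gram
    w, v = np.linalg.eigh(b)
    idx = np.argsort(w)[::-1][:k]                # top-k eigenpairs
    return v[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Points on a line are recovered exactly (up to translation/reflection)
pts = np.array([[0.0], [1.0], [3.0]])
d = np.abs(pts - pts.T)                          # pairwise distances
x = classical_mds(d, k=1)
```

On subsets of a large network, each partial embedding is only determined up to rotation and reflection, which is precisely why a Procrustes alignment is needed before the partial solutions can be merged.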
Abstract:
Panel cointegration techniques applied to pooled data for 27 economies for the period 1960-2000 indicate that: i) government spending in education and innovation indicators are cointegrated; ii) education hierarchy is relevant when explaining innovation; and iii) the relation between education and innovation can be obtained after an accommodation of a level structural break.
Abstract:
It is well known that cointegration between the levels of two variables (labeled Yt and yt in this paper) is a necessary condition to assess the empirical validity of a present-value model (PV and PVM, respectively, hereafter) linking them. The work on cointegration has been so prevalent that it is often overlooked that another necessary condition for the PVM to hold is that the forecast error entailed by the model is orthogonal to the past. The basis of this result is the use of rational expectations in forecasting future values of variables in the PVM. If this condition fails, the present-value equation will not be valid, since it will contain an additional term capturing the (non-zero) conditional expected value of future error terms. Our article has a few novel contributions, but two stand out. First, in testing for PVMs, we advise splitting the restrictions implied by PV relationships into orthogonality conditions (or reduced-rank restrictions) before additional tests on the values of parameters. We show that PV relationships entail a weak-form common feature relationship, as in Hecq, Palm, and Urbain (2006) and in Athanasopoulos, Guillén, Issler and Vahid (2011), and also a polynomial serial-correlation common feature relationship, as in Cubadda and Hecq (2001); these represent restrictions on dynamic models that allow several tests for the existence of PV relationships to be used. Because these relationships occur mostly with financial data, we propose tests based on generalized method of moments (GMM) estimates, where it is straightforward to propose robust tests in the presence of heteroskedasticity. We also propose a robust Wald test developed to investigate the presence of reduced-rank models. Their performance is evaluated in a Monte Carlo exercise.
Second, in the context of asset pricing, we propose applying a permanent-transitory (PT) decomposition based on Beveridge and Nelson (1981), which focuses on extracting the long-run component of asset prices, a key concept in modern financial theory as discussed in Alvarez and Jermann (2005), Hansen and Scheinkman (2009), and Nieuwerburgh, Lustig, and Verdelhan (2010). Here again we can exploit the results developed in the common cycle literature to easily extract permanent and transitory components under both long- and short-run restrictions. The techniques discussed herein are applied to long-span annual data on long- and short-term interest rates and on prices and dividends for the U.S. economy. In both applications we do not reject the existence of a common cyclical feature vector linking the two series. Extracting the long-run component shows the usefulness of our approach and highlights the presence of asset-pricing bubbles.
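For the simplest univariate case, where the first difference of the series follows an AR(1), the Beveridge-Nelson permanent component is the current level plus all forecastable future growth. A sketch under that assumption (the paper's multivariate, common-cycle implementation is richer):

```python
import numpy as np

def bn_decompose_ar1(y):
    """Beveridge-Nelson PT decomposition assuming
    dy_t - mu = phi * (dy_{t-1} - mu) + e_t.

    Permanent component: P_t = y_t + phi/(1-phi) * (dy_t - mu),
    i.e. the level plus the sum of all forecastable future changes.
    """
    y = np.asarray(y, float)
    dy = np.diff(y)
    mu = dy.mean()
    x = dy - mu
    phi = float(x[1:] @ x[:-1] / (x[:-1] @ x[:-1]))  # OLS AR(1) slope
    permanent = y[1:] + (phi / (1.0 - phi)) * x
    transitory = y[1:] - permanent
    return permanent, transitory, phi

# Simulated series whose growth is AR(1) with phi = 0.5
rng = np.random.default_rng(2)
dy = np.zeros(500)
for t in range(1, 500):
    dy[t] = 0.01 + 0.5 * dy[t - 1] + 0.1 * rng.standard_normal()
y = np.cumsum(dy)
perm, trans, phi = bn_decompose_ar1(y)
```

By construction the permanent and transitory pieces sum back to the series, and the transitory part is the stationary "cycle" that the common-feature restrictions in the paper tie across assets.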
Abstract:
The objective of this paper is to test for the optimality of consumption decisions at the aggregate level (representative consumer), taking into account popular deviations from the canonical CRRA utility model: rule-of-thumb and habit behavior. First, we show that rule-of-thumb behavior in consumption is observationally equivalent to behavior obtained by the optimizing model of King, Plosser and Rebelo (Journal of Monetary Economics, 1988), casting doubt on how reliable standard rule-of-thumb tests are. Second, although Carroll (2001) and Weber (2002) have criticized the linearization and testing of Euler equations for consumption, we provide a deeper critique directly applicable to current rule-of-thumb tests. Third, we show that there is no reason why return aggregation cannot be performed in the nonlinear setting of the Asset-Pricing Equation, since the latter is a linear function of individual returns. Fourth, aggregation of the nonlinear Euler equation forms the basis of a novel test of deviations from the canonical CRRA model of consumption in the presence of rule-of-thumb and habit behavior. We estimated 48 Euler equations using GMM, with encouraging results vis-a-vis the optimality of consumption decisions: at the 5% level, we rejected optimality only twice out of 48 times. The empirical test results show that we can still rely on the canonical CRRA model so prevalent in macroeconomics: out of 24 regressions, we found the rule-of-thumb parameter to be statistically significant at the 5% level only twice, and the habit γ parameter to be statistically significant on four occasions. The main message of this paper is that proper return aggregation is critical to study intertemporal substitution in a representative-agent framework. In this case, we find little evidence of lack of optimality in consumption decisions, and deviations from the CRRA utility model along the lines of rule-of-thumb behavior and habit in preferences represent the exception, not the rule.
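The canonical CRRA Euler equation at the heart of such an exercise is E[beta * (c_{t+1}/c_t)^(-gamma) * R_{t+1}] = 1. A toy estimation on simulated data, minimizing squared sample moments over a grid (the paper's actual GMM setup with 48 equations, instruments and habit/rule-of-thumb terms is not reproduced; all numbers below are assumptions):

```python
import numpy as np

def gmm_objective(beta, gamma, growth, returns, instrument):
    """Sum of squared sample moments of the CRRA Euler residual."""
    err = beta * growth ** (-gamma) * returns - 1.0   # Euler residual
    m1 = err.mean()                                   # E[err]     = 0
    m2 = (err * instrument).mean()                    # E[err * z] = 0
    return m1 * m1 + m2 * m2

rng = np.random.default_rng(3)
n = 2000
g = 1.02 + 0.02 * rng.standard_normal(n)      # gross consumption growth
true_beta, true_gamma = 0.97, 2.0
# Returns constructed so the Euler equation holds in expectation
r = (g ** true_gamma / true_beta) * (1.0 + 0.05 * rng.standard_normal(n))

# Brute-force grid search (a stand-in for a proper GMM optimizer)
best = min(((gmm_objective(b, gam, g, r, g), b, gam)
            for b in np.linspace(0.90, 1.00, 41)
            for gam in np.linspace(0.5, 4.0, 36)),
           key=lambda t: t[0])
obj, beta_hat, gamma_hat = best
```

With low consumption-growth volatility, gamma is only weakly identified by unconditional moments alone — one reason instrument choice and the aggregation of returns matter so much in this literature.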