906 results for Classical measurement error model
Abstract:
In this paper we propose a method for computing JPEG quantization matrices for a given mean square error (MSE) or PSNR. We then employ the method to compute definition scripts for the JPEG standard progressive operation mode using a quantization-based approach. It is therefore no longer necessary to use a trial-and-error procedure to obtain a desired PSNR and/or definition script, which reduces cost. First, we establish a relationship between a Laplacian source and its uniform quantization error. We apply this model to the coefficients obtained in the discrete cosine transform stage of the JPEG standard. An image may then be compressed with the JPEG standard under a global MSE (or PSNR) constraint and a set of local constraints determined by the JPEG standard and visual criteria. Second, we study the JPEG standard progressive operation mode from a quantization-based approach. A relationship between the measured image quality at a given stage of the coding process and a quantization matrix is found, so the definition-script construction problem can be reduced to a quantization problem. Simulations show that our method generates better quantization matrices than the classical method based on scaling the JPEG default quantization matrix. The estimated PSNR usually has an error smaller than 1 dB, and this figure decreases for high PSNR values. Definition scripts can be generated that avoid an excessive number of stages and remove small stages that do not contribute a noticeable image quality improvement during decoding.
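As a rough numerical illustration of the kind of relationship this abstract describes (not the authors' method), the following sketch draws Laplacian-distributed coefficients, applies a uniform quantizer, and searches for the step size that meets a target per-coefficient MSE; the Laplacian scale, step grid and MSE budget are arbitrary placeholders.

```python
# Illustrative sketch: numerically relating a uniform quantization step to the
# MSE/PSNR of Laplacian-distributed DCT coefficients (placeholder parameters).
import numpy as np

rng = np.random.default_rng(0)

def quantization_mse(scale, step, n=200_000):
    """Empirical MSE of uniform quantization of a Laplacian source."""
    x = rng.laplace(loc=0.0, scale=scale, size=n)   # hypothetical AC coefficient model
    x_hat = step * np.round(x / step)               # uniform quantizer, step ~ Q-matrix entry
    return np.mean((x - x_hat) ** 2)

def psnr_from_mse(mse, peak=255.0):
    return 10.0 * np.log10(peak ** 2 / mse)

# Example: search for the step size that meets a target per-coefficient MSE budget.
target_mse = 4.0
steps = np.linspace(0.5, 30.0, 60)
errors = np.array([quantization_mse(scale=8.0, step=s) for s in steps])
best = steps[int(np.argmin(np.abs(errors - target_mse)))]
print(f"step ~ {best:.2f} gives MSE ~ {target_mse} (PSNR ~ {psnr_from_mse(target_mse):.1f} dB)")
```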
Abstract:
This work is devoted to the problem of reconstructing the basis weight structure of the paper web with black-box techniques. The data analyzed come from a real paper machine and were collected by an off-line scanner. The principal mathematical tool used in this work is Autoregressive Moving Average (ARMA) modelling. When coupled with the Discrete Fourier Transform (DFT), it gives a very flexible and interesting tool for analyzing properties of the paper web. In a simplified version of our algorithm, ARMA and DFT are used independently to represent the given signal, but the final goal is to combine the two. The Ljung-Box Q-statistic lack-of-fit test, combined with the Root Mean Squared Error coefficient, provides a tool to separate significant signals from noise.
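A minimal sketch of the kind of toolchain mentioned here, assuming synthetic data and an arbitrary ARMA order: fit an ARMA model with statsmodels, compute the RMSE of the residuals, and apply the Ljung-Box Q lack-of-fit test to check whether significant structure remains.

```python
# Minimal sketch (assumed data and model order): fit an ARMA model to a scanner-like
# signal, check residual whiteness with the Ljung-Box Q-test, and report RMSE.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(1)
y = np.sin(np.linspace(0, 20 * np.pi, 1000)) + 0.3 * rng.standard_normal(1000)  # stand-in signal

model = ARIMA(y, order=(2, 0, 1)).fit()        # ARMA(2,1); the order is illustrative
resid = model.resid
rmse = np.sqrt(np.mean(resid ** 2))

lb = acorr_ljungbox(resid, lags=[10])          # H0: residuals are white noise
print(f"RMSE = {rmse:.3f}")
print(lb)                                      # a large p-value suggests the model captured the structure
```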
Abstract:
BACKGROUND: Left atrial (LA) dilatation is associated with a large variety of cardiac diseases. Current cardiovascular magnetic resonance (CMR) strategies to measure LA volumes are based on multi-breath-hold multi-slice acquisitions, which are time-consuming and susceptible to misregistration. AIM: To develop a time-efficient single breath-hold 3D CMR acquisition and reconstruction method to precisely measure LA volumes and function. METHODS: A highly accelerated compressed-sensing multi-slice cine sequence (CS-cineCMR) was combined with a non-model-based 3D reconstruction method to measure LA volumes with high temporal and spatial resolution during a single breath-hold. This approach was validated in LA phantoms of different shapes and applied in 3 patients. In addition, the influence of slice orientation on accuracy was evaluated in the LA phantoms for the new approach in comparison with a conventional model-based biplane area-length reconstruction. As a reference in patients, a self-navigated high-resolution whole-heart 3D dataset (3D-HR-CMR) was acquired during mid-diastole to yield accurate LA volumes. RESULTS: Phantom studies: LA volumes were accurately measured by CS-cineCMR with a mean difference of -4.73 ± 1.75 ml (-8.67 ± 3.54%, r2 = 0.94). For the new method the calculated volumes were not significantly different when different orientations of the CS-cineCMR slices were applied to cover the LA phantoms. Long-axis slices "aligned" vs "not aligned" with the phantom long-axis yielded similar differences vs the reference volume (-4.87 ± 1.73 ml vs. -4.45 ± 1.97 ml, p = 0.67), as did short-axis slices "perpendicular" vs. "not perpendicular" to the LA long-axis (-4.72 ± 1.66 ml vs. -4.75 ± 2.13 ml; p = 0.98). The conventional biplane area-length method was susceptible to slice orientation (p = 0.0085 for the interaction of "slice orientation" and "reconstruction technique", 2-way ANOVA for repeated measures). To use the 3D-HR-CMR as the reference for LA volumes in patients, it was validated in the LA phantoms (mean difference: -1.37 ± 1.35 ml, -2.38 ± 2.44%, r2 = 0.97). Patient study: The CS-cineCMR LA volumes of the mid-diastolic frame matched closely with the reference LA volume (measured by 3D-HR-CMR) with a difference of -2.66 ± 6.5 ml (3.0% underestimation; true LA volumes: 63 ml, 62 ml, and 395 ml). Finally, high intra- and inter-observer agreement for maximal and minimal LA volume measurement is also shown. CONCLUSIONS: The proposed method combines a highly accelerated single breath-hold compressed-sensing multi-slice CMR technique with a non-model-based 3D reconstruction to accurately and reproducibly measure LA volumes and function.
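For readers unfamiliar with the agreement statistics reported above (mean difference ± SD, percentage difference, r2), a small sketch with placeholder volumes shows how they are computed; the numbers are not the study's data.

```python
# Illustrative agreement analysis (placeholder data, not the study's measurements):
# bias ± SD, percentage difference and r2 between a test method and a reference.
import numpy as np

reference = np.array([60.0, 65.0, 70.0, 80.0, 95.0])   # hypothetical reference LA volumes (ml)
test      = np.array([56.0, 60.5, 64.8, 75.1, 90.2])   # hypothetical test-method volumes (ml)

diff = test - reference
bias, sd = diff.mean(), diff.std(ddof=1)
pct = 100.0 * diff / reference
r2 = np.corrcoef(test, reference)[0, 1] ** 2

print(f"mean difference = {bias:.2f} ± {sd:.2f} ml "
      f"({pct.mean():.2f} ± {pct.std(ddof=1):.2f}%), r2 = {r2:.2f}")
```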
Abstract:
Electrical impedance tomography (EIT) allows the measurement of intra-thoracic impedance changes related to cardiovascular activity. As a safe and low-cost imaging modality, EIT is an appealing candidate for non-invasive and continuous haemodynamic monitoring. EIT has recently been shown to allow the assessment of aortic blood pressure via the estimation of the aortic pulse arrival time (PAT). However, finding the aortic signal within EIT image sequences is a challenging task: the signal has a small amplitude and is difficult to locate due to the small size of the aorta and the inherent low spatial resolution of EIT. In order to most reliably detect the aortic signal, our objective was to understand the effect of EIT measurement settings (electrode belt placement, reconstruction algorithm). This paper investigates the influence of three transversal belt placements and two commonly-used difference reconstruction algorithms (Gauss-Newton and GREIT) on the measurement of aortic signals in view of aortic blood pressure estimation via EIT. A magnetic resonance imaging based three-dimensional finite element model of the haemodynamic bio-impedance properties of the human thorax was created. Two simulation experiments were performed with the aim to (1) evaluate the timing error in aortic PAT estimation and (2) quantify the strength of the aortic signal in each pixel of the EIT image sequences. Both experiments reveal better performance for images reconstructed with Gauss-Newton (with a noise figure of 0.5 or above) and a belt placement at the height of the heart or higher. According to the noise-free scenarios simulated, the uncertainty in the analysis of the aortic EIT signal is expected to induce blood pressure errors of at least ± 1.4 mmHg.
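A simplified sketch of the aortic pulse arrival time (PAT) estimation idea on a synthetic waveform, assuming a known R-peak time and a 10%-of-amplitude foot definition; in real use the waveform would be a pixel time course from EIT images reconstructed with Gauss-Newton or GREIT, which is not shown here.

```python
# Simplified sketch (synthetic waveform, not EIT reconstruction): estimate PAT as
# the delay from the R-peak to the foot of the aortic impedance pulse, with the
# foot taken at 10% of the pulse amplitude.
import numpy as np

fs = 100.0                                   # assumed EIT frame rate (Hz)
t = np.arange(0.0, 1.0, 1.0 / fs)
r_peak_time = 0.10                           # assumed R-peak time within the beat (s)
true_arrival = 0.22                          # arrival of the pulse at the aortic pixel (s)
rng = np.random.default_rng(2)
pulse = np.where(t >= true_arrival, 1.0 - np.exp(-(t - true_arrival) / 0.05), 0.0)
pulse += 0.01 * rng.standard_normal(t.size)  # measurement noise

def foot_time(t, x, frac=0.1):
    """Time at which the waveform first exceeds `frac` of its peak amplitude."""
    x = x - x[:5].mean()                     # remove pre-pulse baseline
    idx = int(np.argmax(x >= frac * x.max()))
    return t[idx]

pat = foot_time(t, pulse) - r_peak_time
print(f"estimated aortic PAT ~ {pat * 1000:.0f} ms")
```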
Abstract:
Cerebral creatine deficiency syndromes (CCDS) are caused by mutations in the genes GATM and GAMT (respectively coding for AGAT and GAMT, the two enzymes of the creatine synthesis pathway) as well as SLC6A8 (creatine transporter), and lead to the absence or a very strong decrease of creatine (Cr) in the brain as measured by magnetic resonance spectroscopy. Affected patients show severe neurological impairments. While AGAT- and GAMT-deficient patients can be treated with high doses of Cr, most are left with irreversible brain sequelae. No treatment has been successful so far for SLC6A8 deficiency. While many models have helped in understanding the cerebral Cr pathways under physiological conditions, the pathomechanisms underlying CCDS are yet to be elucidated.
Transgenic mice carrying mutations in the Gatm, Gamt and Slc6a8 genes have been developed, but they only partially mimic the human pathology. Among the CCDS, GAMT deficiency is the most severe, due to the CNS accumulation of the intermediate guanidinoacetate (GAA). While the brain toxicity of GAA has been explored through direct GAA exposure of healthy adult animals, the mechanisms underlying GAA toxicity in GAMT-deficiency conditions in the developing CNS are still unknown. The aim of this project was thus to develop and characterize a GAMT deficiency model in developing brain cells by gene knockdown, using adeno-associated virus (AAV)-driven RNA interference (RNAi) in rat 3D organotypic primary brain cell cultures in aggregates. scAAV2 at a multiplicity of infection of 1000 proved the most efficient serotype: it transduced all brain cell types (neurons, astrocytes, oligodendrocytes) and induced a maximal GAMT protein knockdown of 85% (day in vitro 18). Metabolite analysis showed that this partial GAMT knockdown was insufficient to induce Cr deficiency but generated the expected GAA accumulation, at concentrations comparable to the levels observed in the cerebrospinal fluid of GAMT-deficient patients. Accumulated GAA induced axonal hypersprouting paralleled by an inhibition of natural apoptosis, followed by a later induction of non-apoptotic cell death. Cr co-treatment prevented all GAA-induced toxic effects. This work shows that GAA accumulation without Cr deficiency is sufficient to affect CNS development, and suggests that additional partial GAMT deficiencies, which may not show the classical brain Cr deficiency, may be discovered through GAA measurement, including by the recently proposed neonatal screening programs for GAMT deficiency.
Abstract:
Over the last decades, calibration techniques have been widely used to improve the accuracy of robots and machine tools, since they only involve software modification instead of changes to the design and manufacture of the hardware. Traditionally, four steps are required for a calibration: error modeling, measurement, parameter identification and compensation. The objective of this thesis is to propose a method for the kinematics analysis and error modeling of a newly developed hybrid redundant robot, the IWR (Intersector Welding Robot), which possesses ten degrees of freedom (DOF): 6 DOF in parallel and an additional 4 DOF in serial. The problems of kinematics modeling and error modeling of the proposed IWR robot are discussed. Based on the vector arithmetic method, the kinematics model and the sensitivity model of the end-effector with respect to the structural parameters are derived and analyzed. The relations between the pose (position and orientation) accuracy and manufacturing tolerances, actuation errors, and connection errors are formulated. Computer simulation is performed to examine the validity and effectiveness of the proposed method.
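A toy sketch of sensitivity-based error modeling on a planar two-link serial arm (not the 10-DOF IWR): the Jacobian of the end-effector position with respect to structural parameters is obtained by finite differences and multiplied by assumed manufacturing tolerances to give a first-order pose error estimate.

```python
# Illustrative sketch (planar 2-link serial arm, not the IWR robot): numerical
# sensitivity of the end-effector position to structural parameters.
import numpy as np

def forward_kinematics(params, q):
    """params = link lengths [l1, l2] (m); q = joint angles [q1, q2] (rad)."""
    l1, l2 = params
    q1, q2 = q
    x = l1 * np.cos(q1) + l2 * np.cos(q1 + q2)
    y = l1 * np.sin(q1) + l2 * np.sin(q1 + q2)
    return np.array([x, y])

def sensitivity(params, q, eps=1e-6):
    """Jacobian d(position)/d(structural parameters) via central differences."""
    J = np.zeros((2, len(params)))
    for j in range(len(params)):
        dp = np.zeros(len(params)); dp[j] = eps
        J[:, j] = (forward_kinematics(params + dp, q)
                   - forward_kinematics(params - dp, q)) / (2 * eps)
    return J

params = np.array([0.8, 0.6])          # nominal link lengths (m)
q = np.array([0.3, 0.9])               # a sample joint configuration
J = sensitivity(params, q)
tolerances = np.array([1e-3, 1e-3])    # assumed manufacturing tolerances (m)
pose_error = J @ tolerances            # first-order end-effector error estimate
print(J, pose_error)
```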
Abstract:
In this Master's thesis, agent-based modeling has been used to analyze maintenance-strategy-related phenomena. The main research question answered was: what does the agent-based model built for this study tell us about how different maintenance strategy decisions affect the profitability of equipment owners and maintenance service providers? Thus, the main outcome of this study is an analysis of how profitability can be increased in an industrial maintenance context. To answer that question, a literature review of maintenance strategy, agent-based modeling, and maintenance modeling and optimization was first conducted. This review provided the basis for building the agent-based model, which followed a standard simulation modeling procedure. The simulation results from the agent-based model were then used to answer the research question. Specifically, the results of the modeling and this study are: (1) optimizing the point at which a machine is maintained increases profitability for the owner of the machine, and also for the maintainer under certain conditions; (2) time-based pricing of maintenance services leads to a zero-sum game between the parties; (3) value-based pricing of maintenance services leads to a win-win game between the parties, if the owners of the machines share a substantial amount of their value with the maintainers; and (4) error in machine condition measurement is a critical parameter in optimizing maintenance strategy, and there is real systemic value in having more accurate machine condition measurement systems.
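A stripped-down sketch of the kind of agent-based logic described above (not the thesis model): a machine degrades stochastically, maintenance is triggered when the measured condition crosses a threshold, and owner and maintainer profits are tracked. All parameters are invented for illustration, but the run shows how condition-measurement error interacts with the maintenance threshold.

```python
# Stripped-down maintenance simulation (invented parameters): degradation,
# noisy condition measurement, threshold-triggered maintenance, and profits.
import random

def simulate(threshold, meas_noise, horizon=10_000, seed=0):
    rng = random.Random(seed)
    condition, owner_profit, maintainer_profit = 1.0, 0.0, 0.0
    for _ in range(horizon):
        condition -= rng.uniform(0.0, 0.002)            # stochastic degradation
        owner_profit += 1.0 * condition                 # revenue scales with condition
        measured = condition + rng.gauss(0.0, meas_noise)
        if measured < threshold or condition <= 0.0:    # condition-based maintenance
            fee = 20.0                                  # assumed time-based service fee
            owner_profit -= fee
            maintainer_profit += fee - 5.0              # fee minus maintainer's cost
            condition = 1.0                             # machine restored
    return owner_profit, maintainer_profit

for noise in (0.0, 0.05, 0.2):
    print(noise, simulate(threshold=0.4, meas_noise=noise))
```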
Abstract:
Longitudinal surveys are increasingly used to collect event history data on person-specific processes such as transitions between labour market states. Survey-based event history data pose a number of challenges for statistical analysis. These challenges include survey errors due to sampling, non-response, attrition and measurement. This study deals with non-response, attrition and measurement errors in event history data and the bias they cause in event history analysis. The study also discusses some choices faced by a researcher using longitudinal survey data for event history analysis and demonstrates their effects. These choices include whether a design-based or a model-based approach is taken, which subset of data to use and, if a design-based approach is taken, which weights to use. The study takes advantage of the possibility to use combined longitudinal survey-register data. The Finnish subset of the European Community Household Panel (FI ECHP) survey for waves 1–5 was linked at person level with longitudinal register data. Unemployment spells were used as the study variables of interest. Lastly, a simulation study was conducted in order to assess the statistical properties of the Inverse Probability of Censoring Weighting (IPCW) method in a survey data context. The study shows how combined longitudinal survey-register data can be used to analyse and compare the non-response and attrition processes, test the missingness mechanism type and estimate the size of the bias due to non-response and attrition. In our empirical analysis, initial non-response turned out to be a more important source of bias than attrition. Reported unemployment spells were subject to seam effects, omissions and, to a lesser extent, overreporting. The use of proxy interviews tended to cause spell omissions. An often-ignored phenomenon, classification error in reported spell outcomes, was also found in the data. Neither the Missing At Random (MAR) assumption about the non-response and attrition mechanisms, nor the classical assumptions about measurement errors, turned out to be valid. Measurement errors in both spell durations and spell outcomes were found to cause bias in estimates from event history models. Low measurement accuracy affected the estimates of the baseline hazard most. The design-based estimates based on data from respondents to all waves of interest, weighted by the last-wave weights, displayed the largest bias. Using all the available data, including the spells of attriters until the time of attrition, helped to reduce attrition bias. Lastly, the simulation study showed that the IPCW correction to design weights reduces bias due to dependent censoring in design-based Kaplan-Meier and Cox proportional hazards model estimators. The study discusses the implications of the results for survey organisations collecting event history data, researchers using surveys for event history analysis, and researchers who develop methods to correct for non-sampling biases in event history data.
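A minimal sketch of IPCW-weighted survival estimation with the lifelines package on synthetic spell data. For simplicity the censoring model here is marginal; the dependent-censoring setting discussed above would call for a covariate-dependent censoring model whose inverse-probability weights are multiplied into the survey design weights.

```python
# Minimal IPCW sketch (synthetic spells, not the FI ECHP data): estimate the
# censoring survival function G(t), weight events by 1/G(T), and fit a weighted
# Kaplan-Meier estimator.
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(3)
n = 2000
event_time = rng.exponential(12.0, n)          # e.g. unemployment spell length (months)
censor_time = rng.exponential(20.0, n)         # attrition / end of follow-up
T = np.minimum(event_time, censor_time)
E = (event_time <= censor_time).astype(int)

# Censoring survival function G(t) from a KM fit on the censoring indicator.
kmf_c = KaplanMeierFitter().fit(T, 1 - E)
G_at_T = kmf_c.survival_function_at_times(T).to_numpy()
ipcw = np.where(E == 1, 1.0 / np.clip(G_at_T, 1e-6, None), 1.0)

design_weights = np.ones(n)                    # survey design weights would go here
kmf = KaplanMeierFitter().fit(T, E, weights=design_weights * ipcw)
print(kmf.median_survival_time_)
```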
Abstract:
This thesis was carried out as a case study of the company YIT in order to clarify the most severe risks for the company and to build a method for project portfolio evaluation. The target organization creates new living environments by constructing residential buildings, business premises, infrastructure and entire areas, worth EUR 1.9 billion in 2013. The company has noted that project portfolio management needs more information about the structure of the project portfolio and the possible influence of a market shock situation. The major risks for the company were evaluated by interviewing the executive staff, and at the same time the most appropriate risk metrics were considered. At the moment, sales risk was estimated to have the biggest impact on the company's business. Therefore, a project portfolio evaluation model was created, and three different scenarios for the company's future were defined in order to identify the scale of a possible market shock situation. The created model was tested with public and descriptive figures of YIT in a one-year-long market shock, and the impact on different metrics was evaluated. The study was conducted using a constructive research methodology. The results indicate that the company has a notable sales risk in certain sections of its business portfolio.
Abstract:
We present the results obtained with a ureterovesical implant after ipsilateral ureteral obstruction in the rat, suitable for the study of renal function after deobstruction in these animals. Thirty-seven male Wistar rats weighing 260 to 300 g were submitted to distal right ureteral ligation and divided into 3 groups, A (N = 13, 1 week of obstruction), B (N = 14, 2 weeks of obstruction) and C (N = 10, 3 weeks of obstruction). The animals were then submitted to ureterovesical implantation on the right side and nephrectomy on the left side. During the 4-week follow-up period serum levels of urea and creatinine were measured on the 2nd, 7th, 14th, 21st and 28th day and compared with preoperative levels. The ureterovesical implantation included a psoas hitch procedure and the ureter was pulled into the bladder using a transvesical suture. During the first week of the postoperative period 8 animals died, 4/13 in group A (1 week of obstruction) and 4/14 in group B (2 weeks of obstruction). When compared to preoperative serum levels, urea and creatinine showed a significant increase (P<0.05) on the 2nd postoperative day in groups A and B, with a gradual return to lower levels. However, the values in group B animals were higher than those in group A at the end of the follow-up. In group C, 2/10 animals (after 3 weeks of obstruction) were sacrificed at the time of ureterovesical implantation due to infection of the obstructed kidneys. The remaining animals in this group were operated upon but all of them died during the first week of follow-up due to renal failure. This technique of ureterovesical implantation in the rat provides effective drainage of the upper urinary tract, permitting the development of an experimental model for the study of long-term renal function after a period of ureteral obstruction
Abstract:
Power consumption is still an issue in wearable computing applications today. The aim of the present paper is to raise awareness of the power consumption of wearable computing devices in specific scenarios, so that energy-efficient wireless sensors for context recognition in wearable computing applications can be designed in the future. The approach is based on a hardware study. The objective of this paper is to analyze and compare the total power consumption of three representative wearable computing devices in realistic scenarios such as Display, Speaker, Camera and microphone, Transfer by Wi-Fi, Monitoring outdoor physical activity, and Pedometer. A scenario-based energy model is also developed. The Samsung Galaxy Nexus I9250 smartphone, the Vuzix M100 Smart Glasses and the SimValley Smartwatch AW-420.RX are the three devices, each representative of its form factor. The power consumption is measured using PowerTutor, an Android energy profiler application with a logging option; since it relies on unknown parameters, it is adjusted with a USB power meter. The results show that screen size is the main parameter influencing power consumption. The power consumption for an identical scenario varies between the wearable devices, meaning that other components, parameters or processes might have an impact on power consumption, and further study is needed to explain these variations. This paper also shows that different inputs (a touchscreen is more efficient than button controls) and outputs (the speaker is more efficient than the display) affect energy consumption in different ways. The paper gives recommendations for reducing energy consumption in healthcare wearable computing applications using the energy model.
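A hedged sketch of what a scenario-based energy model can look like: total energy as the sum over components of average power times active time. The component names and power figures below are placeholders, not the paper's measurements.

```python
# Scenario-based energy model sketch (placeholder component powers, not measured
# values): energy in millijoules = sum of P_i (mW) * t_i (s) over active components.
from typing import Dict

def scenario_energy_mj(component_power_mw: Dict[str, float],
                       active_seconds: Dict[str, float]) -> float:
    """Total scenario energy in millijoules."""
    return sum(component_power_mw[c] * active_seconds.get(c, 0.0)
               for c in component_power_mw)

display_scenario_power = {
    "screen": 350.0,   # mW, placeholder
    "cpu": 120.0,      # mW, placeholder
    "wifi": 0.0,       # radio idle in this scenario
}
energy = scenario_energy_mj(display_scenario_power, {"screen": 60.0, "cpu": 60.0})
print(f"Display scenario ~ {energy / 1000:.1f} J over one minute")
```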
Abstract:
This research studied project performance measurement from the perspective of strategic management. The objective was to find a generic model for project performance measurement that emphasizes strategy and decision making. The research followed the guidelines of a constructive research methodology. As a result, the study suggests a model that measures projects with multiple metrics both during and after the project. Measurement after the project is suggested to be linked to the strategic performance measures of the company. The measurement should be conducted with centralized project portfolio management, e.g. using the project management office of the organization. After the project, the metrics measure the project's actual benefit realization. During the project, the metrics are universal and measure the relation of the accomplished objectives to costs, schedule and internal resource usage. The outcomes of these measures should be forecasted using qualitative or stochastic methods. A solid theoretical background for the model was found in the literature covering performance measurement, projects and uncertainty. The study states that the model can be implemented in companies, a statement supported by empirical evidence from a single case study. Gathering empirical evidence about the actual usefulness of the model in companies is left to future evaluative research.
Abstract:
Time series analysis has gone through different developmental stages before the current modern approaches. These can be broadly categorized as the classical and the modern time series analysis approaches. In the classical approach, the basic target of the analysis is to describe the major behaviour of the series without necessarily dealing with its underlying structure. The modern approaches, on the contrary, strive to summarize the behaviour of the series through its underlying structure, so that the series can be represented explicitly; in other words, they study the series structurally. The components of the series that make up the observations, such as the trend, seasonality, regression and disturbance terms, are modelled explicitly before putting everything together into a single state space model, which gives a natural interpretation of the series. The target of this diploma work is to apply the modern approach of time series analysis known as the state space approach, more specifically the dynamic linear model, to carry out a trend analysis of ionosonde measurement data. The data are a time series of the peak height of the F2 layer, denoted hmF2, which is the height of highest electron density. In addition, the work investigates the connection between solar activity and the peak height of the F2 layer. Based on the results found, the peak height of the F2 layer shows a decrease during the observation period and a nonlinear positive correlation with solar activity.
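A minimal sketch of a dynamic linear model fitted with the state space tools in statsmodels, using a synthetic series standing in for hmF2 and a stand-in solar-activity index as a regressor; a local linear trend captures the long-term change.

```python
# Dynamic linear model sketch (synthetic data standing in for hmF2): local linear
# trend plus a solar-activity regressor, estimated with statsmodels' state space tools.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 300
solar = 70 + 50 * np.sin(np.linspace(0, 4 * np.pi, n))                  # stand-in solar index
hmf2 = 300 - 0.05 * np.arange(n) + 0.2 * solar + rng.normal(0, 5, n)    # synthetic hmF2 (km)

model = sm.tsa.UnobservedComponents(hmf2,
                                    level="local linear trend",
                                    exog=solar)
result = model.fit(disp=False)

trend = result.level.smoothed        # smoothed long-term level of the series
slope = result.trend.smoothed        # smoothed slope (sign indicates the trend direction)
print(f"smoothed slope at the end of the series: {slope[-1]:.3f}")
```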
Abstract:
Our goal in this paper is to assess the reliability and validity of egocentered network data using multilevel analysis (Muthen, 1989; Hox, 1993) under the multitrait-multimethod approach. The confirmatory factor analysis model for multitrait-multimethod data (Werts & Linn, 1970; Andrews, 1984) is used for our analyses. In this study we reanalyse part of the data of another study (Kogovšek et al., 2002) carried out on a representative sample of the inhabitants of Ljubljana. The traits used in our article are the name interpreters. We consider egocentered network data as hierarchical; therefore a multilevel analysis is required. We use Muthen's partial maximum likelihood approach, called the pseudobalanced solution (Muthen, 1989, 1990, 1994), which produces estimates close to maximum likelihood for large ego sample sizes (Hox & Maas, 2001). Several analyses are done in order to compare this multilevel analysis with classic methods of analysis, such as the ones in Kogovšek et al. (2002), who analysed the data only at the group (ego) level considering averages over all alters within the ego. We show that some of the results obtained by classic methods are biased and that multilevel analysis provides more detailed information that greatly enriches the interpretation of the reliability and validity of hierarchical data. Within- and between-ego reliabilities and validities and other related quality measures are defined, computed and interpreted.
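As a simplified illustration of separating within-ego (alter-level) from between-ego variance (not the full multitrait-multimethod CFA or the pseudobalanced estimator), the sketch below fits a random-intercept model to synthetic ratings nested in egos and reports an ICC-style reliability; the column names and data are hypothetical.

```python
# Variance decomposition sketch (synthetic data, hypothetical 'ego'/'rating' columns):
# between-ego vs. within-ego (alter-level) variance via a random-intercept model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n_egos, alters_per_ego = 200, 5
ego = np.repeat(np.arange(n_egos), alters_per_ego)
ego_effect = rng.normal(0, 1.0, n_egos)[ego]               # between-ego variation
rating = 3.0 + ego_effect + rng.normal(0, 1.5, ego.size)   # within-ego (alter) noise
df = pd.DataFrame({"ego": ego, "rating": rating})

md = smf.mixedlm("rating ~ 1", df, groups=df["ego"]).fit()
between = float(md.cov_re.iloc[0, 0])                      # between-ego variance
within = float(md.scale)                                   # residual (within-ego) variance
icc = between / (between + within)
print(f"between = {between:.2f}, within = {within:.2f}, ICC = {icc:.2f}")
```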