983 results for Step Length Estimation


Relevance: 30.00%

Abstract:

The concepts of sample size and statistical power estimation are now something that optometrists who want to perform research, whether in practice or in an academic institution, can no longer avoid. Ethics committees, journal editors and grant-awarding bodies increasingly request that all research be backed by sample size and statistical power calculations in order to justify a study and its findings. This article presents a step-by-step guide to determining sample size and statistical power. It builds on statistical concepts presented in earlier articles in Optometry Today by Richard Armstrong and Frank Eperjesi.
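As a hedged illustration of the kind of calculation such a guide walks through, the sketch below uses the statsmodels power module to solve for the per-group sample size of a two-group comparison; the effect size, alpha and power values are arbitrary placeholders, not figures from the article.

```python
# Sketch: per-group sample size for an independent-samples t-test.
# Effect size, alpha, and power below are illustrative assumptions only.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.5,   # Cohen's d expected between the two groups (assumed)
    alpha=0.05,        # two-sided significance level
    power=0.80,        # desired probability of detecting the effect
    ratio=1.0,         # equal group sizes
    alternative="two-sided",
)
print(f"Required sample size per group: {n_per_group:.1f}")  # ~64 per group
```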

Relevance: 30.00%

Abstract:

We present the development of superstructure fiber gratings (SFG) in Ge-doped silica optical fiber using femtosecond laser inscription. We apply a simple but extremely effective single-step process to inscribe low-loss sampled gratings with minor polarization dependence. The method results in a controlled modulated index change with complete suppression of the mode coupling associated with the overlapping LPG structure, leading to highly symmetric superstructure spectra, with the grating reflection well within the Fourier design limit. The devices are characterized and compared with numerical modeling, obtained by solving Maxwell's equations and calculating the back-reflection spectrum using the bidirectional beam propagation method (BiBPM). Experimental results validate our numerical analysis, allowing the estimation of inscription parameters such as the AC index modulation, and the wavelength, position and relative strength of each significant resonance peak. We also present results on temperature and refractive index measurements, showing potential for sensing applications.
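The abstract's modeling route is the BiBPM; as a rough, much simpler stand-in, the sketch below computes the reflection spectrum of a sampled (superstructure) grating with the standard coupled-mode transfer-matrix method, alternating grating and blank sections. All fibre and grating parameters are invented for illustration and are not the inscription parameters estimated in the paper.

```python
# Sketch: reflection spectrum of a sampled (superstructure) fibre grating via
# the coupled-mode transfer-matrix method. Parameters are illustrative only.
import numpy as np

n_eff = 1.447                  # effective index (assumed)
lam_B = 1550e-9                # Bragg wavelength of the seed grating [m]
dn_ac = 1e-4                   # AC index modulation (assumed)
L_on, L_off = 0.5e-3, 0.5e-3   # grating / blank section lengths [m]
periods = 10                   # number of on/off sampling periods

wavelengths = np.linspace(1548e-9, 1552e-9, 2000)
R = np.empty_like(wavelengths)

for i, lam in enumerate(wavelengths):
    delta = 2 * np.pi * n_eff * (1 / lam - 1 / lam_B)   # detuning from Bragg
    kappa = np.pi * dn_ac / lam                          # AC coupling coefficient
    gamma = np.sqrt(complex(kappa**2 - delta**2))
    # Transfer matrix of one uniform grating section
    F_on = np.array([
        [np.cosh(gamma * L_on) - 1j * (delta / gamma) * np.sinh(gamma * L_on),
         -1j * (kappa / gamma) * np.sinh(gamma * L_on)],
        [1j * (kappa / gamma) * np.sinh(gamma * L_on),
         np.cosh(gamma * L_on) + 1j * (delta / gamma) * np.sinh(gamma * L_on)],
    ])
    # Blank section: pure propagation, no coupling
    F_off = np.array([[np.exp(-1j * delta * L_off), 0],
                      [0, np.exp(1j * delta * L_off)]])
    F = np.eye(2, dtype=complex)
    for _ in range(periods):
        F = F_off @ F_on @ F
    rho = -F[1, 0] / F[1, 1]      # amplitude reflection coefficient
    R[i] = abs(rho) ** 2

print(f"Peak reflectivity: {R.max():.3f}")
```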

Relevance: 30.00%

Abstract:

Long-term foetal surveillance is often recommended, and fully non-invasive acoustic recording through the maternal abdomen represents a valuable alternative to ultrasonic cardiotocography. Unfortunately, the recorded heart sound signal is heavily corrupted by noise, so the determination of the foetal heart rate raises serious signal processing issues. In this paper, we present a new algorithm for foetal heart rate estimation from foetal phonocardiographic recordings. Filtering is employed as the first step of the algorithm to reduce the background noise. A block for enhancing the first heart sounds is then used to further attenuate other components of the foetal heart sound signal. A logic block, guided by a set of rules concerning foetal heartbeat regularity, then detects the most probable first heart sounds among several candidates. A final block provides exact first heart sound timing and, in turn, foetal heart rate estimation. The filtering and enhancing blocks are implemented by means of different techniques, so that different processing paths are obtained. Furthermore, a reliability index is introduced to quantify the consistency of the estimated foetal heart rate and, based on statistical parameters, a software quality index is designed to indicate the most reliable analysis procedure (that is, the combination of processing path and first heart sound time mark that provides the lowest estimation errors). The algorithm's performance was tested on phonocardiographic signals recorded in a local private gynaecology practice from a sample group of about 50 pregnant women. The phonocardiographic signals were recorded simultaneously with ultrasonic cardiotocographic signals in order to compare the two foetal heart rate series (the one estimated by our algorithm and the one provided by the cardiotocographic device). Our results show that the proposed algorithm, and in particular some analysis procedures, provides reliable foetal heart rate signals, very close to the reference cardiotocographic recordings. © 2010 Elsevier Ltd. All rights reserved.
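The processing chain outlined above (noise-reduction filtering, first-heart-sound enhancement, candidate detection, heart-rate estimation) can be sketched roughly as below. This is not the paper's algorithm or its rule-based logic block, just a minimal band-pass/envelope/peak-detection pipeline; the cut-off frequencies, smoothing window and peak-spacing bound are assumptions.

```python
# Minimal sketch of a phonocardiogram -> foetal heart rate pipeline.
# Cut-off frequencies, envelope smoothing and peak spacing are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert, find_peaks

def estimate_fhr(pcg, fs):
    """Return an approximate foetal heart rate (bpm) from a PCG signal."""
    # 1) Band-pass filter to reduce background noise (band is assumed).
    b, a = butter(4, [20 / (fs / 2), 80 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, pcg)
    # 2) Enhance heart sounds with an amplitude envelope (Hilbert transform),
    #    then smooth it with a short moving average.
    envelope = np.abs(hilbert(filtered))
    win = int(0.05 * fs)
    envelope = np.convolve(envelope, np.ones(win) / win, mode="same")
    # 3) Detect candidate first heart sounds: peaks at least 0.3 s apart
    #    (a plausibility bound on beat-to-beat intervals, assumed).
    peaks, _ = find_peaks(envelope, distance=int(0.3 * fs),
                          height=envelope.mean() + envelope.std())
    # 4) Convert the median inter-beat interval to beats per minute.
    ibi = np.diff(peaks) / fs
    return 60.0 / np.median(ibi) if len(ibi) > 0 else float("nan")
```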

Relevance: 30.00%

Abstract:

Annual average daily traffic (AADT) is important information for many transportation planning, design, operation, and maintenance activities, as well as for the allocation of highway funds. Many studies have attempted AADT estimation using the factor approach, regression analysis, time series, and artificial neural networks. However, these methods are unable to account for the spatially variable influence of the independent variables on the dependent variable, even though it is well known that for many transportation problems, including AADT estimation, spatial context is important. In this study, applications of geographically weighted regression (GWR) methods to estimating AADT were investigated. The GWR-based methods considered the correlations among the variables over space and the spatial non-stationarity of the variables. A GWR model allows different relationships between the dependent and independent variables to exist at different points in space. In other words, the model parameters vary from location to location, and the locally linear regression parameters at a point are affected more by observations near that point than by observations further away. The study area was Broward County, Florida, which lies on the Atlantic coast between Palm Beach and Miami-Dade counties. A total of 67 variables were considered as potential AADT predictors, and six variables (lanes, speed, regional accessibility, direct access, density of roadway length, and density of seasonal households) were selected to develop the models. To investigate the predictive power of the various AADT predictors over space, statistics including the local r-square, local parameter estimates, and local errors were examined and mapped. The local variations in the relationships among parameters were investigated, measured, and mapped to assess the usefulness of the GWR methods. The results indicated that the GWR models were able to better explain the variation in the data and to predict AADT with smaller errors than ordinary linear regression models for the same dataset. Additionally, GWR was able to model the spatial non-stationarity in the data, i.e., the spatially varying relationship between AADT and its predictors, which cannot be modeled in ordinary linear regression.
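For illustration only, the sketch below implements the core of a geographically weighted regression: at each target location, a weighted least-squares fit is solved with observations down-weighted by a Gaussian kernel of their distance. The bandwidth and the synthetic data are assumptions; the dissertation's variable set and calibration are not reproduced here.

```python
# Sketch of geographically weighted regression (GWR): a separate weighted
# least-squares fit at every location, with Gaussian distance decay.
import numpy as np

def gwr_coefficients(coords, X, y, bandwidth):
    """Return one coefficient vector per location (local intercept + slopes)."""
    n = len(y)
    Xd = np.column_stack([np.ones(n), X])          # add intercept column
    betas = np.empty((n, Xd.shape[1]))
    for i in range(n):
        d = np.linalg.norm(coords - coords[i], axis=1)   # distances to site i
        w = np.exp(-0.5 * (d / bandwidth) ** 2)          # Gaussian kernel weights
        W = np.diag(w)
        # Weighted least squares: beta_i = (X'WX)^-1 X'Wy
        betas[i] = np.linalg.solve(Xd.T @ W @ Xd, Xd.T @ W @ y)
    return betas

# Toy usage with synthetic data (assumed, purely illustrative):
rng = np.random.default_rng(0)
coords = rng.uniform(0, 10, size=(100, 2))
X = rng.normal(size=(100, 2))                      # e.g. lanes, speed (stand-ins)
slope = 1.0 + 0.2 * coords[:, 0]                   # spatially varying relationship
y = 5.0 + slope * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=100)
local_betas = gwr_coefficients(coords, X, y, bandwidth=2.0)
print(local_betas[:3].round(2))                    # local intercepts and slopes
```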

Relevance: 30.00%

Abstract:

The contributions of this dissertation are in the development of two new interrelated approaches to video data compression: (1) a level-refined motion estimation and subband compensation method for effective motion estimation and motion compensation; (2) a shift-invariant sub-decimation decomposition method that overcomes the deficiency of the wavelet transform's decimation process in estimating motion, which stems from its shift variance. The enormous data generated by digital video calls for efficient video compression techniques to conserve storage space and minimize bandwidth utilization. The main idea of video compression is to reduce the interpixel redundancies inside and between the video frames by applying motion estimation and motion compensation (MEMC) in combination with spatial transform coding. To locate the global minimum of the matching criterion function reliably, hierarchical motion estimation by coarse-to-fine resolution refinement using the discrete wavelet transform is applied, owing to its intrinsic multiresolution and scalability. Because most of the energy is concentrated in the low-resolution subbands and decreases in the high-resolution subbands, a new approach called the level-refined motion estimation and subband compensation (LRSC) method is proposed. It exploits the possible intrablocks in the subbands for lower-entropy coding while keeping the low computational load of level-refined motion estimation, thus achieving both temporal compression quality and computational simplicity. Since circular convolution is applied in the wavelet transform to obtain the decomposed subframes without coefficient expansion, a symmetric-extended wavelet transform is designed for the finite-length frame signals to obtain more accurate motion estimates without discontinuous boundary distortions. Although wavelet-transformed coefficients still contain spatial-domain information, motion estimation in the wavelet domain is not as straightforward as in the spatial domain because of the shift variance of the decimation process of the wavelet transform. A new approach called the sub-decimation decomposition method is proposed, which maintains motion consistency between the original frame and the decomposed subframes, thereby improving wavelet-domain video compression through shift-invariant motion estimation and compensation.
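As a rough sketch of coarse-to-fine (hierarchical) block matching, not the dissertation's level-refined subband method, the code below estimates a block's motion on a downsampled level and refines it at full resolution. The block size, search radii and averaging pyramid are assumptions.

```python
# Sketch: two-level coarse-to-fine block matching for motion estimation.
# Block size and search ranges are illustrative assumptions.
import numpy as np

def downsample(img):
    """Simple 2x2 averaging pyramid level."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return (img[0::2, 0::2] + img[1::2, 0::2] + img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

def search(ref, cur, y, x, size, cy, cx, radius):
    """Exhaustive SAD search around a candidate displacement (cy, cx)."""
    block = cur[y:y + size, x:x + size]
    best, best_mv = np.inf, (cy, cx)
    for dy in range(cy - radius, cy + radius + 1):
        for dx in range(cx - radius, cx + radius + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + size > ref.shape[0] or xx + size > ref.shape[1]:
                continue
            sad = np.abs(ref[yy:yy + size, xx:xx + size] - block).sum()
            if sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv

def hierarchical_mv(ref, cur, y, x, size=16):
    # Coarse level: estimate motion on half-resolution frames.
    cy, cx = search(downsample(ref), downsample(cur), y // 2, x // 2,
                    size // 2, 0, 0, radius=4)
    # Fine level: refine around the scaled-up coarse vector.
    return search(ref, cur, y, x, size, 2 * cy, 2 * cx, radius=2)

# Toy usage: a frame shifted by (3, 5) pixels should be recovered.
rng = np.random.default_rng(1)
frame = rng.random((64, 64))
moved = np.roll(frame, shift=(-3, -5), axis=(0, 1))
print(hierarchical_mv(frame, moved, y=24, x=24))   # -> (3, 5)
```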

Relevance: 30.00%

Abstract:

Accurate knowledge of the time since death, or postmortem interval (PMI), has enormous legal, criminological, and psychological impact. In this study, we investigated whether the relationship between the degradation of the human cardiac structural protein cardiac Troponin T (cTnT) and PMI could be used as an indicator of time since death, thus providing a rapid, high-resolution, sensitive, and automated methodology for the determination of PMI. The use of cTnT, a protein found in heart tissue, as a selective marker for cardiac muscle damage has shown great promise in the determination of PMI. An optimized conventional immunoassay method was developed to quantify intact and fragmented cTnT. A small sample of cardiac tissue, which is less affected than other tissues by external factors, was taken, homogenized, extracted with magnetic microparticles, separated by SDS-PAGE, and visualized by Western blot after probing with a monoclonal antibody against cTnT, followed by labeling and detection with available scanners. This conventional immunoassay provides proper detection and quantitation of the cTnT protein in cardiac tissue as a complex matrix; however, it does not provide the analyst with immediate results. Therefore, a competitive separation method using capillary electrophoresis with laser-induced fluorescence (CE-LIF) was developed to study the interaction between human cTnT protein and the monoclonal anti-Troponin T antibody. Analysis of the results revealed a linear relationship between the percentage of degraded cTnT and the log of the PMI, indicating that intact cTnT could be detected in human heart tissue up to 10 days postmortem at room temperature and beyond two weeks at 4 °C. The data presented demonstrate that this technique can provide an extended time range during which PMI can be estimated more accurately than with currently used methods, and that it represents a major advance in time-of-death determination through a fast, reliable, semi-quantitative measurement of a biochemical marker from an organ protected from outside factors.
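The reported linear relationship between percent degraded cTnT and log(PMI) lends itself to a simple calibration-and-inversion sketch. The data points below are fabricated placeholders, not the study's measurements.

```python
# Sketch: calibrate percent-degraded cTnT against log10(PMI) and invert the
# fit to estimate PMI for a new sample. All numbers are invented placeholders.
import numpy as np

pmi_hours = np.array([12, 24, 48, 96, 168, 240])       # known PMIs (assumed)
pct_degraded = np.array([18, 30, 44, 58, 70, 80])      # measured degradation (assumed)

# Linear model: pct_degraded = a * log10(PMI) + b
a, b = np.polyfit(np.log10(pmi_hours), pct_degraded, deg=1)

def estimate_pmi(pct):
    """Invert the calibration line to estimate PMI (hours) from % degraded cTnT."""
    return 10 ** ((pct - b) / a)

print(f"slope={a:.1f}, intercept={b:.1f}")
print(f"Estimated PMI for 50% degradation: {estimate_pmi(50):.0f} h")
```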

Relevance: 30.00%

Abstract:

This dissertation aimed to improve travel time estimation for transportation planning by developing a travel time estimation method that incorporates the effects of signal timing plans, which are difficult to consider in planning models. For this purpose, an analytical model was developed. The model parameters were calibrated with data from CORSIM microscopic simulation, with signal timing plans optimized using the TRANSYT-7F software. The independent variables in the model are link length, free-flow speed, and the traffic volumes of the competing turning movements. The developed model has three advantages over traditional link-based or node-based models. First, the model considers the influence of signal timing plans for a variety of traffic volume combinations without requiring signal timing information as input. Second, the model describes the non-uniform spatial distribution of delay along a link, making it possible to estimate the impact of queues at different locations upstream of an intersection and to attribute delays to the subject link and the upstream link. Third, the model shows promise of improving the accuracy of travel time prediction. The mean absolute percentage error (MAPE) of the model is 13% for a set of field data from the Minnesota Department of Transportation (MDOT), close to the MAPE of the uniform delay in the HCM 2000 method (11%). The HCM is the industry-accepted analytical model in the existing literature, but it requires signal timing information as input for calculating delays. The developed model also outperforms the HCM 2000 method for a set of Miami-Dade County data representing congested traffic conditions, with a MAPE of 29% compared to 31% for the HCM 2000 method. These advantages make the proposed model feasible for application to a large network without the burden of signal timing input, while improving the accuracy of travel time estimation. An assignment model with the developed travel time estimation method was implemented in a South Florida planning model and improved the assignment results.
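MAPE, used above to compare the model against the HCM 2000 method, is straightforward to compute; the sketch below shows the metric on placeholder observed/predicted travel times, not the study's data.

```python
# Sketch: mean absolute percentage error (MAPE) between observed and
# predicted link travel times. Values below are placeholders.
import numpy as np

def mape(observed, predicted):
    observed, predicted = np.asarray(observed, float), np.asarray(predicted, float)
    return 100.0 * np.mean(np.abs((observed - predicted) / observed))

observed_tt = [42.0, 55.0, 61.0, 38.0]     # seconds, illustrative
predicted_tt = [45.0, 50.0, 66.0, 40.0]
print(f"MAPE = {mape(observed_tt, predicted_tt):.1f}%")
```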

Relevance: 30.00%

Abstract:

This thesis would not have been possible without the aid of my family, friends, laboratory members, and professors. First and foremost, I would like to thank Dr. Kalai Mathee for allowing me to enter her lab in August 2007 and enabling me to embark on this journey. This experience has transformed me into a more mature scientist, teaching me how to ask the right questions and the process needed to answer them. I would also like to acknowledge Dr. Lisa Schneper. She has helped me throughout the whole process by graciously giving me input at every step of the way. I would like to express gratitude to Dr. Jennifer Richards for all her input in writing the thesis. She has been a great teacher, and being in her class has been a pleasure. Moreover, I would like to thank all the committee members for their constructive criticism throughout the process. When I entered the lab in August, there was one person who was literally by my side, Melissa Doud. Without your input and guidance I would not have even been able to do these experiments. I would also like to thank you and Dr. Light for allowing me to meet some cystic fibrosis patients. It has allowed me to put a face on the disease and help the patients' fight. For a period before I entered the lab, Ms. Doud had an apprentice, Caroline Veronese, who started the fungal aspect of the project. Her initial work has enabled me to perfect the protocols and complete the ITS 1 region. One very unique aspect of Dr. Mathee's lab is the camaraderie. I would like to thank all the lab members for the good times in and out of the lab. These individuals have been able to make me smile and laugh at parties and in lab meetings. I would like to individually thank Balachandar Dananjeyan, Deepak Balasubramanian, and Varinderpal Singh Pannu for all the PCR help, and Natalie Maricic for the laughs and for being a great classmate. Last, but not least, I would like to acknowledge my family and friends for their support and for keeping me sane: Cecilia, my mother; Mohammad, my father; Amir, my older brother; Billal, my younger brother; and Ouday Akkari and Stephanie De Bedout, my best friends.

Relevance: 30.00%

Abstract:

Ulrich and Vorberg (2009) presented a method that fits distinct functions for each order of presentation of standard and test stimuli in a two-alternative forced-choice (2AFC) discrimination task, which removes the contaminating influence of order effects from estimates of the difference limen. The two functions are fitted simultaneously under the constraint that their average evaluates to 0.5 when test and standard have the same magnitude, which was regarded as a general property of 2AFC tasks. This constraint implies that physical identity produces indistinguishability, which is valid when test and standard are identical except for magnitude along the dimension of comparison. However, indistinguishability does not occur at physical identity when test and standard differ on dimensions other than that along which they are compared (e.g., vertical and horizontal lines of the same length are not perceived to have the same length). In these cases, the method of Ulrich and Vorberg cannot be used. We propose a generalization of their method for use in such cases and illustrate it with data from a 2AFC experiment involving length discrimination of horizontal and vertical lines. The resultant data could be fitted with our generalization but not with the method of Ulrich and Vorberg. Further extensions of this method are discussed.
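Neither Ulrich and Vorberg's method nor the proposed generalization is reproduced here; as a minimal illustration of the underlying fitting step, the sketch below fits a separate logistic psychometric function to each presentation order with scipy and reads off the two points of subjective equality. The data and parameter values are placeholders.

```python
# Sketch: fit a separate logistic psychometric function to each presentation
# order in a 2AFC length-discrimination task. Data below are placeholders.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, pse, slope):
    """P('test longer') as a function of test length x."""
    return 1.0 / (1.0 + np.exp(-(x - pse) / slope))

test_lengths = np.array([90, 94, 98, 102, 106, 110])           # mm, illustrative
p_std_first = np.array([0.05, 0.15, 0.40, 0.70, 0.90, 0.97])   # order: standard first
p_test_first = np.array([0.10, 0.25, 0.55, 0.80, 0.95, 0.99])  # order: test first

(pse1, s1), _ = curve_fit(logistic, test_lengths, p_std_first, p0=[100, 3])
(pse2, s2), _ = curve_fit(logistic, test_lengths, p_test_first, p0=[100, 3])
print(f"PSE (standard first): {pse1:.1f} mm, PSE (test first): {pse2:.1f} mm")
print(f"Order effect (difference in PSEs): {pse1 - pse2:.1f} mm")
```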

Relevance: 30.00%

Abstract:

Threshold estimation with sequential procedures is justifiable on the surmise that the index used in the so-called dynamic stopping rule has diagnostic value for identifying when an accurate estimate has been obtained. The performance of five types of Bayesian sequential procedure was compared here to that of an analogous fixed-length procedure. The indices used in the sequential procedures were: (1) the width of the Bayesian probability interval, (2) the posterior standard deviation, (3) the absolute change, (4) the average change, and (5) the number of sign fluctuations. A simulation study was carried out to evaluate which index renders estimates with less bias and smaller standard error at lower cost (i.e., a lower average number of trials to completion), in both yes–no and two-alternative forced-choice (2AFC) tasks. We also considered the effect of the form and parameters of the psychometric function and its similarity to the model function assumed in the procedure. Our results show that sequential procedures do not outperform fixed-length procedures in yes–no tasks. However, in 2AFC tasks, sequential procedures not based on sign fluctuations all yield minimally better estimates than fixed-length procedures, although most of the improvement occurs with short runs that render undependable estimates, and the differences vanish when the procedures run for a number of trials (around 70) that ensures dependability. Thus, none of the indices considered here (some of which are widespread) has the diagnostic value that would justify its use. In addition, difficulties of implementation make sequential procedures unfit as alternatives to fixed-length procedures.
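A minimal sketch of one of the indices discussed, stopping when the posterior standard deviation of the threshold falls below a criterion, is given below using a grid posterior and simulated yes/no trials. The psychometric model, prior, placement rule and criterion are assumptions, not the simulation settings of the study.

```python
# Sketch: Bayesian sequential threshold estimation with a stopping rule based
# on the posterior standard deviation. Model, prior and criterion are assumed.
import numpy as np

rng = np.random.default_rng(0)
true_threshold = 1.2
slope = 8.0                                  # fixed psychometric slope (assumed)
grid = np.linspace(0.0, 3.0, 301)            # candidate thresholds
posterior = np.ones_like(grid) / grid.size   # flat prior

def p_yes(stimulus, threshold):
    return 1.0 / (1.0 + np.exp(-slope * (stimulus - threshold)))

for trial in range(1, 201):
    # Place the next stimulus at the current posterior mean (simple adaptive rule).
    stim = np.sum(grid * posterior)
    response = rng.random() < p_yes(stim, true_threshold)   # simulated observer
    likelihood = p_yes(stim, grid) if response else 1.0 - p_yes(stim, grid)
    posterior *= likelihood
    posterior /= posterior.sum()
    mean = np.sum(grid * posterior)
    sd = np.sqrt(np.sum((grid - mean) ** 2 * posterior))
    if sd < 0.05:                             # dynamic stopping criterion (assumed)
        break

print(f"Stopped after {trial} trials; threshold estimate = {mean:.2f}")
```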

Relevance: 30.00%

Abstract:

Purpose

The objective of our study was to test a new approach to approximating organ dose by using the effective energy of the combined 80 kV/140 kV beam used in fast-kV-switch dual-energy (DE) computed tomography (CT). The two primary aims of the study were, first, to validate experimentally the dose equivalency between MOSFET detectors and an ion chamber (as the gold standard) in a fast-kV-switch DE environment and, second, to estimate the effective dose (ED) of DECT scans using MOSFET detectors and an anthropomorphic phantom.

Materials and Methods

A GE Discovery 750 CT scanner was employed using a fast-kV-switch abdomen/pelvis protocol alternating between 80 kV and 140 kV. The specific aims of our study were to (1) characterize the effective energy of the dual-energy environment; (2) estimate the f-factor for soft tissue; (3) calibrate the MOSFET detectors using a beam with an effective energy equal to that of the combined DE environment; (4) validate our calibration by using MOSFET detectors and an ion chamber to measure dose at the center of a CTDI body phantom; (5) measure ED for an abdomen/pelvis scan using an anthropomorphic phantom and applying ICRP 103 tissue weighting factors; and (6) estimate ED using the AAPM dose-length product (DLP) method. The effective energy of the combined beam was calculated by measuring dose with an ion chamber under varying thicknesses of aluminum to determine the half-value layer (HVL).
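As a hedged illustration of step (6), the DLP-based effective dose estimate is a single multiplication by a region-specific conversion coefficient; the k-factor below (about 0.015 mSv per mGy·cm for an adult abdomen/pelvis) is the commonly tabulated value and, like the DLP, is an assumption here rather than a figure from this study.

```python
# Sketch: dose-length-product (DLP) estimate of effective dose.
# The DLP value and k conversion coefficient below are illustrative assumptions.
dlp_mGy_cm = 975.0          # dose-length product reported by the scanner (assumed)
k_abdomen_pelvis = 0.015    # mSv per mGy*cm, typical adult abdomen/pelvis factor

effective_dose_mSv = k_abdomen_pelvis * dlp_mGy_cm
print(f"ED (DLP method) = {effective_dose_mSv:.1f} mSv")   # ~14.6 mSv
```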

Results

The effective energy of the combined dual-energy beams was found to be 42.8 keV. After calibration, tissue dose at the center of the CTDI body phantom was measured as 1.71 ± 0.01 cGy with the ion chamber, and as 1.73 ± 0.04 cGy and 1.69 ± 0.09 cGy with two separate MOSFET detectors, corresponding to differences of −0.93% and 1.40%, respectively, between the ion chamber and the MOSFETs. ED from the dual-energy scan was calculated as 16.49 ± 0.04 mSv by the MOSFET method and 14.62 mSv by the DLP method.

Relevance: 30.00%

Abstract:

Evidence suggests that inactivity during a hospital stay is associated with poor health outcomes in older medical inpatients. We aimed to estimate the associations of average daily step-count (walking) in hospital with physical performance and length of stay in this population. Medical inpatients aged ⩾65 years, premorbidly mobile, with an anticipated length of stay ⩾3 d, were recruited. Measurements included average daily step-count, continuously recorded until discharge or for a maximum of 7 d (Stepwatch Activity Monitor); co-morbidity (CIRS-G); frailty (SHARE F-I); and baseline and end-of-study physical performance (short physical performance battery). Linear regression models were used to estimate associations between step-count and end-of-study physical performance or length of stay. Length of stay was log-transformed in the first model, and step-count was log-transformed in both models. Similar models were used to adjust for potential confounders. Data from 154 patients (mean 77 years, SD 7.4) were analysed. The unadjusted models estimated that, for each unit increase in the natural log of step-count, the natural log of length of stay decreased by 0.18 (95% CI −0.27 to −0.09). After adjustment for potential confounders, the strength of the inverse association was attenuated but remained significant (βlog(steps) = −0.15, 95% CI −0.26 to −0.03). The back-transformed result suggested that a 50% increase in step-count was associated with a 6% shorter length of stay. There was no apparent association between step-count and end-of-study physical performance once baseline physical performance was adjusted for. The results indicate that step-count is independently associated with hospital length of stay and merits further investigation.
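The back-transformed interpretation quoted above follows directly from the log-log model: a 50% increase in step-count multiplies length of stay by 1.5 raised to the coefficient. The one-line check below uses the reported β of −0.15.

```python
# Check of the back-transformed effect size from the log-log model:
# ln(LOS) = a + beta * ln(steps)  =>  LOS ratio = (step ratio) ** beta
beta = -0.15
los_ratio = 1.5 ** beta                     # effect of a 50% step-count increase
print(f"{(1 - los_ratio) * 100:.0f}% shorter length of stay")   # ~6%
```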

Relevance: 30.00%

Abstract:

This paper is an elaboration of the simplex identification via split augmented Lagrangian (SISAL) algorithm (Bioucas-Dias, 2009) for blindly unmixing hyperspectral data. SISAL is a linear hyperspectral unmixing method of the minimum-volume class. The method solves a non-convex problem through a sequence of augmented Lagrangian optimizations in which the positivity constraints, which force the spectral vectors to belong to the convex hull of the endmember signatures, are replaced by soft constraints. With respect to SISAL, we introduce a dimensionality estimation method based on the minimum description length (MDL) principle. The effectiveness of the proposed algorithm is illustrated with simulated and real data.
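The paper's MDL criterion is not reproduced here; as a hedged illustration of MDL-style dimensionality estimation, the sketch below applies the classical Wax–Kailath criterion to the eigenvalues of the data covariance and picks the order that minimizes it. The data are synthetic.

```python
# Sketch: MDL (Wax & Kailath style) estimate of signal-subspace dimension from
# the eigenvalues of the sample covariance. Synthetic data, illustrative only.
import numpy as np

rng = np.random.default_rng(0)
p, N, true_k = 20, 2000, 4                       # bands, samples, true dimension
sources = rng.random((true_k, N))
mixing = rng.normal(size=(p, true_k))
data = mixing @ sources + 0.01 * rng.normal(size=(p, N))   # low-rank + noise

eigvals = np.sort(np.linalg.eigvalsh(np.cov(data)))[::-1]  # descending

def mdl(k):
    tail = eigvals[k:]
    geo = np.exp(np.mean(np.log(tail)))          # geometric mean of noise eigenvalues
    arith = np.mean(tail)                        # arithmetic mean
    return -N * (p - k) * np.log(geo / arith) + 0.5 * k * (2 * p - k) * np.log(N)

k_hat = min(range(p), key=mdl)
print(f"Estimated dimensionality: {k_hat}")      # expected ~4 for this toy data
```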

Relevance: 30.00%

Abstract:

Due to the increasing integration density and operating frequency of today's high-performance processors, the temperature of a typical chip can easily exceed 100 degrees Celsius. However, the runtime thermal state of a chip is very hard to predict and manage due to the random nature of computing workloads, as well as the process, voltage, and ambient temperature variability (together called PVT variability). The uneven nature (both in time and space) of the heat dissipation of the chip can lead to severe reliability issues and error-prone chip behavior (e.g., timing errors). Many dynamic power/thermal management techniques have been proposed to address this issue, such as dynamic voltage and frequency scaling (DVFS) and clock gating. However, most of these techniques require accurate knowledge of the runtime thermal state of the chip to make efficient and effective control decisions. In this work we address the problem of tracking and managing the temperature of microprocessors, which includes the following sub-problems: (1) how to design an efficient sensor-based thermal tracking system on a given design that provides accurate real-time temperature feedback; (2) what statistical techniques can be used to estimate the full-chip thermal profile based on very limited (and possibly noise-corrupted) sensor observations; (3) how to adapt to changes in the underlying system's behavior, since such changes can impact the accuracy of the thermal estimates. The thermal tracking methodology proposed in this work is enabled by on-chip sensors, which are already implemented in many modern processors. We first investigate the underlying relationship between heat distribution and power consumption, then introduce an accurate thermal model for the chip system. Based on this model, we characterize the temperature correlation that exists among different chip modules and explore statistical approaches (such as those based on the Kalman filter) that utilize such correlation to estimate accurate chip-level thermal profiles in real time. Such estimation is performed from limited sensor information because sensors are usually resource-constrained and noise-corrupted. We further extend the standard Kalman filter approach to account for (1) nonlinear effects such as the leakage-temperature interdependency and (2) varying statistical characteristics in the underlying system model. The proposed thermal tracking infrastructure and estimation algorithms consistently generate accurate thermal estimates even when the system is switching among workloads with very distinct characteristics. In experiments, our approaches demonstrate much higher accuracy than existing approaches. Such results can be used to ensure thermal reliability and improve the effectiveness of dynamic thermal management techniques.
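A minimal sketch of the statistical core described above: a standard linear Kalman filter that estimates the temperatures of several chip modules from a smaller number of noisy on-chip sensors, given an assumed linear thermal model. The dynamics matrix, sensor placement and noise levels are all invented for illustration; the dissertation's calibrated model and its extensions (leakage-temperature coupling, adaptive statistics) are not reproduced.

```python
# Sketch: Kalman-filter tracking of per-module chip temperatures from a few
# noisy on-chip sensors. The thermal model and noise levels are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_modules, n_sensors, steps = 6, 2, 200

A = 0.95 * np.eye(n_modules)                 # first-order thermal decay (assumed)
B = np.eye(n_modules)                        # power-to-temperature coupling (assumed)
C = np.zeros((n_sensors, n_modules))         # sensor placement: two modules observed
C[0, 1] = 1.0
C[1, 4] = 1.0
Q = 0.01 * np.eye(n_modules)                 # process noise covariance
R = 0.25 * np.eye(n_sensors)                 # sensor noise covariance

x_true = np.full(n_modules, 50.0)            # true temperatures (deg C)
x_est = np.full(n_modules, 45.0)             # initial estimate
P = 10.0 * np.eye(n_modules)                 # estimate covariance

for t in range(steps):
    power = 2.0 + rng.normal(scale=0.2, size=n_modules)       # workload power (toy)
    x_true = A @ x_true + B @ power + rng.multivariate_normal(np.zeros(n_modules), Q)
    y = C @ x_true + rng.multivariate_normal(np.zeros(n_sensors), R)

    # Predict
    x_est = A @ x_est + B @ power
    P = A @ P @ A.T + Q
    # Update with the limited sensor observations
    S = C @ P @ C.T + R
    K = P @ C.T @ np.linalg.inv(S)
    x_est = x_est + K @ (y - C @ x_est)
    P = (np.eye(n_modules) - K @ C) @ P

print("true :", np.round(x_true, 1))
print("estim:", np.round(x_est, 1))
```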