919 results for Method of moments algorithm
Abstract:
The Modeling method of teaching has demonstrated well-documented success in the improvement of student learning. The teacher/researcher in this study was introduced to Modeling through the use of a technique called White Boarding. Without formal training, the researcher began using the White Boarding technique for a limited number of laboratory experiences with his high school physics classes. The question that arose and was investigated in this study is “What specific aspects of the White Boarding process support student understanding?” For the purposes of this study, the White Boarding process was broken down into three aspects – the Analysis of data through the use of Logger Pro software, the Preparation of White Boards, and the Presentations each group gave about their specific lab data. The lab used in this study, an Acceleration of Gravity Lab, was chosen because of the documented difficulties students experience in the graphing of motion. In the lab, students filmed a given motion, utilized Logger Pro software to analyze the motion, prepared a White Board that described the motion with position-time and velocity-time graphs, and then presented their findings to the rest of the class. The Presentation included a class discussion with minimal contribution from the teacher. The three different aspects of the White Boarding experience – Analysis, Preparation, and Presentation – were compared through the use of student learning logs, video analysis of the Presentations, and follow-up interviews with participants. The information and observations gathered were used to determine the level of understanding of each participant during each phase of the lab. The researcher then looked for improvement in the level of student understanding, the number of “aha” moments students had, and the students’ perceptions about which phase was most important to their learning. The results suggest that while all three phases of the White Boarding experience play a part in the learning process for students, the Presentations provided the most significant changes. The implications for instruction are discussed.
Abstract:
This dissertation concerns convergence analysis for nonparametric problems in the calculus of variations and sufficient conditions for a weak local minimizer of a functional for both nonparametric and parametric problems. Newton's method in infinite-dimensional space is proved to be well defined and to converge quadratically to a weak local minimizer of a functional subject to certain boundary conditions. Sufficient conditions for global convergence are proposed, and a well-defined algorithm based on those conditions is presented and proved to converge. Finite element discretization is employed to achieve an implementable line-search-based quasi-Newton algorithm, and a proof of convergence of the discretized algorithm is included. This work also proposes sufficient conditions for a weak local minimizer without using the language of conjugate points. The form of the new conditions is consistent with that in the finite-dimensional case. It is believed that the new form of sufficient conditions will lead to simpler approaches to verifying an extremal as a local minimizer for well-known problems in the calculus of variations.
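As an illustrative companion to this abstract, the sketch below minimizes a discretized one-dimensional functional with a line-search quasi-Newton method (SciPy's BFGS). The functional, grid size and load term are invented for illustration; this is not the dissertation's algorithm, only a minimal example of the discretize-then-quasi-Newton idea.

    import numpy as np
    from scipy.optimize import minimize

    n = 99                            # interior grid points (illustrative)
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)    # interior nodes of a uniform grid on [0, 1]
    f = 10.0 * np.sin(np.pi * x)      # hypothetical load term

    def J(u):
        """Discretized J[u] = integral of sqrt(1 + u'^2) - f*u, with u(0) = u(1) = 0."""
        u_full = np.concatenate(([0.0], u, [0.0]))   # enforce the boundary conditions
        du = np.diff(u_full) / h                     # finite-difference derivative
        return h * np.sum(np.sqrt(1.0 + du**2)) - h * np.sum(f * u)

    # BFGS is a line-search quasi-Newton method; the gradient is approximated numerically.
    res = minimize(J, np.zeros(n), method="BFGS")
    print("converged:", res.success, " J(u*) =", round(res.fun, 6))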
Abstract:
Two methods for registering laser scans of human heads and transforming them to a new, semantically consistent topology defined by a user-provided template mesh are described. Both algorithms are stated within the Iterative Closest Point framework. The first method is based on finding landmark correspondences by iteratively registering the vicinity of a landmark with a re-weighted error function. Thin-plate spline interpolation is then used to deform the template mesh, and finally the scan is resampled in the topology of the deformed template. The second algorithm employs a morphable shape model, which can be computed from a database of laser scans using the first algorithm. It directly optimizes the pose and shape of the morphable model. The use of the algorithm with PCA mixture models, where the shape is split up into regions each described by an individual subspace, is addressed. Mixture models require either blending or regularization strategies, both of which are described in detail. For both algorithms, strategies for filling in missing geometry for incomplete laser scans are described. While an interpolation-based approach can be used to fill in small or smooth regions, the model-driven algorithm is capable of fitting a plausible complete head mesh to arbitrarily small geometry, which is known as "shape completion". The importance of regularization in the case of extreme shape completion is shown.
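For orientation, here is a minimal rigid ICP sketch in Python (NumPy/SciPy): nearest-neighbour correspondences followed by a closed-form SVD (Kabsch) alignment, iterated. It illustrates only the generic ICP framework mentioned above, not the paper's non-rigid, template-based variants; the point clouds and the small test transform are placeholders.

    import numpy as np
    from scipy.spatial import cKDTree

    def icp_rigid(source, target, n_iter=30):
        """Rigidly align 'source' (N x 3) to 'target' (M x 3); return R, t and the moved points."""
        R, t = np.eye(3), np.zeros(3)
        src = source.copy()
        tree = cKDTree(target)
        for _ in range(n_iter):
            _, idx = tree.query(src)                     # closest-point correspondences
            matched = target[idx]
            mu_s, mu_m = src.mean(axis=0), matched.mean(axis=0)
            H = (src - mu_s).T @ (matched - mu_m)
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
            R_step = Vt.T @ D @ U.T                      # Kabsch rotation, reflection-safe
            t_step = mu_m - R_step @ mu_s
            src = src @ R_step.T + t_step
            R, t = R_step @ R, R_step @ t + t_step       # accumulate the transform
        return R, t, src

    # Toy usage: a slightly rotated and shifted copy of a random cloud stands in for a scan.
    rng = np.random.default_rng(0)
    target = rng.normal(size=(500, 3))
    ang = 0.2
    Rz = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                   [np.sin(ang),  np.cos(ang), 0.0],
                   [0.0,          0.0,         1.0]])
    source = target @ Rz.T + np.array([0.05, -0.02, 0.01])
    R, t, aligned = icp_rigid(source, target)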
Abstract:
Cloud Computing enables provisioning and distribution of highly scalable services in a reliable, on-demand and sustainable manner. However, the objectives of managing enterprise distributed applications in cloud environments under Service Level Agreement (SLA) constraints lead to challenges in maintaining optimal resource control. Furthermore, conflicting objectives in the management of cloud infrastructure and distributed applications might lead to violations of SLAs and inefficient use of hardware and software resources. This dissertation focuses on how SLAs can be used as an input to the cloud management system, increasing the efficiency of resource allocation as well as of infrastructure scaling. First, we present an extended SLA semantic model for modelling complex service dependencies in distributed applications and for enabling automated cloud infrastructure management operations. Second, we describe a multi-objective VM allocation algorithm for optimised resource allocation in infrastructure clouds. Third, we describe a method of discovering relations between the performance indicators of services belonging to distributed applications and then using these relations for building scaling rules that a cloud management system (CMS) can use for automated management of VMs. Fourth, we introduce two novel VM-scaling algorithms, which optimally scale systems composed of VMs based on given SLA performance constraints. All of the presented research was implemented and tested using enterprise distributed applications.
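To make the idea of SLA-driven scaling concrete, here is a hypothetical Python sketch of a threshold-style scaling rule of the general kind described above: a measured performance indicator is compared against its SLA target and the VM count is adjusted. All names, margins and limits are invented; the dissertation's algorithms are more elaborate than this.

    from dataclasses import dataclass

    @dataclass
    class SlaConstraint:
        max_response_time_ms: float    # SLA target for the service (hypothetical)
        scale_out_margin: float = 0.9  # act before the target is actually breached
        scale_in_margin: float = 0.5   # shrink only when well below the target

    def scaling_decision(avg_response_time_ms: float, current_vms: int,
                         sla: SlaConstraint, min_vms: int = 1, max_vms: int = 20) -> int:
        """Return the VM count for the next control interval."""
        if avg_response_time_ms > sla.scale_out_margin * sla.max_response_time_ms:
            return min(current_vms + 1, max_vms)    # approaching an SLA violation: scale out
        if avg_response_time_ms < sla.scale_in_margin * sla.max_response_time_ms:
            return max(current_vms - 1, min_vms)    # over-provisioned: scale in
        return current_vms

    # Example: 450 ms measured against a 500 ms SLA target triggers a scale-out.
    print(scaling_decision(450.0, current_vms=3, sla=SlaConstraint(500.0)))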
Abstract:
The effect of a traditional Ethiopian lupin processing method on the chemical composition of lupin seed samples was studied. Two sampling districts, namely Mecha and Sekela, representing the mid- and high-altitude areas of north-western Ethiopia, respectively, were randomly selected. Different types of traditionally processed and marketed lupin seed samples (raw, roasted, and finished) were collected in six replications from each district. Raw samples are unprocessed, and roasted samples are roasted using firewood. Finished samples are those ready for human consumption as a snack. Thousand-seed weight for raw and roasted samples within a study district was similar (P > 0.05), but it was lower (P < 0.01) for finished samples compared to raw and roasted samples. The crude fibre content of the finished lupin seed sample from Mecha was lower (P < 0.01) than that of raw and roasted samples. However, the different lupin samples from Sekela had similar crude fibre content (P > 0.05). The crude protein and crude fat contents of finished samples within a study district were higher (P < 0.01) than those of raw and roasted samples, respectively. Roasting had no effect on the crude protein content of lupin seed samples. The crude ash content of raw and roasted lupin samples within a study district was higher (P < 0.01) than that of finished lupin samples of the respective study districts. The content of quinolizidine alkaloids of finished lupin samples was lower than that of raw and roasted samples. There was also an interaction effect between location and lupin sample type. The traditional processing method of lupin seeds in Ethiopia makes a positive contribution by improving the crude protein and crude fat content and lowering the alkaloid content of the finished product. The study showed the possibility of adopting the traditional processing method to process bitter white lupin for use as a protein supplement in livestock feed in Ethiopia, but further work has to be done on the processing method and on animal evaluation.
Abstract:
Purpose: Ophthalmologists are confronted with a set of different image modalities to diagnose eye tumors, e.g., fundus photography, CT and MRI. However, these images are often complementary and represent pathologies differently. Some aspects of tumors can only be seen in a particular modality. A fusion of modalities would improve the contextual information for diagnosis. The presented work attempts to register color fundus photography with MRI volumes. This would complement the low-resolution 3D information in the MRI with high-resolution 2D fundus images. Methods: MRI volumes were acquired from 12 infants under the age of 5 with unilateral retinoblastoma. The contrast-enhanced T1-FLAIR sequence was performed with an isotropic resolution of less than 0.5 mm. Fundus images were acquired with a RetCam camera. For healthy eyes, two landmarks were used: the optic disk and the fovea. The eyes were detected and extracted from the MRI volume using a 3D adaptation of the Fast Radial Symmetry Transform (FRST). The cropped volume was automatically segmented using the Split Bregman algorithm. The optic nerve was enhanced by a Frangi vessel filter. By intersecting the nerve with the retina, the optic disk was found. The fovea position was estimated by constraining the position with the angle between the optic and the visual axis as well as the distance from the optic disk. The optical axis was detected automatically by fitting a parabola onto the lens surface. On the fundus, the optic disk and the fovea were detected using the method of Budai et al. Finally, the image was projected onto the segmented surface using the lens position as the camera center. In tumor-affected eyes, the manually segmented tumors were used instead of the optic disk and macula for the registration. Results: In all 12 MRI volumes tested, the 24 eyes were found correctly, including healthy and pathological cases. In healthy eyes the optic nerve head was found in all of the tested eyes with an error of 1.08 +/- 0.37 mm. A successful registration can be seen in figure 1. Conclusions: The presented method is a step toward automatic fusion of modalities in ophthalmology. The combination enhances the MRI volume with the higher resolution of the color fundus image on the retina. Tumor treatment planning is improved by avoiding critical structures, and disease progression monitoring is made easier.
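As a geometric illustration of the final projection step (fundus pixels projected onto the segmented surface with the lens as camera center), here is a hedged Python sketch that back-projects pixels from a pinhole camera at the lens position onto a spherical eye model by ray-sphere intersection. The sphere radius, focal length, camera pose and pixel coordinates are invented placeholders, and the real method projects onto the segmented retinal surface rather than an ideal sphere.

    import numpy as np

    def backproject_to_sphere(pixels_uv, cam_center, focal_px, eye_center, eye_radius):
        """pixels_uv: (N, 2) pixel offsets from the principal point; camera looks along -z."""
        d = np.column_stack([pixels_uv[:, 0], pixels_uv[:, 1],
                             -focal_px * np.ones(len(pixels_uv))])
        d /= np.linalg.norm(d, axis=1, keepdims=True)   # unit ray directions
        oc = cam_center - eye_center
        b = d @ oc
        disc = b**2 - (oc @ oc - eye_radius**2)         # ray/sphere discriminant
        t = -b + np.sqrt(np.maximum(disc, 0.0))         # far intersection = retinal surface
        return cam_center + t[:, None] * d

    # Two hypothetical fundus pixels, a lens-position camera and a 12 mm eye model
    uv = np.array([[0.0, 0.0], [50.0, -30.0]])
    pts = backproject_to_sphere(uv, cam_center=np.array([0.0, 0.0, 11.0]),
                                focal_px=500.0, eye_center=np.zeros(3), eye_radius=12.0)
    print(pts)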
Abstract:
Point Distribution Models (PDM) are among the most popular shape description techniques, and their usefulness has been demonstrated in a wide variety of medical imaging applications. However, to adequately characterize the underlying modeled population it is essential to have a representative number of training samples, which is not always possible. This problem is especially relevant as the complexity of the modeled structure increases, with the modeling of ensembles of multiple 3D organs being one of the most challenging cases. In this paper, we introduce a new GEneralized Multi-resolution PDM (GEM-PDM) in the context of multi-organ analysis, able to efficiently characterize the different inter-object relations as well as the particular locality of each object separately. Importantly, unlike previous approaches, the configuration of the algorithm is automated thanks to a new agglomerative landmark clustering method proposed here, which also allows us to identify smaller anatomically significant regions within organs. The significant advantage of the GEM-PDM method over two previous approaches (PDM and hierarchical PDM), in terms of shape modeling accuracy and robustness to noise, has been successfully verified for two different databases of sets of multiple organs: six subcortical brain structures, and seven abdominal organs. Finally, we propose the integration of the new shape modeling framework into an active-shape-model-based segmentation algorithm. The resulting algorithm, named GEMA, provides better overall performance than the two classical approaches tested, ASM and hierarchical ASM, when applied to the segmentation of 3D brain MRI.
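For context, the sketch below builds a classic single-resolution Point Distribution Model (the baseline the paper generalizes): pre-aligned landmark configurations are stacked as vectors and a PCA yields a mean shape plus the main modes of variation. The training data and retained-variance threshold are random placeholders, and none of the GEM-PDM machinery is reproduced.

    import numpy as np

    def build_pdm(shapes, var_kept=0.98):
        """shapes: (n_samples, n_landmarks, 3) pre-aligned landmark sets."""
        X = shapes.reshape(shapes.shape[0], -1)              # one row per training shape
        mean = X.mean(axis=0)
        U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
        var = s**2 / (X.shape[0] - 1)                        # per-mode variance
        k = np.searchsorted(np.cumsum(var) / var.sum(), var_kept) + 1
        return mean, Vt[:k].T, var[:k]                       # mean, modes P, eigenvalues

    def synthesize(mean, P, b):
        """Generate a shape from mode weights b: x = x_mean + P b."""
        return (mean + P @ b).reshape(-1, 3)

    # Toy usage with 40 random "training shapes" of 50 landmarks each
    rng = np.random.default_rng(0)
    shapes = rng.normal(size=(40, 50, 3))
    mean, P, lam = build_pdm(shapes)
    new_shape = synthesize(mean, P, 0.5 * np.sqrt(lam))      # half a std. dev. per mode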
Abstract:
Behavior is one of the most important indicators for assessing cattle health and well-being. The objective of this study was to develop and validate a novel algorithm to monitor the locomotor behavior of loose-housed dairy cows based on the output of the RumiWatch pedometer (ITIN+HOCH GmbH, Fütterungstechnik, Liestal, Switzerland). Locomotion data were acquired by simultaneous pedometer measurements at a sampling rate of 10 Hz and video recordings for later manual observation. The study consisted of 3 independent experiments. Experiment 1 was carried out to develop and validate the algorithm for lying behavior, experiment 2 for walking and standing behavior, and experiment 3 for stride duration and stride length. The final version was validated using raw data collected from cows not included in the development of the algorithm. Spearman correlation coefficients were calculated between accelerometer variables and the respective data derived from the video recordings (gold standard). Dichotomous data were expressed as the proportion of correctly detected events, and the overall difference for continuous data was expressed as the relative measurement error. The proportions of correctly detected events or bouts were 1 for stand-ups, lie-downs, standing bouts, and lying bouts, and 0.99 for walking bouts. The relative measurement error and Spearman correlation coefficient for lying time were 0.09% and 1; for standing time, 4.7% and 0.96; for walking time, 17.12% and 0.96; for number of strides, 6.23% and 0.98; for stride duration, 6.65% and 0.75; and for stride length, 11.92% and 0.81, respectively. The strong to very high correlations of the variables between visual observation and converted pedometer data indicate that the novel RumiWatch algorithm may markedly improve automated livestock management systems for efficient health monitoring of dairy cows.
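As a small illustration of the validation statistics reported here (Spearman correlation and relative measurement error against a video gold standard), the Python sketch below computes both for invented per-cow lying-time values; it is not the RumiWatch algorithm itself, and the exact error definition is an assumption.

    import numpy as np
    from scipy.stats import spearmanr

    def relative_measurement_error(sensor, gold):
        """Overall difference expressed as a percentage of the gold-standard total (assumed definition)."""
        sensor, gold = np.asarray(sensor, float), np.asarray(gold, float)
        return 100.0 * abs(sensor.sum() - gold.sum()) / gold.sum()

    sensor_lying_min = [412.0, 505.0, 463.0, 521.0]   # per-cow daily lying time, sensor (invented)
    video_lying_min  = [410.0, 507.0, 462.0, 520.0]   # per-cow daily lying time, video (invented)

    rho, _ = spearmanr(sensor_lying_min, video_lying_min)
    err = relative_measurement_error(sensor_lying_min, video_lying_min)
    print(f"Spearman rho = {rho:.2f}, relative measurement error = {err:.2f}%")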
Abstract:
Asynchronous level-crossing sampling analog-to-digital converters (ADCs) are known to be more energy efficient and to produce fewer samples than their equidistantly sampling counterparts. However, as the required threshold voltage is lowered, the number of samples and, in turn, the data rate and the energy consumed by the overall system increase. In this paper, we present a cubic Hermitian vector-based technique for online compression of asynchronously sampled electrocardiogram signals. The proposed method provides computationally efficient data compression. The algorithm has O(n) complexity and is thus well suited to asynchronous ADCs. Our algorithm requires no data buffering, maintaining the energy advantage of asynchronous ADCs. The proposed compression method achieves a compression ratio of up to 90% with achievable percentage root-mean-square difference ratios as low as 0.97. The algorithm preserves the superior feature-to-feature timing accuracy of asynchronously sampled signals. These advantages are achieved in a computationally efficient manner since algorithm boundary parameters for the signals are extracted a priori.
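To illustrate the general idea (not the paper's exact algorithm), the Python sketch below keeps a sparse subset of knots from a non-uniformly sampled toy signal, reconstructs it with a cubic Hermite spline (used here as a generic stand-in for the cubic Hermitian vector-based technique), and reports the compression ratio and percentage root-mean-square difference. The knot-selection rule, test signal and parameters are all invented.

    import numpy as np
    from scipy.interpolate import CubicHermiteSpline

    def compress(t, x, keep_every=10):
        """Toy knot selection: keep every k-th non-uniform sample plus both endpoints."""
        idx = np.unique(np.r_[0, np.arange(0, len(t), keep_every), len(t) - 1])
        slopes = np.gradient(x[idx], t[idx])        # finite-difference tangents at the knots
        return t[idx], x[idx], slopes

    # Toy non-uniformly sampled "ECG-like" signal
    rng = np.random.default_rng(1)
    t = np.sort(rng.uniform(0.0, 2.0, 2000))
    x = np.sin(2 * np.pi * 1.3 * t) + 0.3 * np.sin(2 * np.pi * 7.0 * t)

    tk, xk, mk = compress(t, x)
    recon = CubicHermiteSpline(tk, xk, mk)(t)

    cr = 100.0 * (1.0 - len(tk) / len(t))                           # compression ratio (%)
    prd = 100.0 * np.linalg.norm(x - recon) / np.linalg.norm(x)     # percentage RMS difference
    print(f"compression ratio {cr:.1f}%, PRD {prd:.2f}%")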
Abstract:
Genetic anticipation is defined as a decrease in the age of onset or an increase in severity as a disorder is transmitted through subsequent generations. Anticipation has been noted in the literature for over a century. Recently, anticipation in several diseases, including Huntington's Disease, Myotonic Dystrophy and Fragile X Syndrome, was shown to be caused by the expansion of triplet repeats. Anticipation effects have also been observed in numerous mental disorders (e.g. Schizophrenia, Bipolar Disorder), cancers (Li-Fraumeni Syndrome, Leukemia) and other complex diseases. Several statistical methods have been applied to determine whether anticipation is a true phenomenon in a particular disorder, including standard statistical tests and newly developed affected parent/affected child pair methods. These methods have been shown to be inappropriate for assessing anticipation for a variety of reasons, including familial correlation and low power. Therefore, we have developed family-based likelihood modeling approaches to model the underlying transmission of the disease gene and the penetrance function and hence detect anticipation. These methods can be applied to extended families, thus improving the power to detect anticipation compared with existing methods based only upon parents and children. The first method we propose is based on the regressive logistic hazard model. This approach models anticipation by a generational covariate. The second method allows alleles to mutate as they are transmitted from parents to offspring and is appropriate for modeling the known triplet repeat diseases, in which the disease alleles can become more deleterious as they are transmitted across generations. To evaluate the new methods, we performed extensive simulation studies for data simulated under different conditions to evaluate the effectiveness of the algorithms to detect genetic anticipation. Analysis by the first method yielded empirical power greater than 87% based on the 5% type I error critical value identified in each simulation, depending on the method of data generation and the current-age criteria. Analysis by the second method was not possible due to the current formulation of the software. The application of this method to Huntington's Disease and Li-Fraumeni Syndrome data sets revealed evidence for a generation effect in both cases.
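As a simplified illustration of the first approach (a logistic hazard model with a generational covariate), the Python sketch below fits a discrete-time logistic hazard to invented person-year records by maximum likelihood. The data layout, covariates and effect sizes are placeholders, not the paper's family-based likelihood model.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import expit

    def neg_log_likelihood(beta, age, generation, event):
        """Logistic hazard h = sigmoid(b0 + b1*age + b2*generation) per person-year at risk."""
        h = expit(beta[0] + beta[1] * age + beta[2] * generation)
        h = np.clip(h, 1e-12, 1 - 1e-12)
        return -np.sum(event * np.log(h) + (1 - event) * np.log(1 - h))

    # Toy person-year records: each row is one year at risk for one family member.
    rng = np.random.default_rng(2)
    n = 5000
    age = rng.integers(10, 70, n).astype(float)
    generation = rng.integers(0, 3, n).astype(float)        # 0 = oldest generation, 1, 2 = later ones
    true_h = expit(-7.0 + 0.05 * age + 0.8 * generation)     # later generations at higher hazard
    event = rng.random(n) < true_h

    fit = minimize(neg_log_likelihood, x0=np.zeros(3),
                   args=(age, generation, event), method="BFGS")
    print("beta_hat =", fit.x)   # a positive generation coefficient would suggest anticipation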
Abstract:
Sexually transmitted infections (STIs) are a major public health problem, and controlling their spread is a priority. According to the World Health Organization (WHO), 340 million new cases of treatable STIs occur yearly among 15–49 year olds around the world (1). Infection with STIs can lead to several complications such as pelvic inflammatory disease (PID), cervical cancer, infertility, ectopic pregnancy, and even death (1). Additionally, STIs and associated complications are among the top disease types for which healthcare is sought in developing nations (1), and according to the UNAIDS report, there is a strong connection between STIs and the sexual spread of HIV infection (2). In fact, it is estimated that the presence of an untreated STI can increase the likelihood of contracting and spreading HIV by a factor of up to 10 (2). In addition, developing countries are poorer in resources and lack inexpensive and precise diagnostic laboratory tests for STIs, thereby exacerbating the problem. Thus, the WHO recommends syndromic management of STIs for delivering care where lab testing is scarce or unattainable (1). This approach uses an easy-to-use algorithm to help healthcare workers recognize symptoms and signs so as to provide treatment for the likely cause of the syndrome. Furthermore, according to the WHO, syndromic management offers immediate and valid treatment compared with clinical diagnosis and is also more cost-effective for some syndromes than laboratory testing (1). In addition, even though it has been shown that the vaginal discharge syndrome has low specificity for gonorrhea and Chlamydia and can lead to overtreatment (1), this is the recommended way to manage STIs in developing nations. Thus, the purpose of this paper is to specifically address the following questions: is syndromic management working to lower the STI burden in developing nations? How effective is it, and should it still be recommended? To answer these questions, a systematic literature review was conducted to evaluate the current effectiveness of syndromic management in developing nations. This review examined articles published over the past 5 years that compared syndromic management to laboratory testing and reported sensitivity, specificity, and positive predictive value data. Focusing mainly on the vaginal discharge, urethral discharge, and genital ulcer algorithms, it was seen that though syndromic management is more effective in diagnosing and treating urethral and genital ulcer syndromes in men, there still remains an urgent need to revise the WHO recommendations for managing STIs in developing nations. Current studies have continued to show decreased specificity, sensitivity and positive predictive values for the vaginal discharge syndrome, and high rates of asymptomatic infections and healthcare workers neglecting to follow guidelines limit the usefulness of syndromic management. Furthermore, though advocated as cost-effective by the WHO, syndromic management incurs a cost from treating uninfected people. Instead of improving this system, it is recommended that the development of better and less expensive point-of-care rapid diagnostic test kits become the focus for STI diagnosis and treatment in developing nations.
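For reference, the evaluation metrics this review relies on can be computed from a 2x2 table comparing syndromic diagnosis with laboratory testing, as in the short Python sketch below; the counts are invented for illustration only.

    def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int):
        sensitivity = tp / (tp + fn)   # infected patients the algorithm catches
        specificity = tn / (tn + fp)   # uninfected patients it correctly leaves untreated
        ppv = tp / (tp + fp)           # treated patients who were actually infected
        return sensitivity, specificity, ppv

    # Hypothetical vaginal-discharge-algorithm counts versus laboratory testing
    sens, spec, ppv = diagnostic_metrics(tp=60, fp=240, fn=40, tn=660)
    print(f"sensitivity {sens:.2f}, specificity {spec:.2f}, PPV {ppv:.2f}")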
Abstract:
Ice shelves strongly impact coastal Antarctic sea ice and the associated ecosystem through the formation of a sub-sea-ice platelet layer. Although progress has been made in determining and understanding its spatio-temporal variability based on point measurements, investigating this phenomenon on a larger scale remains a challenge due to logistical constraints and a lack of suitable methodology. In this study, we applied a laterally constrained Marquardt-Levenberg inversion to a unique multi-frequency electromagnetic (EM) induction sounding dataset obtained on the landfast sea ice of Atka Bay, eastern Weddell Sea, in 2012. In addition to consistent fast-ice thicknesses and conductivities along >100 km of transects, we present the first comprehensive, high-resolution platelet-layer thickness and conductivity dataset recorded on Antarctic sea ice. The reliability of the algorithm was confirmed using synthetic data, and the inverted platelet-layer thicknesses agreed with drill-hole measurements within the data uncertainty. Ice-volume fractions were calculated from platelet-layer conductivities, revealing that an older and thicker platelet layer is denser and more compacted than a loosely attached, young platelet layer. The overall platelet-layer volume below Atka Bay fast ice suggests that the contribution of ocean/ice-shelf interaction to sea-ice volume in this region is even higher than previously thought. This study also implies that multi-frequency EM induction sounding is an effective approach for determining platelet-layer volume on a larger scale than previously feasible. When applied to airborne multi-frequency EM, this method could provide a step towards an Antarctic-wide quantification of ocean/ice-shelf interaction.
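To show the flavour of a laterally constrained inversion, the Python sketch below jointly fits one model parameter per station by stacking per-station data misfits with first differences between neighbouring stations in a damped least-squares solve (SciPy's trust-region reflective solver rather than a strict Marquardt-Levenberg implementation). The forward model, noise level and regularisation weight are invented and far simpler than a real multi-frequency EM sounding.

    import numpy as np
    from scipy.optimize import least_squares

    def forward(thickness, freqs):
        """Toy 'apparent response' per frequency for each station (illustrative only)."""
        return 1.0 - np.exp(-np.outer(thickness, 1.0 / np.sqrt(freqs)))

    def residuals(m, d_obs, freqs, lam):
        misfit = (forward(m, freqs) - d_obs).ravel()   # data misfit per station and frequency
        lateral = lam * np.diff(m)                     # penalise jumps between adjacent stations
        return np.concatenate([misfit, lateral])

    freqs = np.array([1.0, 4.0, 16.0])
    true_m = 1.0 + 0.5 * np.sin(np.linspace(0, np.pi, 30))     # smooth "layer thickness" profile
    d_obs = forward(true_m, freqs) + 0.01 * np.random.default_rng(3).normal(size=(30, 3))

    fit = least_squares(residuals, x0=np.full(30, 0.5), args=(d_obs, freqs, 0.5))
    print("recovered thicknesses:", np.round(fit.x, 2))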
Abstract:
This paper describes new approaches to improve the local and global approximation (matching) and modeling capability of the Takagi–Sugeno (T-S) fuzzy model. The main aim is to obtain high function-approximation accuracy and fast convergence. The main problem encountered is that the T-S identification method cannot be applied when the membership functions are overlapped by pairs. This restricts the application of the T-S method, because this type of membership function has been widely used during the last two decades in the stability analysis and controller design of fuzzy systems and is popular in industrial control applications. The approach developed here can be considered a generalized version of the T-S identification method with optimized performance in approximating nonlinear functions. We propose a noniterative method based on a weighting-of-parameters approach, and an iterative algorithm applying the extended Kalman filter, based on the same parameter-weighting idea. We show that the Kalman filter is an effective tool in the identification of the T-S fuzzy model. A fuzzy-controller-based linear quadratic regulator is proposed in order to show the effectiveness of the estimation method developed here in control applications. An illustrative example of an inverted pendulum is chosen to evaluate the robustness and remarkable performance of the proposed method, locally and globally, in comparison with the original T-S model. Simulation results indicate the potential, simplicity, and generality of the algorithm. In this paper, we also prove that these algorithms converge very fast, making them very practical to use.
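As background, the Python sketch below identifies the consequent parameters of a first-order Takagi–Sugeno model with Gaussian memberships by global least squares — a generic textbook formulation, not the paper's weighting-of-parameters or Kalman-filter schemes. The memberships, rule count and test function are invented.

    import numpy as np

    def memberships(x, centers, sigma):
        w = np.exp(-0.5 * ((x[:, None] - centers[None, :]) / sigma) ** 2)
        return w / w.sum(axis=1, keepdims=True)            # normalized firing strengths

    def identify_ts(x, y, centers, sigma):
        """Fit rule consequents y_i = a_i * x + b_i by global least squares."""
        phi = memberships(x, centers, sigma)               # (N, n_rules)
        X = np.hstack([phi * x[:, None], phi])             # each rule contributes phi_i * [x, 1]
        theta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return theta[: len(centers)], theta[len(centers):]  # slopes a_i, offsets b_i

    def predict(x, centers, sigma, a, b):
        phi = memberships(x, centers, sigma)
        return np.sum(phi * (a * x[:, None] + b), axis=1)

    # Toy usage: approximate a noisy sine with 5 overlapping Gaussian rules
    x = np.linspace(-3, 3, 400)
    y = np.sin(x) + 0.05 * np.random.default_rng(4).normal(size=x.size)
    centers, sigma = np.linspace(-3, 3, 5), 1.0
    a, b = identify_ts(x, y, centers, sigma)
    print("max fit error:", np.max(np.abs(predict(x, centers, sigma, a, b) - y)))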
Abstract:
A method to analyze parabolic reflectors with an arbitrary piecewise rim is presented in this communication. This kind of reflector, when operating as a collimator in compact-range facilities, needs to be large in terms of wavelength. Its analysis is very inefficient when carried out with full-wave/MoM techniques, and PO techniques are not very appropriate for design work. Fast GO formulations, in turn, do not offer enough accuracy to assess performance. The proposed algorithm is based on a GO-PWS hybrid scheme, using analytical as well as non-analytical formulations. On one side, an analytical treatment of polygonal-rim reflectors is carried out. On the other side, the non-analytical calculations are based on efficient operations, such as a 2-dimensional FFT of order M². The combination of these two techniques in the algorithm provides genuine ad hoc design capability, achieved by speeding up the analysis. The purpose of the algorithm is to obtain an optimal conformal serrated-edge reflector design through the analysis of the field quality within the quiet zone that it is able to generate in its forward half-space.
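As a minimal illustration of the plane-wave-spectrum (PWS) part of such a hybrid scheme, the Python sketch below decomposes a sampled aperture field with a 2-D FFT, propagates each plane wave analytically, and reassembles the field at a distance z. The aperture, sampling and distances are placeholders; the reflector geometry, serrations and the GO part of the hybrid are not modelled.

    import numpy as np

    def pws_propagate(E0, dx, wavelength, z):
        """Propagate a sampled aperture field E0 (N x N) by a distance z (same units as dx)."""
        N = E0.shape[0]
        k = 2 * np.pi / wavelength
        fx = np.fft.fftfreq(N, d=dx)                             # spatial frequencies
        kx, ky = np.meshgrid(2 * np.pi * fx, 2 * np.pi * fx, indexing="ij")
        kz = np.sqrt((k**2 - kx**2 - ky**2).astype(complex))     # evanescent components decay
        spectrum = np.fft.fft2(E0)                               # plane-wave spectrum
        return np.fft.ifft2(spectrum * np.exp(1j * kz * z))      # field at distance z

    # 10-wavelength square aperture with a uniform field, observed 50 wavelengths away
    wl, N = 1.0, 256
    x = (np.arange(N) - N / 2) * 0.1 * wl
    aperture = (np.abs(x)[:, None] < 5 * wl) & (np.abs(x)[None, :] < 5 * wl)
    E_z = pws_propagate(aperture.astype(complex), 0.1 * wl, wl, 50 * wl)
    print("peak |E| in the observation plane:", np.abs(E_z).max())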
Abstract:
Zernike polynomials are a well-known set of functions that find many applications in image or pattern characterization because they allow the construction of shape descriptors that are invariant against translations, rotations or scale changes. The concepts behind them can be extended to higher-dimensional spaces, making them also suitable for describing volumetric data. They have been used less than their properties might suggest because of their high computational cost. We present a parallel implementation of 3D Zernike moments analysis, written in C with CUDA extensions, which makes it practical to employ Zernike descriptors in interactive applications, yielding a performance of several frames per second on voxel datasets about 200³ in size. In our contribution, we describe the challenges of implementing 3D Zernike analysis on a general-purpose GPU. These include how to deal with numerical inaccuracies, due to the high precision demands of the algorithm, and how to deal with the high volume of input data so that it does not become a bottleneck for the system.
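As a CPU-side illustration of the quantities involved, the Python sketch below computes raw 3D geometric moments of a voxel grid; 3D Zernike moments can be obtained as linear combinations of such moments. The voxel object and moment order are placeholders, and the GPU/CUDA parallelization discussed in the paper is not reproduced here.

    import numpy as np

    def geometric_moments(vol, order):
        """Return M[p, q, r] = sum_xyz x^p y^q z^r * vol[x, y, z] for p + q + r <= order,
        with voxel-centre coordinates scaled to [-1, 1] so the object fits the unit cube."""
        n = vol.shape[0]                                      # assumes a cubic volume
        c = (np.arange(n) + 0.5) / n * 2.0 - 1.0
        powers = np.stack([c**p for p in range(order + 1)])   # (order + 1, n) table of coordinate powers
        M = np.zeros((order + 1,) * 3)
        for p in range(order + 1):
            for q in range(order + 1 - p):
                for r in range(order + 1 - p - q):
                    M[p, q, r] = np.einsum("i,j,k,ijk->", powers[p], powers[q], powers[r], vol)
        return M

    vol = np.zeros((64, 64, 64))
    vol[20:44, 20:44, 20:44] = 1.0                            # toy voxel object
    M = geometric_moments(vol, order=8)
    print("M000 (object volume in voxels):", M[0, 0, 0])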