900 results for exponential Rosenbrock-type methods
Abstract:
To obtain the desired accuracy of a robot, there are two techniques available. The first option would be to make the robot match the nominal mathematical model. In other words, the manufacturing and assembly tolerances of every part would be extremely tight so that all of the various parameters would match the “design” or “nominal” values as closely as possible. This method can satisfy most accuracy requirements, but the cost increases dramatically as the accuracy requirement increases. Alternatively, a more cost-effective solution is to build a manipulator with relaxed manufacturing and assembly tolerances and to compensate for the actual errors of the robot by modifying the mathematical model in the controller. This is the essence of robot calibration. Simply put, robot calibration is the process of defining an appropriate error model and then identifying the parameter errors that make the error model match the robot as closely as possible. This work focuses on kinematic calibration of a 10 degree-of-freedom (DOF) redundant serial-parallel hybrid robot. The robot consists of a 4-DOF serial mechanism and a 6-DOF hexapod parallel manipulator. The redundant 4-DOF serial structure is used to enlarge the workspace, and the 6-DOF hexapod manipulator is used to provide high load capability and stiffness for the whole structure. The main objective of the study is to develop a suitable calibration method to improve the accuracy of the redundant serial-parallel hybrid robot. To this end, a Denavit–Hartenberg (DH) hybrid error model and a Product-of-Exponentials (POE) error model are developed for error modeling of the proposed robot. Furthermore, two kinds of global optimization methods, i.e., the differential-evolution (DE) algorithm and the Markov chain Monte Carlo (MCMC) algorithm, are employed to identify the parameter errors of the derived error model.
A measurement method based on a 3-2-1 wire-based pose estimation system is proposed and implemented in a SolidWorks environment to simulate the real experimental validations. Numerical simulations and SolidWorks prototype-model validations are carried out on the hybrid robot to verify the effectiveness, accuracy and robustness of the calibration algorithms.
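The identification step described above can be illustrated with a toy version of the approach: a minimal sketch of classic DE/rand/1/bin recovering the parameter errors of a hypothetical 1-DOF error model (the link geometry, bounds, and DE settings below are illustrative assumptions, not the paper's robot or algorithm):

```python
import math
import random

random.seed(0)

L0 = 1.0                        # nominal link length
TRUE_DL, TRUE_DQ = 0.03, -0.01  # "actual" parameter errors to be recovered

def forward(q, dl, dq):
    """Tool x-position of a 1-DOF link with length and joint-offset errors."""
    return (L0 + dl) * math.cos(q + dq)

# Simulated pose measurements at several joint angles (noise-free).
poses = [(q, forward(q, TRUE_DL, TRUE_DQ)) for q in [0.2 * k for k in range(10)]]

def cost(p):
    """Sum of squared residuals between measured and modelled positions."""
    return sum((forward(q, p[0], p[1]) - x) ** 2 for q, x in poses)

def differential_evolution(cost, bounds, pop_size=20, F=0.6, CR=0.9, gens=300):
    """Classic DE/rand/1/bin on a box-bounded parameter space."""
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
            # Binomial crossover of the mutant a + F*(b - c) with the target.
            trial = [a[d] + F * (b[d] - c[d]) if random.random() < CR else pop[i][d]
                     for d in range(dim)]
            if cost(trial) <= cost(pop[i]):
                pop[i] = trial
    return min(pop, key=cost)

best = differential_evolution(cost, [(-0.1, 0.1), (-0.1, 0.1)])
```

In a real calibration the cost would compare measured and modelled full poses over the DH or POE error model; the population-based search is what lets DE (and MCMC) cope with the nonconvexity that defeats purely local identification.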
Plane wave discontinuous Galerkin methods for the 2D Helmholtz equation: analysis of the $p$-version
Abstract:
Plane wave discontinuous Galerkin (PWDG) methods are a class of Trefftz-type methods for the spatial discretization of boundary value problems for the Helmholtz operator $-\Delta-\omega^2$, $\omega>0$. They include the so-called ultra weak variational formulation from [O. Cessenat and B. Després, SIAM J. Numer. Anal., 35 (1998), pp. 255–299]. This paper is concerned with the a priori convergence analysis of PWDG in the case of $p$-refinement, that is, the study of the asymptotic behavior of relevant error norms as the number of plane wave directions in the local trial spaces is increased. For convex domains in two space dimensions, we derive convergence rates, employing mesh skeleton-based norms, duality techniques from [P. Monk and D. Wang, Comput. Methods Appl. Mech. Engrg., 175 (1999), pp. 121–136], and plane wave approximation theory.
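The local trial spaces whose $p$-refinement is analyzed can be written down explicitly; a representative form (the standard choice of equispaced directions; normalization conventions vary between papers) is

```latex
% Local Trefftz space on a mesh element K: p plane waves with
% equispaced propagation directions on the unit circle.
V_p(K) \;=\; \operatorname{span}\bigl\{\, x \mapsto e^{\mathrm{i}\omega\, d_\ell \cdot x} \;:\; \ell = 1,\dots,p \,\bigr\},
\qquad
d_\ell \;=\; \bigl(\cos\tfrac{2\pi\ell}{p},\; \sin\tfrac{2\pi\ell}{p}\bigr).
```

Since $|d_\ell| = 1$, each basis function satisfies $-\Delta u - \omega^2 u = 0$ exactly (the Trefftz property), and $p$-refinement means increasing $p$ on a fixed mesh.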
Abstract:
A numerical study of the mass conservation of MAC-type methods for viscoelastic free-surface flows is presented. We use an implicit formulation which allows for larger time steps; therefore, the time-marching schemes for advecting the free-surface marker particles have to be accurate in order to preserve the good mass conservation properties of this methodology. We then present an improvement, using a Runge-Kutta scheme coupled with a local linear extrapolation on the free surface. A thorough study of the viscoelastic impacting drop problem, for both Oldroyd-B and XPP fluid models, is presented, investigating the influence of time step, grid spacing and other model parameters on the overall mass conservation of the method. Furthermore, an unsteady fountain flow is also simulated to illustrate the low mass conservation error obtained.
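The effect of the time-marching scheme on marker advection can be seen in a minimal sketch (a rigid-rotation test field of my own choosing, not the paper's viscoelastic solver): in a divergence-free field a marker should stay on its streamline, and radius drift is a proxy for the mass-conservation error of the front tracking.

```python
import math

def velocity(x, y):
    """Divergence-free test field: rigid-body rotation about the origin."""
    return -y, x

def advect_euler(x, y, dt):
    """Forward Euler marker advection (first order)."""
    u, v = velocity(x, y)
    return x + dt * u, y + dt * v

def advect_rk2(x, y, dt):
    """Midpoint Runge-Kutta: sample the velocity at a provisional half step."""
    u1, v1 = velocity(x, y)
    u2, v2 = velocity(x + 0.5 * dt * u1, y + 0.5 * dt * v1)
    return x + dt * u2, y + dt * v2

dt, steps = 0.05, 200
xe, ye = 1.0, 0.0   # marker advected with Euler
xr, yr = 1.0, 0.0   # marker advected with RK2
for _ in range(steps):
    xe, ye = advect_euler(xe, ye, dt)
    xr, yr = advect_rk2(xr, yr, dt)

# Both markers start on the unit circle and should stay on it.
drift_euler = abs(math.hypot(xe, ye) - 1.0)
drift_rk2 = abs(math.hypot(xr, yr) - 1.0)
```

With these parameters, Euler drifts outward by tens of percent over ten radians of rotation while the midpoint scheme stays on the circle to within about 1e-4, which is why a higher-order advection scheme matters once the implicit flow solver permits large time steps.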
Abstract:
Objective: The study aimed to identify the risk factors involved in initiating thromboembolism (TE) in pancreatic cancer (PC) patients, with a focus on ABO blood type. Methods and Patients: There were 35.7% confirmed cases of TE, and 64.3% of cases remained free of TE (n=687). Of the TE cases, 12.7% were pulmonary embolism (PE) only, 9% deep vein thrombosis (DVT) only, 53.5% other sites only, 3.3% combined PE and DVT, 8.6% combined PE and other sites, 9.8% combined DVT and other sites, and 3.3% all three combined. Results: The risk factors for thrombosis identified by multivariate logistic regression were: history of previous anti-thrombotic treatment, tumor site in the pancreatic body or tail, large tumor size, and maximum glucose category above 126 and 200 mg/dL. The factors associated with worse overall survival by multivariate Cox regression and Kaplan-Meier analyses were: locally advanced or metastatic stage, worsening performance status, high CA 19-9 levels, and HbA1c levels above 6% at diagnosis. There were 29.1% and 39.1% of patients with thrombosis in the O and non-O blood type groups, respectively. Both non-O blood type (P=0.02) and the A, B and AB blood types (P=0.007) were associated with thrombosis as compared to O type. The odds of thrombosis were nearly half in O blood type patients as compared to non-O blood type [OR 0.54 (95% CI 0.37-0.79), P<0.001]. Conclusion: A better understanding of the TE-PC relationship and the involved risk factors may provide insights into tumor biology and patient response to prophylactic anticoagulation therapy.
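An unadjusted odds ratio of the kind reported above, with its Wald confidence interval, comes from a 2x2 table; a minimal sketch with illustrative counts (these numbers are mine, not the study's data, and the study's OR is regression-adjusted):

```python
import math

# Hypothetical 2x2 table: exposure = O blood type, outcome = thrombosis.
a, b = 80, 195   # O type:     with / without thrombosis
c, d = 165, 247  # non-O type: with / without thrombosis

odds_ratio = (a * d) / (b * c)

# Wald 95% CI on the log-odds scale.
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
```

An OR below 1 with a CI excluding 1, as in the abstract's [OR 0.54 (95% CI 0.37-0.79)], indicates reduced odds of thrombosis in the O blood type group.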
Abstract:
Objective: Current prevalence of smoking, even where data are available, is a poor proxy for cumulative hazards of smoking, which depend on several factors including the age at which smoking began, duration of smoking, number of cigarettes smoked per day, degree of inhalation, and cigarette characteristics such as tar and nicotine content or filter type. Methods: We extended the Peto-Lopez smoking impact ratio method to estimate accumulated hazards of smoking for different regions of the world. Lung cancer mortality data were obtained from the Global Burden of Disease mortality database. The American Cancer Society Cancer Prevention Study, phase II (CPS-II), with follow-up for the years 1982 to 1988, was the reference population. For the global application of the method, never-smoker lung cancer mortality rates were chosen based on the estimated use of coal for household energy in each region. Results: Men in industrialised countries of Europe, North America, and the Western Pacific had the largest accumulated hazards of smoking. Young and middle age males in many regions of the developing world also had large smoking risks. The accumulated hazards of smoking for women were highest in North America followed by Europe. Conclusions: In the absence of detailed data on smoking prevalence and history, lung cancer mortality provides a robust indicator of the accumulated hazards of smoking. These hazards in developing countries are currently more concentrated among young and middle aged males.
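The Peto-Lopez smoking impact ratio referred to above can be summarized as follows (notation is mine, chosen to match the usual presentation of the method: $C_{LC}$ and $N_{LC}$ are the study population's lung cancer mortality rates overall and among never-smokers, and $S^{*}_{LC}$, $N^{*}_{LC}$ the smoker and never-smoker rates in the reference CPS-II cohort):

```latex
% Smoking impact ratio: excess lung cancer mortality in the study population,
% normalized by the excess observed in the reference (CPS-II) cohort.
\mathrm{SIR} \;=\; \frac{C_{LC} - N_{LC}}{S^{*}_{LC} - N^{*}_{LC}}
```

The SIR is then used in place of a directly measured smoking prevalence when computing the accumulated hazard attributable to smoking.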
Abstract:
Solving systems of nonlinear equations is a very important task, since such problems emerge mostly through the mathematical modelling of real problems that arise naturally in many branches of engineering and in the physical sciences. The problem can be naturally reformulated as a global optimization problem. In this paper, we show that a self-adaptive combination of a metaheuristic with a classical local search method is able to solve some difficult problems that are not solved by Newton-type methods.
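The reformulation is standard: a root of F(x) = 0 is a global minimizer (with value zero) of the merit function ||F(x)||^2. A minimal sketch of the global-plus-local idea, using a multistart global phase and a derivative-free compass search as the local polish (the example system, bounds, and search settings are my own illustration, not the paper's self-adaptive algorithm):

```python
import math
import random

random.seed(1)

def F(x, y):
    """Illustrative nonlinear system F(x, y) = 0."""
    return x**2 + y**2 - 4.0, math.exp(x) + y - 1.0

def merit(p):
    """Global-optimization reformulation: minimize the squared residual norm."""
    f1, f2 = F(p[0], p[1])
    return f1 * f1 + f2 * f2

def compass_search(p, step=0.5, tol=1e-12):
    """Derivative-free local polish: try +/- step along each coordinate,
    halving the step when no move improves the merit function."""
    best = list(p)
    fbest = merit(best)
    while step > tol:
        improved = False
        for d in (0, 1):
            for s in (step, -step):
                trial = list(best)
                trial[d] += s
                ft = merit(trial)
                if ft < fbest:
                    best, fbest = trial, ft
                    improved = True
        if not improved:
            step *= 0.5
    return best

# Global phase: a coarse grid plus random starts; local phase: compass polish.
starts = [(float(i), float(j)) for i in (-2, 0, 2) for j in (-2, 0, 2)]
starts += [(random.uniform(-3, 3), random.uniform(-3, 3)) for _ in range(20)]
best = min((compass_search(s) for s in starts), key=merit)
```

The global phase supplies starting points in different basins, so the method does not depend on the single good initial guess that Newton-type iterations require.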
Abstract:
OBJECTIVE - To determine the prevalence of hyperhomocystinemia in patients with acute ischemic syndrome of the unstable angina type. METHODS - We prospectively studied 46 patients (24 females) with unstable angina and 46 control patients (19 males), paired by sex and age, blinded to the laboratory data. Details of diets, smoking habits, medication used, body mass index, and the presence of hypertension and diabetes were recorded, as were plasma lipid and glucose levels, C-reactive protein, and lipoperoxidation in all participants. Patients with renal disease were excluded. Plasma homocysteine was estimated using high-pressure liquid chromatography. RESULTS - Plasma homocysteine levels were significantly higher in the group of patients with unstable angina (12.7±6.7 µmol/L) than in the control group (8.7±4.4 µmol/L) (p<0.05). Among males, homocystinemia was higher in the group with unstable angina than in the control group, but this difference was not statistically significant (14.1±5.9 µmol/L versus 11.9±4.2 µmol/L). Among females, however, a statistically significant difference was observed between the 2 groups: 11.0±7.4 µmol/L versus 6.4±2.9 µmol/L (p<0.05) in the unstable angina and control groups, respectively. Approximately 24% of the patients with unstable angina had homocysteine levels above 15 µmol/L. CONCLUSION - High homocysteine levels seem to be a relevant and prevalent factor in the population with unstable angina, particularly among females.
Abstract:
The analysis of multiexponential decays is challenging because of their complex nature. When analyzing these signals, not only the parameters, but also the orders of the models, have to be estimated. We present an improved spectroscopic technique specially suited for this purpose. The proposed algorithm combines an iterative linear filter with an iterative deconvolution method. A thorough analysis of the noise effect is presented. The performance is tested with synthetic and experimental data.
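Why multiexponential analysis is hard, and why the orders and parameters are coupled, can be seen in a minimal separable least-squares sketch (this is a generic grid-search illustration of my own, not the authors' filter-plus-deconvolution algorithm): for fixed decay constants the amplitudes enter linearly and can be solved exactly, so only the nonlinear decay constants need searching.

```python
import math

# Synthetic two-exponential decay (noise-free for clarity):
# y(t) = a1*exp(-t/tau1) + a2*exp(-t/tau2)
A1, TAU1, A2, TAU2 = 2.0, 0.5, 1.0, 3.0
ts = [0.05 * k for k in range(200)]
ys = [A1 * math.exp(-t / TAU1) + A2 * math.exp(-t / TAU2) for t in ts]

def fit_amplitudes(tau1, tau2):
    """For fixed decay constants, solve the 2x2 normal equations of the
    linear least-squares problem for the amplitudes."""
    e1 = [math.exp(-t / tau1) for t in ts]
    e2 = [math.exp(-t / tau2) for t in ts]
    s11 = sum(u * u for u in e1)
    s12 = sum(u * v for u, v in zip(e1, e2))
    s22 = sum(v * v for v in e2)
    b1 = sum(u * y for u, y in zip(e1, ys))
    b2 = sum(v * y for v, y in zip(e2, ys))
    det = s11 * s22 - s12 * s12
    a1 = (b1 * s22 - b2 * s12) / det
    a2 = (b2 * s11 - b1 * s12) / det
    err = sum((a1 * u + a2 * v - y) ** 2 for u, v, y in zip(e1, e2, ys))
    return a1, a2, err

# Grid-search the nonlinear decay constants (tau1 < tau2), amplitudes exact.
grid = [0.1 * k for k in range(1, 60)]
fits = ((t1, t2) + fit_amplitudes(t1, t2)
        for i, t1 in enumerate(grid) for t2 in grid[i + 1:])
tau1, tau2, a1, a2, err = min(fits, key=lambda r: r[-1])
```

With noise added, nearby (tau1, tau2) pairs give almost identical residuals, which is exactly the ill-conditioning that motivates the careful noise analysis and order estimation in the abstract.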
Abstract:
With the trend in molecular epidemiology towards both genome-wide association studies and complex modelling, the need for large sample sizes to detect small effects and to allow for the estimation of many parameters within a model continues to increase. Unfortunately, most methods of association analysis have been restricted to either a family-based or a case-control design, resulting in the lack of synthesis of data from multiple studies. Transmission disequilibrium-type methods for detecting linkage disequilibrium from family data were developed as an effective way of preventing the detection of association due to population stratification. Because these methods condition on parental genotype, however, they have precluded the joint analysis of family and case-control data, although methods for case-control data may not protect against population stratification and do not allow for familial correlations. We present here an extension of a family-based association analysis method for continuous traits that will simultaneously test for, and if necessary control for, population stratification. We further extend this method to analyse binary traits (and therefore family and case-control data together) and to accurately estimate genetic effects in the population, even when using an ascertained family sample. Finally, we present the power of this binary extension for both family-only and joint family and case-control data, and demonstrate the accuracy of the association parameter and variance components in an ascertained family sample.
Abstract:
Objective: Jaundice is the clinical manifestation of hyperbilirubinemia. It is considered a sign of either a liver disease or, less often, of a hemolytic disorder. It can be divided into obstructive and non-obstructive types, involving an increase of indirect (non-conjugated) bilirubin or an increase of direct (conjugated) bilirubin, respectively, but it can also manifest as a mixed type. Methods: This article updates the current knowledge concerning jaundice's etiology, pathophysiological mechanisms, complications, and treatment by reviewing the latest medical literature. It also presents an approach to the treatment and pathogenesis of jaundice in special populations such as neonates and pregnant women. Results: Treatment consists of the management of the underlying diseases responsible for the jaundice and of its complications. The clinical prognosis of jaundice depends on the etiology. Surgical treatment of jaundiced patients is associated with high mortality and morbidity rates. Studies have shown that the severity of jaundice and the presence of malignant disease are important risk factors for post-operative mortality. Conclusions: Early detection of jaundice is of vital importance because of its involvement in malignancy or in other benign conditions requiring immediate treatment in order to avoid further complications.
Abstract:
A comparative performance analysis of four geolocation methods in terms of their theoretical root mean square positioning errors is provided. Comparison is established in two different ways: strict and average. In the strict type, methods are examined for a particular geometric configuration of base stations (BSs) with respect to mobile position, which determines a given noise profile affecting the respective time-of-arrival (TOA) or time-difference-of-arrival (TDOA) estimates. In the average type, methods are evaluated in terms of the expected covariance matrix of the position error over an ensemble of random geometries, so that comparison is geometry independent. Exact semianalytical equations and associated lower bounds (depending solely on the noise profile) are obtained for the average covariance matrix of the position error in terms of the so-called information matrix specific to each geolocation method. Statistical channel models inferred from field trials are used to define realistic prior probabilities for the random geometries. A final evaluation provides extensive results relating the expected position error to channel model parameters and the number of base stations.
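The kind of TOA positioning compared above can be illustrated with the textbook linearized least-squares fix (the station layout and mobile position below are illustrative, and this is a generic estimator rather than any of the paper's four methods): squaring each range equation and subtracting the first one removes the quadratic term in the unknown position, leaving a linear system.

```python
import math

# Base-station positions (hypothetical layout) and the true mobile position.
BS = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
true = (3.0, 4.0)

# Noise-free TOA ranges r_i = ||x - p_i||.
ranges = [math.dist(true, p) for p in BS]

# Linearize by subtracting the first range equation:
# (p_i - p_1)^T x = (r_1^2 - r_i^2 - |p_1|^2 + |p_i|^2) / 2
rows, rhs = [], []
p1, r1 = BS[0], ranges[0]
for p, r in zip(BS[1:], ranges[1:]):
    rows.append((p[0] - p1[0], p[1] - p1[1]))
    rhs.append((r1**2 - r**2 - (p1[0]**2 + p1[1]**2) + (p[0]**2 + p[1]**2)) / 2)

# Least squares via the 2x2 normal equations A^T A x = A^T b.
s11 = sum(a * a for a, _ in rows)
s12 = sum(a * b for a, b in rows)
s22 = sum(b * b for _, b in rows)
t1 = sum(a * c for (a, _), c in zip(rows, rhs))
t2 = sum(b * c for (_, b), c in zip(rows, rhs))
det = s11 * s22 - s12 * s12
est = ((t1 * s22 - t2 * s12) / det, (t2 * s11 - t1 * s12) / det)
```

With noisy ranges the same solve yields a position error whose covariance depends on the station geometry, which is precisely the geometry dependence the paper's average-type comparison integrates out.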
Abstract:
PURPOSE: The purpose of our study was to assess whether a model combining clinical factors, MR imaging features, and genomics would better predict overall survival of patients with glioblastoma (GBM) than either individual data type. METHODS: The study was conducted leveraging The Cancer Genome Atlas (TCGA) effort supported by the National Institutes of Health. Six neuroradiologists reviewed MRI images from The Cancer Imaging Archive (http://cancerimagingarchive.net) of 102 GBM patients using the VASARI scoring system. The patients' clinical and genetic data were obtained from the TCGA website (http://www.cancergenome.nih.gov/). Patient outcome was measured in terms of overall survival time. The association between different categories of biomarkers and survival was evaluated using Cox analysis. RESULTS: The features that were significantly associated with survival were: (1) clinical factors: chemotherapy; (2) imaging: proportion of tumor contrast enhancement on MRI; and (3) genomics: HRAS copy number variation. The combination of these three biomarkers resulted in an incremental increase in the strength of prediction of survival, with the model that included clinical, imaging, and genetic variables having the highest predictive accuracy (area under the curve 0.679±0.068, Akaike's information criterion 566.7, P<0.001). CONCLUSION: A combination of clinical factors, imaging features, and HRAS copy number variation best predicts survival of patients with GBM.
Abstract:
We discuss statistical inference problems associated with identification and testability in econometrics, and we emphasize the common nature of the two issues. After reviewing the relevant statistical notions, we consider in turn inference in nonparametric models and recent developments on weakly identified models (or weak instruments). We point out that many hypotheses, for which test procedures are commonly proposed, are not testable at all, while some frequently used econometric methods are fundamentally inappropriate for the models considered. Such situations lead to ill-defined statistical problems and are often associated with a misguided use of asymptotic distributional results. Concerning nonparametric hypotheses, we discuss three basic problems for which such difficulties occur: (1) testing a mean (or a moment) under (too) weak distributional assumptions; (2) inference under heteroskedasticity of unknown form; (3) inference in dynamic models with an unlimited number of parameters. Concerning weakly identified models, we stress that valid inference should be based on proper pivotal functions (a condition not satisfied by standard Wald-type methods based on standard errors), and we discuss recent developments in this field, mainly from the viewpoint of building valid tests and confidence sets. The techniques discussed include alternative proposed statistics, bounds, projection, split-sampling, conditioning, and Monte Carlo tests. The possibility of deriving a finite-sample distributional theory, robustness to the presence of weak instruments, and robustness to the specification of a model for endogenous explanatory variables are stressed as important criteria for assessing alternative procedures.
Abstract:
Introduction Provoked vestibulodynia (PVD) is a prevalent genital pain syndrome that has been assumed to be chronic, with little spontaneous remission. Despite this assumption, there is a dearth of empirical evidence regarding the progression of PVD in a natural setting. Although many treatments are available, there is no single treatment that has demonstrated efficacy above others. Aims The aims of this secondary analysis of a prospective study were to (i) assess changes over a 2-year period in pain, depressive symptoms, and sexual outcomes in women with PVD; and (ii) examine changes based on treatment type(s). Methods Participants completed questionnaire packages at Time 1 and a follow-up package 2 years later. Main Outcome Measures Visual analog scale of genital pain, Global Measure of Sexual Satisfaction, Female Sexual Function Index, Beck Depression Inventory, Dyadic Adjustment Scale, and sexual intercourse attempts over the past month. Results Two hundred thirty-nine women with PVD completed both the Time 1 and Time 2 questionnaires. For the sample as a whole, there was significant improvement over 2 years on pain ratings, sexual satisfaction, sexual function, and depressive symptoms. The most commonly received treatments were physical therapy, sex/psychotherapy, and medical treatment, although 41.0% did not undergo any treatment. Women receiving no treatment also improved significantly on pain ratings. No single treatment type predicted better outcome for any variable except depressive symptoms, in which women who underwent surgery were more likely to improve. Discussion These results suggest that PVD may significantly reduce in severity over time. Participants demonstrated clinically significant pain improvement, even when they did not receive treatment. Furthermore, the only single treatment type predicting better outcomes was surgery, and only for depressive symptoms, accounting for only 2.3% of the variance.
These data do not demonstrate the superiority of any one treatment and underscore the need to have control groups in PVD treatment trials; otherwise, improvements may simply be the result of natural progression.