203 results for Smoothness
Abstract:
Liposoluble vitamin C derivatives, such as tetra-isopalmitoyl ascorbic acid (IPAA), are often used in dermocosmetic products due to their higher stability compared with the free form of vitamin C, as well as their proposed effects on the skin; however, there are no studies analyzing IPAA stability or its in vivo effects when present in dermocosmetic formulations. Thus, this study aimed to evaluate the chemical stability and the pre-clinical and clinical efficacy, in terms of skin hydration and microrelief, of dermocosmetic formulations containing IPAA. The chemical stability of formulations containing 1% IPAA was evaluated by HPLC under heat stress for 35 days. For the pre-clinical evaluation, the experimental formulations were topically applied to hairless mouse skin for 5 days, and the animal skin was analyzed by non-invasive biophysical techniques (water content of the stratum corneum, TEWL, viscoelasticity, and microrelief) and by histopathological studies. For the clinical efficacy tests, the formulations were topically applied to the forearm and face of human volunteers, and the skin was evaluated 3 h and 15 days after application by the same non-invasive techniques. The results showed that the formulations containing IPAA had moderate stability and pronounced moisturizing effects on the stratum corneum and viable epidermis. These formulations also improved skin microrelief, especially with respect to skin smoothness and roughness.
Abstract:
A new method for the analysis of scattering data from lamellar bilayer systems is presented. The method employs a form-free description of the cross-section structure of the bilayer, and the fit is performed directly on the scattering data, also introducing a structure factor when required. The cross-section structure (the electron density profile in the case of X-ray scattering) is described by a set of Gaussian functions, and the technique is termed Gaussian deconvolution. The coefficients of the Gaussians are optimized using a constrained least-squares routine that induces smoothness of the electron density profile. The optimization is coupled with the point-of-inflection method for determining the optimal weight of the smoothness constraint. With the new approach, it is possible to optimize the form factor, the structure factor, and several other model parameters simultaneously. The applicability of the method is demonstrated in a study of a multilamellar system composed of lecithin bilayers, where the form factor and structure factor are obtained simultaneously, and the results provide new insight into this well-known system.
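For readers who want the mechanics, here is a minimal numerical sketch of the idea: the profile is a sum of Gaussians with fixed centres, the analytic cosine transform of that sum gives the model amplitude, and a second-difference penalty on the coefficients plays the role of the smoothness constraint. The function names, the fixed-width basis, the direct fit to form-factor amplitudes rather than full intensities (no structure factor), and the penalty weight are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Basis: analytic Fourier cosine transform of unit-amplitude Gaussians with
# fixed centres z_i and common width sigma (a symmetric profile is assumed).
def gaussian_basis(q, centres, sigma):
    q = q[:, None]
    return (np.sqrt(2.0 * np.pi) * sigma
            * np.exp(-0.5 * (q * sigma) ** 2) * np.cos(q * centres))

def fit_profile(q, data, centres, sigma, weight):
    """Least squares with a second-difference smoothness penalty on the
    Gaussian amplitudes: minimise ||A c - data||^2 + weight * ||D2 c||^2."""
    A = gaussian_basis(q, centres, sigma)
    D2 = np.diff(np.eye(len(centres)), n=2, axis=0)  # second-difference operator
    lhs = A.T @ A + weight * D2.T @ D2
    return np.linalg.solve(lhs, A.T @ data)

# Toy usage: recover smooth amplitudes from noisy synthetic form-factor data.
rng = np.random.default_rng(0)
centres = np.linspace(0.0, 30.0, 15)      # profile grid (arbitrary units)
sigma = 2.0
q = np.linspace(0.02, 0.6, 200)
true_c = np.exp(-0.5 * ((centres - 15.0) / 5.0) ** 2) - 0.4
data = gaussian_basis(q, centres, sigma) @ true_c + rng.normal(0.0, 0.05, q.size)
coeffs = fit_profile(q, data, centres, sigma, weight=10.0)
```

Raising `weight` trades fidelity to the data for a smoother recovered profile, which is where a rule such as the point-of-inflection method comes in to pick the weight.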
Abstract:
We study the action of a weighted Fourier–Laplace transform on the functions in the reproducing kernel Hilbert space (RKHS) associated with a positive definite kernel on the sphere. After defining a notion of smoothness implied by the transform, we show that smoothness of the kernel implies the same smoothness for the generating elements (spherical harmonics) in the Mercer expansion of the kernel. We prove a reproducing property for the weighted Fourier–Laplace transform of the functions in the RKHS and embed the RKHS into spaces of smooth functions. Some relevant properties of the embedding are considered, including compactness and boundedness. The approach taken in the paper includes two important notions of differentiability characterized by weighted Fourier–Laplace transforms: fractional derivatives and Laplace–Beltrami derivatives.
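Schematically, and in standard notation rather than the paper's own symbols, the setting can be summarized as follows; the weight w(k) and the Laplace–Beltrami multiplier are the usual textbook forms and are given here only for orientation.

```latex
% Mercer expansion of a positive definite kernel K on the sphere S^d,
% with spherical harmonics Y_{k,j} and summable coefficients a_k >= 0:
K(x,y) = \sum_{k \ge 0} a_k \sum_{j=1}^{d_k} Y_{k,j}(x)\, Y_{k,j}(y),
\qquad x, y \in S^d .
% A weighted Fourier--Laplace transform acts as a multiplier on the
% harmonic expansion of f = \sum_{k,j} \hat f(k,j)\, Y_{k,j}:
(\mathcal{D}_w f)(x) = \sum_{k \ge 0} w(k) \sum_{j=1}^{d_k} \hat f(k,j)\, Y_{k,j}(x).
% For example, the Laplace--Beltrami derivative of order s corresponds to
% w(k) = \big(k(k+d-1)\big)^{s/2}, since the eigenvalue of -\Delta_{S^d}
% on degree-k harmonics is k(k+d-1).
```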
Abstract:
The accuracy and performance of variational optical flow methods have increased considerably in recent years. These techniques are complex, and care must be taken in their implementation. The aim of this work is to present a comprehensible implementation of recent variational optical flow methods. We start with an energy model that relies on brightness and gradient constancy terms and a flow-based smoothness term. We minimize this energy model and derive an efficient implicit numerical scheme. In the experimental results, we evaluate the accuracy and performance of this implementation on the Middlebury benchmark database and show that it is competitive with current methods in the literature. To increase performance, we use a simple strategy to parallelize the execution on multi-core processors.
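The energy described (brightness constancy, gradient constancy, flow-based smoothness) has the same structure as the well-known model of Brox et al.; a representative functional of this type, written here as an illustration rather than the paper's exact formulation, is:

```latex
% Robust penaliser \Psi(s^2) = \sqrt{s^2 + \epsilon^2}, flow w = (u, v):
E(u,v) = \int_{\Omega} \Psi\!\big( |I(\mathbf{x}+w) - I(\mathbf{x})|^2
       + \gamma\, |\nabla I(\mathbf{x}+w) - \nabla I(\mathbf{x})|^2 \big)\, d\mathbf{x}
       \;+\; \alpha \int_{\Omega} \Psi\!\big( |\nabla u|^2 + |\nabla v|^2 \big)\, d\mathbf{x}.
```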
Abstract:
In this paper we show that a classic optical flow technique by Nagel and Enkelmann can be regarded as an early anisotropic diffusion method with a diffusion tensor. We introduce three improvements into the model formulation: we avoid inconsistencies caused by centering the brightness term and the smoothness term in different images, we use a linear scale-space focusing strategy from coarse to fine scales to avoid convergence to physically irrelevant local minima, and we create an energy functional that is invariant under linear brightness changes. Applying a gradient descent method to the resulting energy functional leads to a system of diffusion-reaction equations. We prove that this system has a unique solution under realistic assumptions on the initial data, and we present an efficient linear implicit numerical scheme in detail. Our method creates flow fields with 100% density over the entire image domain, is robust under a large range of parameter variations, and can recover displacement fields far beyond the typical one-pixel limits characteristic of many differential methods for determining optical flow. We show that it performs better than the classic optical flow methods with 100% density evaluated by Barron et al. (1994). Our software is available on the Internet.
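For orientation, the Nagel–Enkelmann smoothness term is commonly written with the image-driven diffusion tensor below (this is the standard form from the literature, not necessarily the paper's exact notation; λ is a contrast parameter):

```latex
% The tensor smooths along image edges but not across them:
D(\nabla I) = \frac{1}{|\nabla I|^2 + 2\lambda^2}
  \Big( \nabla I^{\perp} (\nabla I^{\perp})^{\top} + \lambda^2 \, \mathrm{Id} \Big),
\qquad \nabla I^{\perp} \perp \nabla I,
% giving the anisotropic smoothness term
\alpha \int_{\Omega} \big( \nabla u^{\top} D(\nabla I)\, \nabla u
  + \nabla v^{\top} D(\nabla I)\, \nabla v \big)\, d\mathbf{x}.
```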
Abstract:
The aim of this work is to study several strategies for preserving flow discontinuities in variational optical flow methods. We analyze the combination of robust functionals and diffusion tensors in the smoothness assumption. Our study includes the use of tensors based on decreasing functions, which have been shown to provide good results. However, this approach has several limitations and usually does not perform better than other basic approaches: it typically introduces instabilities in the computed motion fields in the form of independent "blobs" of vectors with large magnitude...
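As a concrete, purely illustrative instance of a "decreasing function" regularizer, the isotropic image-driven special case uses a scalar diffusivity that decays with image contrast, switching smoothing off near edges (a Charbonnier-type choice; the work compares several such combinations):

```latex
g\big(|\nabla I|^2\big) = \frac{1}{\sqrt{1 + |\nabla I|^2/\lambda^2}}, \qquad
E_{\mathrm{smooth}}(u,v) = \alpha \int_{\Omega} g\big(|\nabla I|^2\big)
  \big( |\nabla u|^2 + |\nabla v|^2 \big)\, d\mathbf{x}.
```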
Abstract:
Isogeometric analysis (IGA) has arisen as an attempt to unify the fields of CAD and classical finite element methods. The main idea of IGA is to use for analysis the same functions (splines) that are used in the CAD representation of the geometry. The main advantages over the traditional finite element method are the higher smoothness of the numerical solution and a more accurate representation of the geometry. IGA seems to be a promising tool with a wide range of applications in engineering. However, this relatively new technique still has open problems that require a solution. In this work we present our results and contributions to this issue…
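The smoothness advantage can be seen directly in code: a cubic B-spline basis is C² across interior knots, whereas classical Lagrange finite element bases are only C⁰ across element boundaries. The short Python sketch below (knot vector and coefficients chosen arbitrarily for illustration) evaluates a cubic spline and its still-continuous second derivative:

```python
import numpy as np
from scipy.interpolate import BSpline

# Clamped cubic B-spline on [0, 4] with interior knots at 1, 2, 3.
# len(coeffs) must equal len(knots) - k - 1 = 11 - 3 - 1 = 7.
knots = np.array([0, 0, 0, 0, 1, 2, 3, 4, 4, 4, 4], dtype=float)
coeffs = np.array([0.0, 1.0, -1.0, 2.0, 0.5, 1.5, 0.0])
spline = BSpline(knots, coeffs, k=3)

x = np.linspace(0.0, 4.0, 9)
print(spline(x))                 # the curve itself
print(spline.derivative(2)(x))   # second derivative: continuous across knots
```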
Abstract:
Impairment of postural control is a common consequence of Parkinson's disease (PD) that becomes increasingly critical as the disease progresses, in spite of the available medications. Postural instability is one of the most disabling features of PD: it causes difficulties with postural transitions, movement initiation, and gait, impairs the ability to live independently at home, and is the major cause of falls. Falls are frequent (over 38% of patients fall each year) and may have adverse consequences such as soft-tissue injuries, hip fractures, and immobility due to fear of falling. As the disease progresses, both postural instability and fear of falling worsen, leading patients with PD to become increasingly immobilized. The main aims of this dissertation are to: 1) detect and assess, in a quantitative way, impairments of postural control in PD subjects, and investigate the central mechanisms that control such motor performance and how these mechanisms are affected by levodopa; 2) develop and validate a protocol, using wearable inertial sensors, to measure postural sway and postural transitions prior to step initiation; 3) find quantitative measures sensitive to impairments of postural control in early stages of PD and quantitative biomarkers of disease progression; and 4) test the feasibility and effects of a recently developed audio-biofeedback system in maintaining balance in subjects with PD. In the first set of studies, we showed how PD reduces functional limits of stability as well as the magnitude and velocity of postural preparation during voluntary forward and backward leaning while standing. Levodopa improves the limits of stability but not the postural strategies used to achieve the leaning. Further, we found a strong relationship between backward voluntary limits of stability and the size of the automatic postural response to backward perturbations in control subjects and in PD subjects ON medication. This relationship might suggest that the central nervous system presets postural response parameters based on perceived maximum limits, and that this presetting is absent in PD patients OFF medication but restored with levodopa replacement. Furthermore, we investigated how the size of preparatory postural adjustments (APAs) prior to step initiation depends on initial stance width. We found that patients with PD did not scale up the size of their APA with stance width as much as control subjects, so they had much more difficulty initiating a step from a wide stance than from a narrow stance. This result supports the hypothesis that subjects with PD maintain a narrow stance as a compensation for their inability to sufficiently increase the size of their lateral APA to allow speedy step initiation from a wide stance. In the second set of studies, we demonstrated that it is possible to use wearable accelerometers to quantify postural performance during quiet stance and step initiation balance tasks in healthy subjects. We used a model to predict center-of-pressure displacements from accelerations measured at the upper and lower back and the thigh. This approach allows the measurement of balance control without a force platform, outside the laboratory environment. We then used wearable accelerometers in a population of early, untreated PD patients, and found that postural control in stance and postural preparation prior to a step are impaired early in the disease, when the typical balance and gait initiation symptoms are not yet clearly manifested.
These novel results suggest that technological measures of postural control can be more sensitive than clinical measures. Furthermore, we assessed spontaneous sway and step initiation longitudinally over one year in patients with early, untreated PD. We found that changes in trunk sway, and especially movement smoothness measured as jerk, could be used as an objective measure of PD and its progression. In the third set of studies, we studied the feasibility of adapting an existing audio-biofeedback device to improve balance control in patients with PD. Preliminary results showed that PD subjects found the system easy to use and helpful, and that they were able to correctly follow the audio information when available. Audio-biofeedback improved the properties of trunk sway during quiet stance. Our results have many implications for i) understanding the central mechanisms that control postural motor performance and how these mechanisms are affected by levodopa; ii) designing innovative protocols for measuring and remotely monitoring motor performance in the elderly and in subjects with PD; and iii) developing technologies, such as augmented biofeedback paradigms, for improving balance, mobility, and consequently quality of life in patients with balance disorders such as PD.
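As a minimal illustration of the jerk-based smoothness measure mentioned above, the sketch below computes the RMS jerk magnitude from tri-axial trunk acceleration. The sampling rate, the synthetic signals, and the absence of filtering are illustrative simplifications, not the thesis protocol:

```python
import numpy as np

def rms_jerk(acc, fs):
    """RMS jerk magnitude (m/s^3) from tri-axial trunk acceleration.
    acc: array of shape (n_samples, 3); fs: sampling rate in Hz."""
    jerk = np.gradient(acc, 1.0 / fs, axis=0)   # d(acceleration)/dt per axis
    return np.sqrt(np.mean(np.sum(jerk ** 2, axis=1)))

# Toy usage: smooth slow sway scores lower than noisy, jerky sway.
fs = 100.0
t = np.arange(0.0, 30.0, 1.0 / fs)
smooth_sway = 0.1 * np.column_stack([np.sin(2 * np.pi * 0.3 * t)] * 3)
rng = np.random.default_rng(0)
noisy_sway = smooth_sway + rng.normal(0.0, 0.02, smooth_sway.shape)
print(rms_jerk(smooth_sway, fs) < rms_jerk(noisy_sway, fs))  # True
```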
Abstract:
In order to synthesize proton-conducting materials that retain acids in the membrane under fuel cell operating conditions, this dissertation focuses mainly on the synthesis of poly(vinylphosphonic acid) grafted polybenzimidazole (PVPA grafted PBI) and on the fabrication of multilayer membranes. The PVPA grafted PBI membrane can be synthesized by the "grafting through" method (also called the macromonomer method), in which a monomer (e.g., vinylphosphonic acid) is radically copolymerized with a macromonomer carrying olefin groups (e.g., allyl grafted PBI or vinylbenzyl grafted PBI). This approach is inherently limited in its ability to produce graft copolymers with well-defined architectural and structural parameters. The incorporation of poly(vinylphosphonic acid) into PBI led to improvements in proton conductivity of up to 10^-2 S/cm. Regarding multilayer membranes, the proton-conducting layer-by-layer (LBL) assembly of strong polyacids such as poly(vinylphosphonic acid), poly(vinylsulfonic acid), and poly(styrenesulfonic acid), paired with basic polymers such as poly(4-vinylimidazole) and poly(benzimidazole), which are appropriate for proton exchange membrane fuel cell applications, is described. Proton conductivity increases with increasing smoothness of the film, and the maximum measured conductivity was 10^-4 S/cm at 25°C. Recently, anhydrous proton-conducting membranes with flexible structural backbones, showing proton-conducting properties comparable to Nafion, have been the focus of research. A flexible polymer backbone allows high segmental mobility and thus a sufficiently low glass transition temperature (Tg), an essential factor for reaching highly conductive systems. Among polymers with a flexible chain backbone, poly(vinylphosphonic acid), poly(vinylbenzylphosphonic acid), poly(2-vinylbenzimidazole), poly(4-styrenesulfonic acid), poly(4-vinylimidazole), poly(4-vinylimidazole-co-vinylphosphonic acid), and poly(4-vinylimidazole-co-4-styrenesulfonic acid) are interesting materials for fuel cell applications. Finally, a polybenzimidazole with an anthracene structural unit was synthesized in order to avoid modification reactions at the imidazole ring, since the anthracene unit favors reaction with an olefin via the Diels-Alder reaction.
Abstract:
Given a reductive group G acting on an affine scheme X over C and a Hilbert function h: Irr G → N_0, we construct the moduli space M_θ(X) of θ-stable (G,h)-constellations on X, which is a common generalisation of the invariant Hilbert scheme after Alexeev and Brion and the moduli space of θ-stable G-constellations for finite groups G introduced by Craw and Ishii. Our construction of a morphism M_θ(X) → X//G makes this moduli space a candidate for a resolution of singularities of the quotient X//G. Furthermore, we determine the invariant Hilbert scheme of the zero fibre of the moment map of an action of SL_2 on (C²)⁶ as one of the first examples of invariant Hilbert schemes with multiplicities. While doing this, we present a general procedure for the realisation of such calculations. We also consider questions of smoothness and connectedness and thereby show that our Hilbert scheme gives a resolution of singularities of the symplectic reduction of the action.
Abstract:
This paper presents a nonrigid point set matching method based on kernel density correlation and shows its application in statistical-model-based 2D/3D reconstruction of a scaled, patient-specific model from an uncalibrated X-ray radiograph. In this method, both the reference point set and the floating point set are first represented by kernel density estimates. A correlation measure between the two kernel density estimates is then optimized to find a displacement field that moves the floating point set onto the reference point set. Regularization terms based on the overall deformation energy and the motion smoothness energy constrain the displacement field for robust point set matching. Incorporating this nonrigid point set matching method into a statistical-model-based 2D/3D reconstruction framework, we can reconstruct a scaled, patient-specific model from noisy edge points extracted directly from the X-ray radiograph by an edge detector. Our experiments on datasets of two patients and six cadavers demonstrate a mean reconstruction error of 1.9 mm.
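A minimal sketch of the kernel-density-correlation similarity follows: with isotropic Gaussian kernels, the correlation (inner product) of the two density estimates has a closed form over all point pairs. The names and bandwidth are illustrative, and the regularized displacement-field optimization from the paper is omitted:

```python
import numpy as np

def kde_correlation(ref, flt, sigma):
    """Correlation of two Gaussian KDEs (constants dropped).
    ref: (m, d) reference points; flt: (n, d) floating points.
    Uses the identity that the integral of the product of two isotropic
    Gaussians centred at x_i and y_j depends only on ||x_i - y_j||."""
    d2 = np.sum((ref[:, None, :] - flt[None, :, :]) ** 2, axis=-1)
    return np.sum(np.exp(-d2 / (4.0 * sigma ** 2)))

# Gradient ascent on this measure would move the floating set towards the
# reference set; deformation-energy and smoothness terms would then
# regularise the resulting displacement field.
```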
Abstract:
This paper presents a new approach for reconstructing a patient-specific shape model and internal relative intensity distribution of the proximal femur from a limited number (e.g., 2) of calibrated C-arm images or X-ray radiographs. Our approach uses independent shape and appearance models that are learned from a set of training data to encode the a priori information about the proximal femur. An intensity-based non-rigid 2D-3D registration algorithm is then proposed to deformably fit the learned models to the input images. The fitting is conducted iteratively by minimizing the dissimilarity between the input images and the associated digitally reconstructed radiographs of the learned models together with regularization terms encoding the strain energy of the forward deformation and the smoothness of the inverse deformation. Comprehensive experiments conducted on images of cadaveric femurs and on clinical datasets demonstrate the efficacy of the present approach.
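Schematically, the fitting criterion described above can be written as follows; the symbols are illustrative rather than the paper's notation:

```latex
% theta = statistical model parameters, \varphi = forward deformation,
% DRR_i = digitally reconstructed radiograph associated with input image I_i:
(\hat\theta, \hat\varphi) = \arg\min_{\theta,\varphi}\;
  \sum_i \mathcal{D}\big( I_i,\ \mathrm{DRR}_i(\theta,\varphi) \big)
  + \lambda_1\, E_{\mathrm{strain}}(\varphi)
  + \lambda_2\, E_{\mathrm{smooth}}\big(\varphi^{-1}\big).
```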
Abstract:
We investigate the interplay of smoothness and monotonicity assumptions when estimating a density from a sample of observations. The nonparametric maximum likelihood estimator of a decreasing density on the positive half line attains a rate of convergence at a fixed point if the density has a negative derivative. The same rate is obtained by a kernel estimator, but the limit distributions are different. If the density is both differentiable and known to be monotone, then a third estimator is obtained by isotonization of a kernel estimator. We show that this again attains the rate of convergence and compare the limit distributions of the three types of estimators. We show that both isotonization and smoothing lead to a more concentrated limit distribution, and we study the dependence on the proportionality constant in the bandwidth. We also show that isotonization does not change the limit behavior of a kernel estimator with a larger bandwidth, in the case that the density is known to have more than one derivative.
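A minimal sketch of the third estimator: kernel-smooth first, then isotonize so the estimate is decreasing on the positive half line. It assumes a Gaussian kernel and uses scikit-learn's pool-adjacent-violators implementation for the projection; the bandwidth, grid, and sample are arbitrary choices:

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

def decreasing_kernel_estimate(sample, grid, h):
    """Gaussian kernel density estimate, then projection onto the set of
    non-increasing functions via pool-adjacent-violators."""
    kde = np.mean(
        np.exp(-0.5 * ((grid[:, None] - sample[None, :]) / h) ** 2), axis=1
    ) / (h * np.sqrt(2.0 * np.pi))
    iso = IsotonicRegression(increasing=False)   # enforce monotone decrease
    return iso.fit_transform(grid, kde)

rng = np.random.default_rng(0)
sample = rng.exponential(scale=1.0, size=500)    # true density is decreasing
grid = np.linspace(0.01, 5.0, 200)
est = decreasing_kernel_estimate(sample, grid, h=0.3)
```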
Abstract:
In this work, we present a multichannel EEG decomposition model based on an adaptive topographic time-frequency approximation technique. It is an extension of the matching pursuit algorithm, called dependency multichannel matching pursuit (DMMP). It takes into account the physiologically explainable and statistically observable topographic dependencies between the channels, namely the spatial smoothness of neighboring electrodes implied by the electric leadfield. DMMP decomposes a multichannel signal as a weighted sum of atoms from a given dictionary, in which all channels are represented by exactly the same subset of a complete dictionary. The decomposition is illustrated on topographic EEG data recorded under different physiological conditions, using a complete Gabor dictionary. Furthermore, the single-channel time-frequency distribution is extended to a multichannel time-frequency distribution, which can be used to visualize the decomposition structure of multichannel EEG. A clustering procedure applied to the topographies (the vectors of each atom's contribution to the signal in every channel, as produced by DMMP) leads to an extremely sparse topographic decomposition of the EEG.
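The core multichannel matching-pursuit step can be sketched as follows: each iteration selects the single atom that jointly explains the most energy across all channels, so every channel ends up expanded over the same atom subset, and the per-channel weights form the topographies. The dictionary, the stopping rule, and the omission of DMMP's explicit spatial-dependency modeling are simplifications of the actual method:

```python
import numpy as np

def multichannel_mp(signals, dictionary, n_atoms):
    """signals: (n_channels, n_samples); dictionary: (n_total, n_samples),
    rows normalised to unit norm. Returns chosen atom indices and the
    per-channel weights (the topography of each selected atom)."""
    residual = signals.copy()
    chosen, weights = [], []
    for _ in range(n_atoms):
        corr = residual @ dictionary.T                 # (n_channels, n_total)
        k = int(np.argmax(np.sum(corr ** 2, axis=0)))  # jointly best atom
        w = corr[:, k]                                 # per-channel weights
        residual -= np.outer(w, dictionary[k])         # subtract projections
        chosen.append(k)
        weights.append(w)
    return chosen, np.array(weights)
```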
Abstract:
In environmental epidemiology, exposure X and health outcome Y vary in space and time. We present a method to diagnose the possible influence of unmeasured confounders U on the estimated effect of X on Y, and we propose several approaches to robust estimation. The idea is to use space and time as proxy measures for the unmeasured factors U. We start with the time series case, where X and Y are continuous variables observed at equally spaced times, and assume a linear model. We define matching estimators b(u) that correspond to pairs of observations with a specific lag u. Controlling for a smooth function of time, S_t, using a kernel estimator is roughly equivalent to estimating the association with a linear combination of the b(u), with weights that involve two components: the assumptions about the smoothness of S_t and the normalized variogram of the X process. When an unmeasured confounder U exists, but the model otherwise correctly controls for measured confounders, excess variation in the b(u) is evidence of confounding by U. We use the plot of b(u) versus lag u, the lagged-estimator plot (LEP), to diagnose the influence of U on the effect of X on Y. We use appropriate linear combinations of the b(u), or extrapolate to b(0), to obtain novel estimators that are more robust to the influence of a smooth U. The methods are extended to time series log-linear models and to spatial analyses. The LEP gives a direct view of the magnitude of the estimators at each lag u and provides evidence when a model does not adequately describe the data.
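A minimal sketch of the matching estimators, assuming the simple paired least-squares form b(u) = Σ(Y_t − Y_{t−u})(X_t − X_{t−u}) / Σ(X_t − X_{t−u})²; the paper's exact estimator and any weighting may differ:

```python
import numpy as np

def lagged_estimators(x, y, max_lag):
    """For each lag u = 1..max_lag, regress outcome differences on exposure
    differences over all pairs separated by u. Plotting b(u) versus u gives
    the lagged-estimator plot (LEP)."""
    b = np.empty(max_lag)
    for u in range(1, max_lag + 1):
        dx = x[u:] - x[:-u]
        dy = y[u:] - y[:-u]
        b[u - 1] = np.sum(dx * dy) / np.sum(dx ** 2)
    return b

# Under no confounding, b(u) should be roughly flat in u; systematic drift
# with lag suggests confounding by a smooth unmeasured process U.
```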