979 results for variational mean-field method


Relevance:

30.00%

Publisher:

Abstract:

Non-alcoholic fatty liver disease (NAFLD) is an emerging health concern in both the developed and the developing world, encompassing conditions that range from simple steatosis to non-alcoholic steatohepatitis (NASH), cirrhosis and liver cancer. The incidence and prevalence of this disease are increasing with socioeconomic transition and the shift to harmful diets. Currently, the gold-standard method for NAFLD diagnosis is liver biopsy, despite its complications and limited accuracy due to sampling error. Furthermore, the pathogenesis of NAFLD is not fully understood, but it is well known that obesity, diabetes and metabolic derangements play a major role in disease development and progression. In addition, the gut microbiome and the host genetic and epigenetic background could explain considerable interindividual variability. The recognition that epigenetics, heritable events not caused by changes in DNA sequence, contributes to the development of disease has been a revolution in recent years. Evidence is accumulating of the important role of epigenetics in NAFLD pathogenesis and in the genesis of NASH. Histone modifications, changes in DNA methylation and aberrant microRNA profiles could drive the development of NAFLD and its transition to clinically relevant states. The PNPLA3 GG genotype has been associated with more progressive disease, and epigenetics could modulate this effect. The impact of epigenetics on NAFLD progression may point to therapeutic targets as well as future non-invasive methods for the diagnosis and staging of NAFLD.

Relevance:

30.00%

Publisher:

Abstract:

There is almost no case in exploration geology where the studied data do not include below-detection-limit and/or zero values, and since most geological data follow lognormal distributions, these "zero data" represent a mathematical challenge for interpretation.

We need to start by recognizing that there are zero values in geology. For example, the amount of quartz in a foyaite (nepheline syenite) is zero, since quartz cannot coexist with nepheline. Another common essential zero is a North azimuth, although we can always replace that zero with the value 360°. These are known as "essential zeros"; but what can we do with "rounded zeros", which result from values below the detection limit of the equipment?

Amalgamation, e.g. adding Na2O and K2O as total alkalis, is one solution, but sometimes we need to differentiate between a sodic and a potassic alteration. Pre-classification into groups requires good knowledge of the distribution of the data and of the geochemical characteristics of the groups, which is not always available. Setting the zero values equal to the detection limit of the equipment used will generate spurious distributions, especially in ternary diagrams. The same occurs if we replace the zero values by a small amount using non-parametric or parametric techniques (imputation).

The method we propose takes into consideration the well-known relationships between some elements. For example, in copper porphyry deposits there is always a good direct correlation between copper and molybdenum values, but while copper will always be above the detection limit, many of the molybdenum values will be "rounded zeros". So we take the lower quartile of the real molybdenum values, establish a regression equation with copper, and then estimate the "rounded" zero values of molybdenum from their corresponding copper values, as sketched below. The method can be applied to any type of data, provided we first establish their correlation dependency. One of the main advantages of this method is that we do not obtain a fixed value for the "rounded zeros", but one that depends on the value of the other variable.

Keywords: compositional data analysis, treatment of zeros, essential zeros, rounded zeros, correlation dependency
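A minimal sketch of this regression-based replacement, assuming a pandas DataFrame with columns `Cu` and `Mo` in which non-detects are recorded as 0; the column names, the log-log regression form (motivated by the lognormal behaviour noted above) and the use of scipy's `linregress` are illustrative assumptions:

```python
import numpy as np
import pandas as pd
from scipy.stats import linregress

def impute_rounded_zeros(df, x="Cu", y="Mo"):
    """Estimate below-detection ("rounded zero") y values from a correlated
    element x: fit a regression on the lower quartile of the detected y
    values, then predict each zero from its corresponding x value."""
    detected = df[df[y] > 0]
    # lower quartile of the real (detected) y values
    lq = detected[detected[y] <= detected[y].quantile(0.25)]
    fit = linregress(np.log(lq[x]), np.log(lq[y]))  # log-log regression
    out = df.copy()
    zeros = out[y] == 0
    out.loc[zeros, y] = np.exp(fit.intercept + fit.slope * np.log(out.loc[zeros, x]))
    return out
```

Each imputed value thus varies with the paired copper value instead of being a single fixed replacement constant.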

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we propose a new paradigm to carry out the registration task with a dense deformation field derived from the optical flow model and the active contour method. The proposed framework merges different tasks such as segmentation, regularization, incorporation of prior knowledge and registration into a single framework. The active contour model is at the core of our framework, even if it is used in a different way than in standard approaches. Indeed, active contours are a well-known technique for image segmentation. This technique consists in finding the curve which minimizes an energy functional designed to be minimal when the curve has reached the object contours. That way, we get accurate and smooth segmentation results. So far, the active contour model has been used to segment objects lying in images from boundary-based, region-based or shape-based information. Our registration technique will benefit from all these families of active contours to determine a dense deformation field defined on the whole image. A well-suited application of our model is atlas registration in medical imaging, which consists in automatically delineating anatomical structures. We present results on 2D synthetic images to show the performance of our non-rigid deformation field based on a natural registration term. We also present registration results on real 3D medical data with a large space-occupying tumor substantially deforming surrounding structures, which constitutes a highly challenging problem.
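For reference, one standard boundary-based form of such an energy functional is the geodesic active contour model (the abstract does not specify which variant is used, so this is an illustrative choice):

\[
E(C) = \int_0^1 g\bigl(\lvert\nabla I(C(s))\rvert\bigr)\,\lvert C'(s)\rvert\,ds,
\qquad g(r) = \frac{1}{1 + r^2},
\]

where \(I\) is the image and \(g\) is a decreasing edge-stopping function, so that \(E\) is minimal when the curve \(C\) lies along strong image gradients, i.e. the object contours.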

Relevance:

30.00%

Publisher:

Abstract:

Diagnosis of several neurological disorders is based on the detection of typical pathological patterns in the electroencephalogram (EEG). This is a time-consuming task requiring significant training and experience. Automatic detection of these EEG patterns would greatly assist in quantitative analysis and interpretation. We present a method that allows automatic detection of epileptiform events and their discrimination from eye blinks, based on features derived using a novel application of independent component analysis. The algorithm was trained and cross-validated using seven EEGs with epileptiform activity. For epileptiform events with compensation for eye blinks, the sensitivity was 65 +/- 22% at a specificity of 86 +/- 7% (mean +/- SD). With feature extraction by PCA or classification of raw data, specificity was reduced to 76% and 74%, respectively, at the same sensitivity. On exactly the same data, the commercially available software Reveal had a maximum sensitivity of 30% and concurrent specificity of 77%. Our algorithm performed well at detecting epileptiform events in this preliminary test and offers a flexible tool that is intended to be generalized to the simultaneous classification of many waveforms in the EEG.
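A minimal sketch of the general ICA-feature pipeline (purely illustrative: synthetic data stands in for real EEG, the unmixing uses scikit-learn's FastICA, and the per-epoch peak-amplitude feature is an assumption, not the paper's actual feature set):

```python
import numpy as np
from sklearn.decomposition import FastICA

# Synthetic stand-in for multichannel EEG: non-Gaussian sources mixed linearly.
rng = np.random.default_rng(0)
n_samples, n_channels = 5000, 8
true_sources = rng.laplace(size=(n_samples, n_channels))
eeg = true_sources @ rng.standard_normal((n_channels, n_channels))

# Unmix into independent components; their activations serve as features.
ica = FastICA(n_components=n_channels, random_state=0)
activations = ica.fit_transform(eeg)          # shape (n_samples, n_components)

# One simple feature per epoch and component: peak absolute activation.
# A trained classifier (not shown) would then separate epileptiform events
# from eye blinks in this feature space.
epoch_len = 250
n_epochs = n_samples // epoch_len
epochs = activations[: n_epochs * epoch_len].reshape(n_epochs, epoch_len, -1)
features = np.abs(epochs).max(axis=1)         # shape (n_epochs, n_components)
```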

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: Laparoscopic techniques have been proposed as an alternative to open surgery for the treatment of peptic ulcer perforation. They provide better postoperative comfort and avoid parietal complications, but leakage occurs in 5% of cases. We describe a new method combining laparoscopy and endoluminal endoscopy, designed to ensure complete closure of the perforation. METHODS: Six patients with anterior ulcer perforations (4 duodenal, 2 gastric) underwent concomitant laparoscopy and endoluminal endoscopy with closure of the orifice by an omental plug drawn into the digestive tract. RESULTS: All perforations were sealed. The mean operating time was 72 minutes. The mean hospital stay was 5.5 days. There was no morbidity and no mortality. At the 30-day evaluation, all ulcers but one (due to Helicobacter pylori persistence) were healed. CONCLUSIONS: This method is safe and effective. Its advantages compared with open surgery or laparoscopic patching, as well as its cost-effectiveness, should be studied in prospective randomized trials.

Relevance:

30.00%

Publisher:

Abstract:

An analytic method to evaluate nuclear contributions to the electrical properties of polyatomic molecules is presented. Such contributions control the changes induced by an electric field on the equilibrium geometry (nuclear relaxation contribution) and vibrational motion (vibrational contribution) of a molecular system. Expressions to compute the nuclear contributions have been derived from a power series expansion of the potential energy. These contributions to the electrical properties are given in terms of energy derivatives with respect to normal coordinates, the electric field intensity, or both. Only one calculation of such derivatives at the field-free equilibrium geometry is required. To show the efficiency of the analytic evaluation of electrical properties (the so-called AEEP method), results for calculations on water and pyridine at the SCF/TZ2P and MP2/TZ2P levels of theory are reported. The results obtained are compared with previous theoretical calculations and with experimental values.
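As a concrete illustration (a standard textbook expression, not quoted from the paper itself): in the double-harmonic approximation the nuclear relaxation contribution to the static polarizability takes the familiar sum-over-modes form

\[
\alpha^{\mathrm{nr}}_{\alpha\beta} \;=\; \sum_{i=1}^{3N-6}
\frac{1}{\omega_i^2}
\left(\frac{\partial \mu_\alpha}{\partial Q_i}\right)_0
\left(\frac{\partial \mu_\beta}{\partial Q_i}\right)_0 ,
\]

where the dipole derivatives with respect to the normal coordinates \(Q_i\) are evaluated at the field-free equilibrium geometry, consistent with the single-point requirement stated above.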

Relevance:

30.00%

Publisher:

Abstract:

A select-divide-and-conquer variational method to approximate configuration interaction (CI) is presented. Given an orthonormal set made up of occupied orbitals (Hartree-Fock or similar) and suitable correlation orbitals (natural or localized orbitals), a large N-electron target space S is split into subspaces S0, S1, S2, ..., SR. S0, of dimension d0, contains all configurations K with attributes (energy contributions, etc.) above thresholds T0 = {T0^egy, T0^etc.}; the CI coefficients in S0 always remain free to vary. S1 accommodates configurations K with attributes above T1 ≤ T0. An eigenproblem of dimension d0 + d1 for S0 + S1 is solved first, after which the last d1 rows and columns are contracted into a single row and column, thus freezing the last d1 CI coefficients thereafter. The process is repeated with successive Sj (j ≥ 2) chosen so that the corresponding CI matrices fit in random access memory (RAM). Davidson's eigensolver is used R times. The final energy eigenvalue (lowest or excited one) is always above the corresponding exact eigenvalue in S. Threshold values {Tj; j = 0, 1, 2, ..., R} regulate accuracy; for large-dimensional S, high accuracy requires S0 + S1 to be solved outside RAM. From there on, however, usually only a few Davidson iterations in RAM are needed for each step, so that Hamiltonian matrix-element evaluation becomes rate determining. One microhartree accuracy is achieved for an eigenproblem of order 24 × 10^6, involving 1.2 × 10^12 nonzero matrix elements and 8.4 × 10^9 Slater determinants.
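A toy sketch of the solve-contract-and-freeze loop on a small dense model Hamiltonian (illustrative assumptions: numpy's dense eigensolver stands in for Davidson's method, the matrix lives in RAM, and the partition `dims` is supplied by hand rather than by attribute thresholds):

```python
import numpy as np

def sdc_ci_energy(H, dims):
    """Toy select-divide-and-conquer CI on a dense symmetric matrix H.
    dims = [d0, d1, ..., dR] partitions the configuration space; the S0
    coefficients stay free, while each solved Sj block is contracted into
    a single frozen vector before the next block is brought in."""
    n = H.shape[0]
    assert sum(dims) == n
    d0 = dims[0]
    free = np.eye(n)[:, :d0]          # S0 configurations (always free)
    contracted = []                   # one frozen vector per processed Sj
    start, energy = d0, None
    for dj in dims[1:]:
        block = np.eye(n)[:, start:start + dj]        # configurations of Sj
        B = np.hstack([free] + contracted + [block])  # current variational basis
        w, V = np.linalg.eigh(B.T @ H @ B)            # small Rayleigh-Ritz problem
        energy, c = w[0], V[:, 0]
        v = block @ c[-dj:]           # freeze the last dj CI coefficients
        contracted.append((v / np.linalg.norm(v))[:, None])
        start += dj
    return energy                     # upper bound to the exact eigenvalue in S

# Usage: the stepwise energy always lies above the exact lowest eigenvalue.
rng = np.random.default_rng(1)
A = rng.standard_normal((60, 60)); H = (A + A.T) / 2
print(sdc_ci_energy(H, [10, 25, 25]), np.linalg.eigvalsh(H)[0])
```

Because every step is a Rayleigh-Ritz calculation in a subspace of S, the variational upper-bound property quoted in the abstract holds at each stage.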

Relevance:

30.00%

Publisher:

Abstract:

An analytical set of field-induced coordinates is defined and used to show that the number of vibrational degrees of freedom required to completely describe nuclear relaxation polarizabilities and hyperpolarizabilities is reduced from 3N-6 to a relatively small number. As this number does not depend upon the size of the molecule, the process provides computational advantages. A method is provided to separate anharmonic contributions from harmonic contributions, as well as effective mechanical anharmonicity from electrical anharmonicity. The procedures are illustrated by Hartree-Fock calculations, indicating that anharmonicity can be very important.
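To make the idea concrete (an illustrative reconstruction, not quoted from the paper): within the harmonic approximation, switching on a static field \(\mathbf{F}\) shifts the equilibrium value of each normal coordinate by

\[
Q_i^{\mathrm{eq}}(\mathbf{F}) - Q_i^{\mathrm{eq}}(0)
\;=\; \sum_\alpha \frac{1}{\omega_i^2}
\left(\frac{\partial \mu_\alpha}{\partial Q_i}\right)_0 F_\alpha + \mathcal{O}(F^2),
\]

so the first-order field-induced coordinates are the few linear combinations of normal coordinates along these displacement vectors, one per field direction, which is why the required number of coordinates does not grow with molecular size.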

Relevance:

30.00%

Publisher:

Abstract:

We have implemented our new procedure for computing Franck-Condon factors utilizing vibrational configuration interaction based on a vibrational self-consistent field reference. Both Duschinsky rotations and anharmonic three-mode coupling are taken into account. Simulations of the first ionization band of ClO2 and C4H4O (furan) using up to quadruple excitations in treating anharmonicity are reported and analyzed. A developer version of the MIDASCPP code was employed to obtain the required anharmonic vibrational integrals and transition frequencies.
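As background (a minimal one-dimensional sketch, not the paper's multimode VCI machinery): for a pair of displaced harmonic oscillators, a Franck-Condon factor is simply the squared overlap of two vibrational eigenfunctions, which the toy code below evaluates by quadrature in dimensionless units (hbar = m = 1):

```python
import numpy as np
from math import factorial, pi, sqrt
from scipy.special import eval_hermite

def ho_wavefunction(n, x, omega=1.0):
    """Harmonic oscillator eigenfunction in dimensionless units (hbar = m = 1)."""
    xi = np.sqrt(omega) * x
    norm = (omega / pi) ** 0.25 / sqrt(2.0 ** n * factorial(n))
    return norm * eval_hermite(n, xi) * np.exp(-xi ** 2 / 2.0)

def franck_condon(n, m, d, omega1=1.0, omega2=1.0):
    """|<m (displaced by d, frequency omega2) | n (frequency omega1)>|^2."""
    x = np.linspace(-20.0, 20.0, 20001)
    dx = x[1] - x[0]
    overlap = np.sum(ho_wavefunction(n, x, omega1) *
                     ho_wavefunction(m, x - d, omega2)) * dx
    return overlap ** 2

# 0-0 factor for equal frequencies: analytic value exp(-d**2/2) ~ 0.6065 for d = 1
print(franck_condon(0, 0, d=1.0))
```

Duschinsky rotation and mode-mode anharmonic coupling enter only in the multidimensional generalization of the overlap integrals.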

Relevance:

30.00%

Publisher:

Abstract:

The vibrational configuration interaction method used to obtain static vibrational (hyper)polarizabilities is extended to dynamic nonlinear optical properties in the infinite optical frequency approximation. Illustrative calculations are carried out on H2O and NH3. The former molecule is weakly anharmonic, while the latter contains a strongly anharmonic umbrella mode. The effect on vibrational (hyper)polarizabilities of various truncations of the potential energy and property surfaces involved in the calculation is examined.
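To illustrate the approximation (a standard textbook-style observation, not drawn from the paper): a typical vibrational term in a dynamic property carries resonance denominators such as

\[
\frac{1}{\omega_i^2 - \omega^2} \;\xrightarrow{\;\omega \to \infty\;}\; -\frac{1}{\omega^2} \;\to\; 0,
\]

so in the infinite optical frequency limit the terms containing optical frequencies die off and the surviving contributions can be written entirely in terms of static quantities; for instance, the pure vibrational polarizability \(\alpha^{v}(-\omega;\omega)\) vanishes in this limit.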

Relevance:

30.00%

Publisher:

Abstract:

Two common methods of accounting for electric-field-induced perturbations to molecular vibration are analyzed and compared. The first method is based on a perturbation-theoretic treatment and the second on a finite-field treatment. The relationship between the two, which is not immediately apparent, is made clear by developing an algebraic formalism for the latter. Some of the higher-order terms in this development are documented here for the first time. As well as considering vibrational dipole polarizabilities and hyperpolarizabilities, we also make mention of the vibrational Stark effect.
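For orientation (a standard relation from the finite-field literature, stated here as an illustration rather than quoted from the paper): evaluating the dipole moment at the geometry relaxed in the presence of a static field \(\mathbf{F}\) and expanding in powers of the field gives

\[
\mu_\alpha\bigl(\mathbf{F}, R_{\mathbf{F}}\bigr)
= \mu_\alpha^{0}
+ \sum_\beta \bigl(\alpha^{e}_{\alpha\beta} + \alpha^{\mathrm{nr}}_{\alpha\beta}\bigr) F_\beta
+ \tfrac{1}{2}\sum_{\beta\gamma}\bigl(\beta^{e}_{\alpha\beta\gamma} + \beta^{\mathrm{nr}}_{\alpha\beta\gamma}\bigr) F_\beta F_\gamma + \cdots,
\]

so matching coefficients order by order against the perturbation-theoretic expressions is what ties the two treatments together.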

Relevance:

30.00%

Publisher:

Abstract:

A variational method for Hamiltonian systems is analyzed. Two different variational characterizations of the frequency of nonlinear oscillations are also supplied for non-Hamiltonian systems.
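As an illustration of what such a characterization can look like (a generic Ritz-type estimate, assumed here rather than taken from the paper): for a conservative oscillator \(\ddot{x} + f(x) = 0\), inserting the trial solution \(x(t) = A\cos(\omega t)\) into the variational/harmonic-balance condition gives

\[
\omega^2(A) = \frac{\displaystyle\int_0^{2\pi} f\bigl(A\cos\theta\bigr)\,A\cos\theta \, d\theta}{\displaystyle\int_0^{2\pi} A^2\cos^2\theta \, d\theta},
\]

which for the Duffing oscillator \(f(x) = x + \varepsilon x^3\) yields the familiar amplitude-dependent frequency \(\omega^2 = 1 + \tfrac{3}{4}\varepsilon A^2\).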

Relevance:

30.00%

Publisher:

Abstract:

Rockfall propagation areas can be determined using a simple geometric rule, known as the shadow angle or energy line method, based on a simple Coulomb frictional model implemented in the CONEFALL computer program. Runout zones are estimated from a digital terrain model (DTM) and a grid file containing the cells representing potential rockfall source areas. The cells of the DTM that are lower in altitude and located within a cone centered on a rockfall source cell belong to the potential propagation area associated with that grid cell. In addition, the CONEFALL method allows estimation of the mean and maximum velocities and energies of blocks in the rockfall propagation areas. Previous studies indicate that the slope angle of the cone ranges from 27° to 37°, depending on the assumptions made, i.e. slope morphology, probability of reaching a point, maximum runout, and field observations. Different solutions based on previous work and an example of an actual rockfall event are presented here.
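A minimal sketch of the cone test on a raster DTM (not the CONEFALL program itself; the 32° cone angle is an assumed mid-range value within the 27°-37° interval quoted above, and the velocity line uses the usual energy-line kinetic-head interpretation):

```python
import numpy as np

def cone_runout(dtm, cell_size, source, phi_deg=32.0, g=9.81):
    """Energy-line (shadow angle) runout test: a DTM cell belongs to the
    propagation area if it lies below the cone of slope phi apexed at the
    source cell. Returns the runout mask and an energy-line velocity map."""
    i0, j0 = source
    ii, jj = np.indices(dtm.shape)
    dist = np.hypot(ii - i0, jj - j0) * cell_size              # horizontal distance
    cone_z = dtm[i0, j0] - dist * np.tan(np.radians(phi_deg))  # cone surface elevation
    runout = dtm <= cone_z                                     # boolean propagation mask
    # Kinetic head = height of the energy line above the ground surface.
    velocity = np.sqrt(2.0 * g * np.clip(cone_z - dtm, 0.0, None))
    return runout, velocity
```

For multiple source cells, the masks are simply combined with a logical OR, one cone per source.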

Relevance:

30.00%

Publisher:

Abstract:

AIM: To prospectively study the intraocular pressure (IOP) lowering effect and safety of the new method of very deep sclerectomy with collagen implant (VDSCI) compared with standard deep sclerectomy with collagen implant (DSCI). METHODS: The trial involved 50 eyes of 48 patients with medically uncontrolled primary and secondary open-angle glaucoma, randomized to undergo either the VDSCI procedure (25 eyes) or the DSCI procedure (25 eyes). Follow-up examinations were performed before surgery and after surgery at day 1, week 1, and months 1, 2, 3, 6, 9, 12, 18, and 24. Ultrasound biomicroscopy was performed at 3 and 12 months. RESULTS: The mean follow-up period was 18.6+/-5.9 (VDSCI) and 18.9+/-3.6 (DSCI) months (P=NS). Mean preoperative IOP was 22.4+/-7.4 mm Hg for VDSCI and 20.4+/-4.4 mm Hg for DSCI eyes (P=NS). Mean postoperative IOP was 3.9+/-2.3 (VDSCI) and 6.3+/-4.3 (DSCI) mm Hg (P<0.05) at day 1, and 12.2+/-3.9 (VDSCI) and 13.3+/-3.4 (DSCI) mm Hg (P=NS) at month 24. At the last visit, the complete success rate (defined as an IOP of < or =18 mm Hg and a drop of at least 20%, achieved without medication) was 57% in VDSCI and 62% in DSCI eyes (P=NS). Ultrasound biomicroscopy at 12 months showed a mean volume of 3.9+/-4.2 mm3 (VDSCI) and 6.8+/-7.5 mm3 (DSCI) (P=0.426) for the subconjunctival filtering bleb, and 5.2+/-3.6 mm3 (VDSCI) and 5.4+/-2.9 mm3 (DSCI) (P=0.902) for the intrascleral space. CONCLUSIONS: Very deep sclerectomy appears to provide good, stable control of IOP at 2 years of follow-up with few postoperative complications, similar to standard deep sclerectomy with collagen implant.

Relevance:

30.00%

Publisher:

Abstract:

We establish the validity of subsampling confidence intervals for the mean of a dependent series with heavy-tailed marginal distributions. Using point process theory, we study both linear and nonlinear GARCH-like time series models. We propose a data-dependent method for optimal block size selection and investigate its performance by means of a simulation study.
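A minimal sketch of a subsampling interval plus a simple block-size selector (assumptions flagged: a sqrt(n) convergence rate, which holds under finite variance but must itself be estimated in the heavy-tailed setting studied above, and a minimum-volatility-style block rule standing in for the authors' data-dependent method):

```python
import numpy as np

def subsampling_ci(x, b, alpha=0.05):
    """Subsampling confidence interval for the mean of a dependent series
    (Politis-Romano style), assuming a sqrt(n) rate for simplicity."""
    x = np.asarray(x)
    n, mean = len(x), np.mean(x)
    blocks = np.lib.stride_tricks.sliding_window_view(x, b)  # overlapping blocks
    stats = np.sqrt(b) * (blocks.mean(axis=1) - mean)        # subsample statistics
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return mean - hi / np.sqrt(n), mean - lo / np.sqrt(n)

def choose_block(x, grid):
    """Pick the block size where the interval endpoints are most stable
    across neighboring block sizes (a minimum-volatility heuristic)."""
    cis = np.array([subsampling_ci(x, b) for b in grid])
    vol = np.abs(np.diff(cis, axis=0)).sum(axis=1)
    return grid[1:][np.argmin(vol)]
```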