12 results for Prescribed mean-curvature problem
in Aston University Research Archive
Abstract:
A formalism for describing the dynamics of Genetic Algorithms (GAs) using methods from statistical mechanics is applied to the problem of generalization in a perceptron with binary weights. The dynamics are solved for the case where a new batch of training patterns is presented to each population member each generation, which considerably simplifies the calculation. The theory is shown to agree closely with simulations of a real GA averaged over many runs, accurately predicting the mean best solution found. For weak selection and large problem size the difference equations describing the dynamics can be expressed analytically, and we find that the effects of noise due to the finite size of each training batch can be removed by increasing the population size appropriately. If this population resizing is used, one can deduce the most computationally efficient size of training batch each generation. For independent patterns this choice also gives the minimum total number of training patterns used. Although using independent patterns is a very inefficient use of training patterns in general, this work may also prove useful for determining the optimum batch size in the case where patterns are recycled.
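The setting described above, a generation-based GA in which every population member is scored each generation on a fresh batch of random patterns labelled by a binary-weight teacher, can be sketched in a few lines. This is a minimal toy illustration, not the statistical-mechanics formalism of the paper: the problem size, truncation selection, and mutation rate below are arbitrary illustrative choices.

```python
import random

def run_ga(n=20, pop_size=30, batch=40, gens=25, mut=0.02, seed=0):
    """Toy GA: binary-weight students scored each generation on a FRESH
    batch of random patterns labelled by a binary-weight teacher."""
    rng = random.Random(seed)
    teacher = [rng.choice([-1, 1]) for _ in range(n)]
    pop = [[rng.choice([-1, 1]) for _ in range(n)] for _ in range(pop_size)]
    sign = lambda s: 1 if s >= 0 else -1

    def fitness(w, patterns):
        # fraction of the batch on which w agrees with the teacher
        agree = sum(
            sign(sum(wi * xi for wi, xi in zip(w, x)))
            == sign(sum(ti * xi for ti, xi in zip(teacher, x)))
            for x in patterns
        )
        return agree / len(patterns)

    best = 0.0
    for _ in range(gens):
        # a NEW batch of independent training patterns every generation
        patterns = [[rng.choice([-1, 1]) for _ in range(n)] for _ in range(batch)]
        pop.sort(key=lambda w: fitness(w, patterns), reverse=True)
        best = max(best, fitness(pop[0], patterns))
        # truncation selection: keep the top half, refill with mutated copies
        parents = pop[: pop_size // 2]
        children = [[-v if rng.random() < mut else v for v in p] for p in parents]
        pop = parents + children
    return best
```

Because each generation sees an independent batch, batch-size noise enters the selection step directly, which is exactly the effect the abstract proposes to counter by resizing the population.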
Abstract:
A major problem in modern probabilistic modeling is the huge computational complexity involved in typical calculations with multivariate probability distributions when the number of random variables is large. Because exact computations are infeasible in such cases and Monte Carlo sampling techniques may reach their limits, there is a need for methods that allow for efficient approximate computations. One of the simplest approximations is based on the mean field method, which has a long history in statistical physics. The method is widely used, particularly in the growing field of graphical models. Researchers from disciplines such as statistical physics, computer science, and mathematical statistics are studying ways to improve this and related methods and are exploring novel application areas. Leading approaches include the variational approach, which goes beyond factorizable distributions to achieve systematic improvements; the TAP (Thouless-Anderson-Palmer) approach, which incorporates correlations by including effective reaction terms in the mean field theory; and the more general methods of graphical models. Bringing together ideas and techniques from these diverse disciplines, this book covers the theoretical foundations of advanced mean field methods, explores the relation between the different approaches, examines the quality of the approximation obtained, and demonstrates their application to various areas of probabilistic modeling.
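As a concrete baseline for the methods surveyed, the naive mean field approximation for an Ising-type model replaces the intractable average over configurations by self-consistent magnetisations m_i = tanh(β(Σ_j J_ij m_j + h_i)). The damped fixed-point iteration below is a minimal sketch of that simplest case (the TAP approach mentioned above would add an Onsager reaction term to the effective field); the coupling values and damping factor are illustrative.

```python
import math

def naive_mean_field(J, h, beta=1.0, damping=0.5, iters=200, tol=1e-8):
    """Damped fixed-point iteration for m_i = tanh(beta*(sum_j J_ij m_j + h_i))."""
    n = len(h)
    m = [0.0] * n
    for _ in range(iters):
        new = [math.tanh(beta * (h[i] + sum(J[i][j] * m[j]
                                            for j in range(n) if j != i)))
               for i in range(n)]
        if max(abs(a - b) for a, b in zip(new, m)) < tol:
            return new
        # damping stabilises the iteration when couplings are strong
        m = [damping * a + (1.0 - damping) * b for a, b in zip(new, m)]
    return m
```

The factorised distribution implied by these magnetisations is the starting point that variational and TAP corrections then systematically improve.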
Abstract:
Measurements (autokeratometry, A-scan ultrasonography and video ophthalmophakometry) of ocular surface radii, axial separations and alignment were made in the horizontal meridian of nine emmetropes (aged 20-38 years) with relaxed (cycloplegia) and active accommodation (mean ± 95% confidence interval: 3.7 ± 1.1 D). The anterior chamber depth (-1.5 ± 0.3 D) and both crystalline lens surfaces (front 3.1 ± 0.8 D; rear 2.1 ± 0.6 D) contributed to dioptric vergence changes that accompany accommodation. Accommodation did not alter ocular surface alignment. Ocular misalignment in relaxed eyes is mainly because of eye rotation (5.7 ± 1.6° temporally) with small amounts of lens tilt (0.2 ± 0.8° temporally) and decentration (0.1 ± 0.1 mm nasally) but these results must be viewed with caution as we did not account for corneal asymmetry. Comparison of calculated and empirically derived coefficients (upon which ocular surface alignment calculations depend) revealed that negligible inherent errors arose from neglect of ocular surface asphericity, lens gradient refractive index properties, surface astigmatism, effects of pupil size and centration, assumed eye rotation axis position and use of linear equations for analysing Purkinje image shifts. © 2004 The College of Optometrists.
Abstract:
Purpose. The purpose of this study was to investigate the influence of corneal topography and thickness on intraocular pressure (IOP) and pulse amplitude (PA) as measured using the Ocular Blood Flow Analyzer (OBFA) pneumatonometer (Paradigm Medical Industries, Utah, USA). Methods. 47 university students volunteered for this cross-sectional study: mean age 20.4 yrs, range 18 to 28 yrs; 23 male, 24 female. Only the measurements from the right eye of each participant were used. Central corneal thickness and mean corneal radius were measured using Scheimpflug biometry and corneal topographic imaging respectively. IOP and PA measurements were made with the OBFA pneumatonometer. Axial length was measured using A-scan ultrasound, due to its known correlation with these corneal parameters. Stepwise multiple regression analysis was used to identify those components that contributed significant variance to the independent variables of IOP and PA. Results. The mean IOP and PA measurements were 13.1 (SD 3.3) mmHg and 3.0 (SD 1.2) mmHg respectively. IOP measurements made with the OBFA pneumatonometer correlated significantly with central corneal thickness (r = +0.374, p = 0.010), such that a 10 µm change in CCT was equivalent to a 0.30 mmHg change in measured IOP. PA measurements correlated significantly with axial length (part correlation = -0.651, p < 0.001) and mean corneal radius (part correlation = +0.459, p < 0.001) but not corneal thickness. Conclusions. IOP measurements taken with the OBFA pneumatonometer are correlated with corneal thickness, but not axial length or corneal curvature. Conversely, PA measurements are unaffected by corneal thickness, but correlated with axial length and corneal radius. These parameters should be taken into consideration when interpreting IOP and PA measurements made with the OBFA pneumatonometer.
Abstract:
Public values are moving from a research concern to policy discourse and management practice. There are, though, different readings of what public values actually mean. Reflection suggests two distinct strands of thinking: a generative strand that sees public value emerging from processes of public debate; and an institutional interpretation that views public values as the attributes of government producers. Neither perspective seems to offer a persuasive account of how the public gains from strengthened public values. Key propositions on values are generated from comparison of influential texts. A provisional framework is presented of the values base of public institutions and the loosely coupled public propositions flowing from these values. Value propositions issue from different governing contexts, which are grouped into policy frames that then compete with other problem frames for citizens’ cognitive resources. Vital democratic commitments to pluralism require public values to be distributed in competition with other, respected, frames.
Abstract:
The concept of plagiarism is not uncommonly associated with the concept of intellectual property, for both historical and legal reasons: the approach to the ownership of ‘moral’, nonmaterial goods has evolved into a right to individual property, and consequently a need arose to establish a legal framework to cope with the infringement of those rights. The solution to plagiarism therefore falls most often under two categories: ethical and legal. On the ethical side, education and intercultural studies have addressed plagiarism critically, not only as a means to improve academic ethics policies (PlagiarismAdvice.org, 2008), but mainly to demonstrate that if anything the concept of plagiarism is far from being universal (Howard & Robillard, 2008). Howard (1995) and Scollon (1994, 1995) argued, albeit differently, and Angèlil-Carter (2000) and Pecorari (2008) later emphasised, that the concept of plagiarism cannot be studied on the grounds that one definition is clearly understandable by everyone. Scollon (1994, 1995), for example, claimed that authorship attribution is a particular problem in non-native writing in English, and so did Pecorari (2008) in her comprehensive analysis of academic plagiarism. If among higher education students plagiarism is often a problem of literacy, with prior, conflicting social discourses that may interfere with academic discourse, as Angèlil-Carter (2000) demonstrates, we then have to aver that a distinction should be made between intentional and inadvertent plagiarism: plagiarism should be prosecuted when intentional, but if it is part of the learning process and results from the plagiarist’s unfamiliarity with the text or topic it should be considered ‘positive plagiarism’ (Howard, 1995: 796) and hence not an offense. The intention behind instances of plagiarism therefore determines the nature of the disciplinary action adopted.
Unfortunately, in order to demonstrate the intention to deceive and charge students with accusations of plagiarism, teachers necessarily have to position themselves as ‘plagiarism police’, although it has been argued otherwise (Robillard, 2008). Practice demonstrates that in their daily activities teachers find themselves required to have a command of investigative skills and tools that they most often lack. We thus claim that the ‘intention to deceive’ cannot always be dissociated from plagiarism as a legal issue, even if Garner (2009) asserts that generally plagiarism is immoral but not illegal, and Goldstein (2003) draws the same distinction. However, these claims, and the claim that only cases of copyright infringement tend to go to court, have recently been challenged, mainly by forensic linguists, who have been actively involved in cases of plagiarism. Turell (2008), for instance, demonstrated that plagiarism is often associated with an illegal appropriation of ideas. Previously, she (Turell, 2004) had demonstrated, by comparison of four translations of Shakespeare’s Julius Caesar into Spanish, that linguistic evidence can demonstrate instances of plagiarism. This challenge is also reinforced by practice in international organisations, such as the IEEE, for which plagiarism potentially has ‘severe ethical and legal consequences’ (IEEE, 2006: 57). What plagiarism definitions used by publishers and organisations have in common – and which academia usually lacks – is their focus on its legal nature. We speculate that this is due to the relation they intentionally establish with copyright laws, whereas in education the focus tends to shift from the legal to the ethical aspects. However, the number of plagiarism cases taken to court is very small, and jurisprudence is still being developed on the topic.
In countries within the Civil Law tradition, Turell (2008) claims, (forensic) linguists are seldom called upon as expert witnesses in cases of plagiarism, either because plagiarists are rarely taken to court or because there is little tradition of accepting linguistic evidence. In spite of the investigative and evidential potential of forensic linguistics to demonstrate the plagiarist’s intention (or lack thereof), this potential is restricted by the ability to identify a text as suspect of plagiarism. In an era of such massive textual production, ‘policing’ plagiarism thus becomes an extraordinarily difficult task without the assistance of plagiarism detection systems. Although plagiarism detection has attracted the attention of computer engineers and software developers for years, much research is still needed. Given the investigative nature of academic plagiarism, plagiarism detection must of necessity consider not only concepts of education and computational linguistics, but also forensic linguistics, especially if it is to counter claims of being a ‘simplistic response’ (Robillard & Howard, 2008). In this paper, we use a corpus of essays written by university students who were accused of plagiarism to demonstrate that a forensic linguistic analysis of improper paraphrasing in suspect texts has the potential to identify and provide evidence of intention. A linguistic analysis of the corpus texts shows that the plagiarist acts on the paradigmatic axis to replace relevant lexical items with a related word from the same semantic field, i.e. a synonym, a subordinate, a superordinate, etc. In other words, relevant lexical items were replaced with related, but not identical, ones. Additionally, the analysis demonstrates that the word order is often changed intentionally to disguise the borrowing. On the other hand, the linguistic analysis of linking and explanatory verbs (i.e. referencing verbs) and prepositions shows that these have the potential to discriminate between instances of ‘patchwriting’ and instances of plagiarism. This research demonstrates that when the plagiarism is inadvertent, the referencing verbs are borrowed from the original in an attempt to construct the new text cohesively, whereas when it is intentional, the plagiarist makes an effort to prevent the reader from identifying the text as plagiarism. In some of these cases, the referencing elements prove able to identify direct quotations and thus ‘betray’ and denounce plagiarism. Finally, we demonstrate that a forensic linguistic analysis of these verbs is critical to allow detection software to identify them as proper paraphrasing and not – mistakenly and simplistically – as plagiarism.
Abstract:
We investigate an application of the method of fundamental solutions (MFS) to the one-dimensional parabolic inverse Cauchy–Stefan problem, where boundary data and the initial condition are to be determined from the Cauchy data prescribed on a given moving interface. In [B.T. Johansson, D. Lesnic, and T. Reeve, A method of fundamental solutions for the one-dimensional inverse Stefan problem, Appl. Math. Model. 35 (2011), pp. 4367–4378], the inverse Stefan problem was considered, where only the boundary data is to be reconstructed on the fixed boundary. We extend the MFS proposed in Johansson et al. (2011) and show that the initial condition can also be simultaneously recovered, i.e. the MFS is appropriate for the inverse Cauchy–Stefan problem. Theoretical properties of the method, as well as numerical investigations, are included, showing that accurate results can be efficiently obtained with small computational cost.
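For readers unfamiliar with the MFS in the parabolic setting, the idea is to approximate the solution by a linear combination of heat-equation fundamental solutions with source points placed outside the space–time domain, fixing the coefficients by collocating the known data. The sketch below fits manufactured Cauchy data from the exact solution u(x,t) = e^(-t) sin x on a direct problem; the source placement, point counts, and plain least-squares solve are illustrative choices, not the regularised scheme of the paper.

```python
import numpy as np

def F(x, t):
    """Fundamental solution of u_t = u_xx (used only where t > 0)."""
    return np.exp(-x * x / (4.0 * t)) / np.sqrt(4.0 * np.pi * t)

def mfs_fit():
    # Manufactured exact solution supplying the data: u = exp(-t) sin(x)
    u = lambda x, t: np.exp(-t) * np.sin(x)
    T, m = 1.0, 20
    # Source points outside the space-time domain [0,1] x [0,T]:
    # two vertical lines at x = -1 and x = 2, placed at negative times
    taus = np.linspace(-2.0, -0.05, m)
    src = [(-1.0, tau) for tau in taus] + [(2.0, tau) for tau in taus]
    # Collocation on the parabolic boundary: initial line and both sides
    col = [(x, 0.0) for x in np.linspace(0.0, 1.0, 20)]
    col += [(0.0, t) for t in np.linspace(0.05, T, 20)]
    col += [(1.0, t) for t in np.linspace(0.05, T, 20)]
    A = np.array([[F(x - y, t - tau) for (y, tau) in src] for (x, t) in col])
    b = np.array([u(x, t) for (x, t) in col])
    c, *_ = np.linalg.lstsq(A, b, rcond=None)
    # Evaluate the MFS sum at an interior point against the exact solution
    x0, t0 = 0.5, 0.5
    approx = sum(ci * F(x0 - y, t0 - tau) for ci, (y, tau) in zip(c, src))
    return abs(approx - u(x0, t0))
```

Since each basis function solves the heat equation exactly inside the domain, the interior error is controlled by how well the collocated data is fitted; in the inverse Cauchy–Stefan setting the same expansion is instead collocated on the moving interface.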
Abstract:
We propose two algorithms involving the relaxation of either the given Dirichlet data or the prescribed Neumann data on the over-specified boundary, in the case of the alternating iterative algorithm of Kozlov et al. [12] applied to Cauchy problems for the modified Helmholtz equation. A convergence proof of these relaxation methods is given, along with a stopping criterion. The numerical results obtained using these procedures, in conjunction with the boundary element method (BEM), show the numerical stability, convergence, consistency and computational efficiency of the proposed methods.
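The role of the relaxation parameter can be seen on a toy scalar fixed-point map. This is not the authors' BEM-based algorithm: the map G below merely stands in for one sweep of an alternating procedure, chosen so that the plain iteration diverges while an under-relaxed update converges to the same fixed point.

```python
def iterate(G, eta0, steps, theta=1.0):
    """Fixed-point iteration with relaxation:
       eta <- (1 - theta) * eta + theta * G(eta); theta = 1 is the plain scheme."""
    eta = eta0
    for _ in range(steps):
        eta = (1.0 - theta) * eta + theta * G(eta)
    return eta

# Toy update map standing in for one sweep of the alternating algorithm.
# Its fixed point is eta = 1, but the plain iteration diverges (|slope| = 1.5).
G = lambda eta: -1.5 * eta + 2.5
```

With theta = 0.5 the relaxed map has slope -0.25 and contracts; this is the sense in which relaxing the Dirichlet or Neumann update can restore and accelerate convergence, with the stopping criterion deciding when to halt the iteration.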
Abstract:
A Cauchy problem for general elliptic second-order linear partial differential equations, in which the Dirichlet data in H½(Γ1 ∪ Γ3) is assumed available on a larger part of the boundary Γ of the bounded domain Ω than the boundary portion Γ1 on which the Neumann data is prescribed, is investigated using a conjugate gradient method. We obtain an approximation to the solution of the Cauchy problem by minimizing a certain discrete functional and interpolating using the finite difference or boundary element method. The minimization involves solving equations obtained by discretising mixed boundary value problems for the same operator and its adjoint. It is proved that the solution of the discretised optimization problem converges to the continuous one as the mesh size tends to zero. Numerical results are presented and discussed.
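The minimization step in such schemes is typically a standard conjugate gradient iteration on a symmetric positive-definite system arising from the discretised functional. The generic CG sketch below (not the authors' specific functional or discretisation) shows the core update:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Standard CG for A x = b with A symmetric positive definite."""
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs = float(r @ r)
    for _ in range(max_iter or len(b)):
        Ap = A @ p
        alpha = rs / float(p @ Ap)   # exact line search along p
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = float(r @ r)
        if rs_new ** 0.5 < tol:
            break
        p = r + (rs_new / rs) * p    # A-conjugate direction update
        rs = rs_new
    return x
```

In the Cauchy-problem context, each application of A amounts to solving a mixed boundary value problem for the operator and its adjoint, which is where the bulk of the cost lies.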
Abstract:
We propose two algorithms involving the relaxation of either the given Dirichlet data (boundary displacements) or the prescribed Neumann data (boundary tractions) on the over-specified boundary in the case of the alternating iterative algorithm of Kozlov et al. [16] applied to Cauchy problems in linear elasticity. A convergence proof of these relaxation methods is given, along with a stopping criterion. The numerical results obtained using these procedures, in conjunction with the boundary element method (BEM), show the numerical stability, convergence, consistency and computational efficiency of the proposed methods.
Abstract:
The dynamics of the non-equilibrium Ising model with parallel updates is investigated using a generalized mean field approximation that incorporates multiple two-site correlations at any two time steps, which can be obtained recursively. The proposed method shows significant improvement in predicting local system properties compared to other mean field approximation techniques, particularly in systems with symmetric interactions. Results are also evaluated against those obtained from Monte Carlo simulations. The method is also employed to obtain parameter values for the kinetic inverse Ising modeling problem, where couplings and local field values of a fully connected spin system are inferred from data. © 2014 IOP Publishing Ltd and SISSA Medialab srl.
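For orientation, the simplest (naive, first-order) mean field closure for parallel-update dynamics propagates magnetisations as m_i(t+1) = tanh(β(Σ_j J_ij m_j(t) + h_i)); the generalized method of the paper improves on this baseline by carrying two-site correlations between time steps. The sketch below implements only the naive closure, with illustrative couplings.

```python
import math

def parallel_mf_dynamics(J, h, beta, m0, steps):
    """Naive mean-field map for parallel (synchronous) updates:
       m_i(t+1) = tanh(beta * (sum_j J[i][j] * m_j(t) + h[i]))."""
    m = list(m0)
    traj = [m]
    n = len(m)
    for _ in range(steps):
        # all spins updated simultaneously from the previous magnetisations
        m = [math.tanh(beta * (sum(J[i][j] * m[j] for j in range(n)) + h[i]))
             for i in range(n)]
        traj.append(m)
    return traj
```

In the kinetic inverse Ising problem the same forward map (or its correlation-corrected version) is what gets matched against data to infer the couplings J and fields h.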
Abstract:
Background: Laparoscopic greater curvature plication (LGCP) is an emerging bariatric procedure that reduces the gastric volume without implantable devices or gastrectomy. The aim of this study was to explore changes in glucose homeostasis, postprandial triglyceridemia, and meal-stimulated secretion of selected gut hormones [glucose-dependent insulinotropic polypeptide (GIP), glucagon-like peptide-1 (GLP-1), ghrelin, and obestatin] in patients with type 2 diabetes mellitus (T2DM) at 1 and 6 months after the procedure. Methods: Thirteen morbidly obese T2DM women (mean age, 53.2 ± 8.76 years; body mass index, 40.1 ± 4.59 kg/m²) were prospectively investigated before the LGCP and at 1- and 6-month follow-up. At these time points, all study patients underwent a standardized liquid mixed-meal test, and blood was sampled for assessment of plasma levels of glucose, insulin, C-peptide, triglycerides, GIP, GLP-1, ghrelin, and obestatin. Results: All patients had significant weight loss both at 1 and 6 months after the LGCP (p ≤ 0.002), with mean percent excess weight loss (%EWL) reaching 29.7 ± 2.9% at the 6-month follow-up. Fasting hyperglycemia and hyperinsulinemia improved significantly at 6 months after the LGCP (p < 0.05), with parallel improvement in insulin sensitivity and HbA1c levels (p < 0.0001). Meal-induced glucose plasma levels were significantly lower at 6 months after the LGCP (p < 0.0001), and postprandial triglyceridemia was also ameliorated at the 6-month follow-up (p < 0.001). Postprandial GIP plasma levels were significantly increased both at 1 and 6 months after the LGCP (p < 0.0001), whereas the overall meal-induced GLP-1 response was not significantly changed after the procedure (p > 0.05). Postprandial ghrelin plasma levels decreased at 1 and 6 months after the LGCP (p < 0.0001), with no significant changes in circulating obestatin levels. 
Conclusion: During the initial 6-month postoperative period, LGCP induces significant weight loss and improves the metabolic profile of morbidly obese T2DM patients, while it also decreases circulating postprandial ghrelin levels and increases the meal-induced GIP response. © 2013 Springer Science+Business Media New York.