972 results for Ill-posed problem


Relevance:

90.00%

Publisher:

Abstract:

Magnetoencephalography (MEG) can be used to reconstruct neuronal activity with high spatial and temporal resolution. However, this reconstruction problem is ill-posed, and requires the use of prior constraints in order to produce a unique solution. At present there is a multitude of inversion algorithms, each employing different assumptions, but one major problem when comparing the accuracy of these different approaches is that the true underlying electrical state of the brain is often unknown. In this study, we explore one paradigm, retinotopic mapping in the primary visual cortex (V1), for which the ground truth is known to a reasonable degree of accuracy, enabling the comparison of MEG source reconstructions with the true electrical state of the brain. Specifically, we attempted to localize, using a beamforming method, the induced responses in the visual cortex generated by a high contrast, retinotopically varying stimulus. Although well described in primate studies, it has been an open question whether the induced gamma power in humans due to high contrast gratings derives from V1 rather than the prestriate cortex (V2). We show that the beamformer source estimate in the gamma and theta bands does vary in a manner consistent with the known retinotopy of V1. However, these peak locations, although retinotopically organized, did not accurately localize to the cortical surface. We considered possible causes for this discrepancy and suggest that improved MEG/magnetic resonance imaging co-registration and the use of more accurate source models that take into account the spatial extent and shape of the active cortex may, in future, improve the accuracy of the source reconstructions.

Relevance:

90.00%

Publisher:

Abstract:

The inverse problem of determining a spacewise dependent heat source, together with the initial temperature, for the parabolic heat equation, using the usual conditions of the direct problem and information from two supplementary temperature measurements at different instants of time, is studied. These spacewise dependent temperature measurements ensure that this inverse problem has a unique solution; the solution is unstable, however, and hence the problem is ill-posed. We propose an iterative algorithm for the stable reconstruction of both the initial data and the source, based on a sequence of well-posed direct problems for the parabolic heat equation which are solved at each iteration step using the boundary element method. The instability is overcome by stopping the iterations at the first iteration for which the discrepancy principle is satisfied. Numerical results are presented for a typical benchmark test example in which the input measured data are perturbed by increasing amounts of random noise. The numerical results show that the proposed procedure gives accurate numerical approximations in relatively few iterations.

Relevance:

90.00%

Publisher:

Abstract:

The inverse problem of determining a spacewise-dependent heat source for the parabolic heat equation using the usual conditions of the direct problem and information from one supplementary temperature measurement at a given instant of time is studied. This spacewise-dependent temperature measurement ensures that this inverse problem has a unique solution, but the solution is unstable and hence the problem is ill-posed. We propose a variational conjugate gradient-type iterative algorithm for the stable reconstruction of the heat source based on a sequence of well-posed direct problems for the parabolic heat equation which are solved at each iteration step using the boundary element method. The instability is overcome by stopping the iterative procedure at the first iteration for which the discrepancy principle is satisfied. Numerical results are presented which have the input measured data perturbed by increasing amounts of random noise. The numerical results show that the proposed procedure yields stable and accurate numerical approximations after only a few iterations.

Relevance:

90.00%

Publisher:

Abstract:

This paper investigates the inverse problem of determining a spacewise dependent heat source in the parabolic heat equation using the usual conditions of the direct problem and information from a supplementary temperature measurement at a given single instant of time. The spacewise dependent temperature measurement ensures that the inverse problem has a unique solution, but this solution is unstable, hence the problem is ill-posed. For this inverse problem, we propose an iterative algorithm based on a sequence of well-posed direct problems which are solved at each iteration step using the boundary element method (BEM). The instability is overcome by stopping the iterations at the first iteration for which the discrepancy principle is satisfied. Numerical results are presented for various typical benchmark test examples which have the input measured data perturbed by increasing amounts of random noise.
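The iterative regularization with discrepancy-principle stopping described in the abstracts above can be illustrated on a generic discretized linear ill-posed problem. The sketch below is an assumption for illustration (plain Landweber iteration on a synthetic smoothing operator), not the authors' BEM-based algorithm:

```python
import numpy as np

# Synthetic ill-posed problem A x = y: A is a severely ill-conditioned
# smoothing (Gaussian blur) operator; x_true plays the role of the heat source.
n = 50
t = np.linspace(0.0, 1.0, n)
A = np.exp(-50.0 * (t[:, None] - t[None, :]) ** 2) / n
x_true = np.sin(np.pi * t)
rng = np.random.default_rng(0)
delta = 1e-3                                  # noise amplitude per measurement
y = A @ x_true + delta * rng.standard_normal(n)

# Landweber iteration x_{k+1} = x_k + w A^T (y - A x_k), stopped at the first
# iterate whose residual drops to the noise level (discrepancy principle).
w = 1.0 / np.linalg.norm(A, 2) ** 2
tau = 1.1                                     # safety factor > 1
x = np.zeros(n)
for k in range(200_000):
    r = y - A @ x
    if np.linalg.norm(r) <= tau * delta * np.sqrt(n):
        break                                 # further iterations would fit noise
    x = x + w * (A.T @ r)
```

The early stopping is itself the regularizer: iterating to convergence would reproduce the noisy data exactly and destroy the reconstruction.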

Relevance:

90.00%

Publisher:

Abstract:

This article reports on an investigation with first year undergraduate Product Design and Management students within a School of Engineering and Applied Science. The students at the time of this investigation had studied fundamental engineering science and mathematics for one semester. The students were given an open ended, ill-formed problem which involved designing a simple bridge to cross a river. They were given a talk on problem solving and given a rubric to follow, if they chose to do so. They were not given any formulae or procedures needed in order to resolve the problem. In theory, they possessed the knowledge to ask the right questions in order to make assumptions but, in practice, it turned out they were unable to link their a priori knowledge to resolve this problem. They were able to solve simple beam problems when given closed questions. The results show they were unable to visualize a simple bridge as an augmented beam problem and ask pertinent questions and hence formulate appropriate assumptions in order to offer resolutions.

Relevance:

90.00%

Publisher:

Abstract:

Цветан Д. Христов, Недю Ив. Попиванов, Манфред Шнайдер - Some three-dimensional boundary value problems for equations of mixed type are studied. For equations of Tricomi type, they were formulated by M. Protter in 1952 as three-dimensional analogues of the Darboux or Cauchy–Goursat problems in the plane. It is well known that these new problems are ill-posed. We formulate a new boundary value problem for equations of Keldysh type and introduce the notion of a quasi-regular solution of this problem and of one of Protter's problems. Sufficient conditions for the uniqueness of such solutions are found.

Relevance:

90.00%

Publisher:

Abstract:

One of the most pressing demands on electrophysiology applied to the diagnosis of epilepsy is the non-invasive localization of the neuronal generators responsible for brain electrical and magnetic fields (the so-called inverse problem). These neuronal generators produce primary currents in the brain, which together with passive currents give rise to the EEG signal. Unfortunately, the signal we measure on the scalp surface does not directly indicate the location of the active neuronal assemblies. This is the expression of the ambiguity of the underlying static electromagnetic inverse problem, partly due to the relatively limited number of independent measurements available. A given electric potential distribution recorded at the scalp can be explained by the activity of infinitely many different configurations of intracranial sources. In contrast, the forward problem, which consists of computing the potential field at the scalp from known source locations and strengths, given the geometry and conductivity properties of the brain and its layers (CSF/meninges, skin and skull), i.e. the head model, has a unique solution. Head models vary from the computationally simpler spherical models (three or four concentric spheres) to realistic models based on the segmentation of anatomical images obtained using magnetic resonance imaging (MRI). Realistic models – computationally intensive and difficult to implement – can separate different tissues of the head and account for the convoluted geometry of the brain and the significant inter-individual variability. In real-life applications, if the assumptions about the statistical, anatomical or functional properties of the signal and the volume in which it is generated are meaningful, a true three-dimensional tomographic representation of the sources of brain electrical activity is possible in spite of the 'ill-posed' nature of the inverse problem (Michel et al., 2004).
The techniques used to achieve this are now referred to as electrical source imaging (ESI) or magnetic source imaging (MSI). The first issue to influence reconstruction accuracy is spatial sampling, i.e. the number of EEG electrodes. It has been shown that this relationship is not linear, reaching a plateau at about 128 electrodes, provided the spatial distribution is uniform. The second factor relates to the different properties of the source localization strategies used with respect to the hypothesized source configuration.
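The non-uniqueness described above is commonly resolved by adding a prior: for example, the classical minimum-norm estimate picks, among all source configurations that explain the scalp potentials, the one of smallest overall strength. A toy sketch with a made-up random lead-field matrix (all sizes and values are illustrative, not a real head model):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy forward model: a lead-field matrix maps n_src source amplitudes to
# n_el scalp potentials; with n_el < n_src the inverse problem is non-unique.
n_el, n_src = 32, 200
L = rng.standard_normal((n_el, n_src))

j_true = np.zeros(n_src)
j_true[40] = 1.0                                     # one active source
v = L @ j_true + 0.01 * rng.standard_normal(n_el)    # noisy "measurements"

# Regularized minimum-norm estimate: j = L^T (L L^T + lam I)^(-1) v picks the
# smallest-norm source pattern that (approximately) reproduces the potentials.
lam = 1e-2
j_mne = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_el), v)
```

Realistic ESI/MSI methods replace the random matrix with a lead field computed from a head model, and often use spatial priors far richer than the plain minimum norm.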

Relevance:

90.00%

Publisher:

Abstract:

This paper reports on an investigation with first year undergraduate Product Design and Management students within a School of Engineering. The students at the time of this investigation had studied fundamental engineering science and mathematics for one semester. The students were given an open-ended, ill-formed problem which involved designing a simple bridge to cross a river. They were given a talk on problem solving and given a rubric to follow, if they chose to do so. They were not given any formulae or procedures needed in order to resolve the problem. In theory, they possessed the knowledge to ask the right questions in order to make assumptions but, in practice, it turned out they were unable to link their a priori knowledge to resolve this problem. They were able to solve simple beam problems when given closed questions. The results show they were unable to visualise a simple bridge as an augmented beam problem and ask pertinent questions and hence formulate appropriate assumptions in order to offer resolutions.

Relevance:

90.00%

Publisher:

Abstract:

We propose a mathematically well-founded approach for locating the source (initial state) of density functions evolved within a nonlinear reaction-diffusion model. The reconstruction of the initial source is an ill-posed inverse problem, since the solution is highly unstable with respect to measurement noise. To address this instability, we introduce a regularization procedure based on the nonlinear Landweber method for the stable determination of the source location. This amounts to solving a sequence of well-posed forward reaction-diffusion problems. The developed framework is general, and as a special instance we consider the problem of source localization of brain tumors. We show numerically that the sources of the initial tumor cell densities are reconstructed well for imaging data consisting of both simple and complex geometric structures.

Relevance:

90.00%

Publisher:

Abstract:

Dynamics of biomolecules over various spatial and time scales are essential for biological functions such as molecular recognition, catalysis and signaling. However, reconstruction of biomolecular dynamics from experimental observables requires the determination of a conformational probability distribution. Unfortunately, these distributions cannot be fully constrained by the limited information available from experiments, making the problem ill-posed in the terminology of Hadamard. The ill-posed nature of the problem comes from the fact that it has no unique solution: multiple, or even infinitely many, solutions may exist. To overcome this ill-posedness, the problem must be regularized by making assumptions, which inevitably introduce biases into the result.

Here, I present two continuous probability density function approaches to solve an important inverse problem called the RDC trigonometric moment problem. By focusing on interdomain orientations, we reduced the problem to the determination of a distribution on the 3D rotational space from residual dipolar couplings (RDCs). We derived an analytical equation that relates the alignment tensors of adjacent domains, which serves as the foundation of the two methods. In the first approach, the ill-posed nature of the problem is avoided by introducing a continuous distribution model, which enjoys a smoothness assumption. To find the optimal solution for the distribution, we also designed an efficient branch-and-bound algorithm that exploits the mathematical structure of the analytical solutions. The algorithm is guaranteed to find the distribution that best satisfies the analytical relationship. We observed good performance of the method when tested under various levels of experimental noise and when applied to two protein systems. The second approach avoids the use of any model by employing maximum entropy principles. This 'model-free' approach delivers the least biased result, which represents our state of knowledge. In this approach, the solution is an exponential function of Lagrange multipliers. To determine the multipliers, a convex objective function is constructed, so the maximum entropy solution can be found easily by gradient descent methods. Both algorithms can be applied to biomolecular RDC data in general, including data from RNA and DNA molecules.
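The maximum entropy idea in the second approach can be sketched on a one-dimensional toy analogue (an assumption for illustration, not the 3D rotational-space implementation): among all distributions on a discretized state space that reproduce a measured moment, the maximum entropy solution is exponential in a Lagrange multiplier, found by gradient descent on a convex dual objective.

```python
import numpy as np

# Discretized toy state space and one measured moment (values assumed).
x = np.linspace(-1.0, 1.0, 201)
m_obs = 0.3                       # observed mean of x

# Max-ent solution p_i ∝ exp(lam * x_i); find lam by gradient descent on the
# convex dual  log Z(lam) - lam * m_obs, whose gradient is E_p[x] - m_obs.
lam, lr = 0.0, 0.5
for _ in range(2000):
    w = np.exp(lam * x)
    p = w / w.sum()               # current max-ent candidate distribution
    lam -= lr * (p @ x - m_obs)   # descend the dual gradient
```

Because the dual is convex, plain gradient descent suffices; with several measured moments, `lam` becomes a vector of multipliers and the update is the same.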

Relevance:

90.00%

Publisher:

Abstract:

The main purpose of this study is to present an alternative benchmarking approach that can be used by national regulators of utilities. It is widely known that the lack of sizeable data sets limits the choice of benchmarking method and the specification of the model used to set price controls within incentive-based regulation. Ill-posed frontier models are a problem that some national regulators have been facing. Maximum entropy estimators are useful in the estimation of such ill-posed models, in particular models exhibiting small sample sizes, collinearity and non-normal errors, as well as models in which the number of parameters to be estimated exceeds the number of observations available. The empirical study uses sample data employed by the Portuguese regulator of the electricity sector to set the parameters for the electricity distribution companies in the 2012-2014 regulatory period. DEA and maximum entropy methods are applied and the efficiency results are compared.

Relevance:

90.00%

Publisher:

Abstract:

Accurate estimation of road pavement geometry and layer material properties through the use of proper nondestructive testing and sensor technologies is essential for evaluating a pavement's structural condition and determining options for maintenance and rehabilitation. For these purposes, pavement deflection basins produced by the nondestructive Falling Weight Deflectometer (FWD) test are commonly used. The FWD test drops weights on the pavement to simulate traffic loads and measures the resulting pavement deflection basins. Backcalculation of pavement geometry and layer properties from FWD deflections is a difficult inverse problem, and its solution with conventional mathematical methods is often challenging due to the ill-posed nature of the problem. In this dissertation, a hybrid algorithm was developed to seek robust and fast solutions to this inverse problem. The algorithm is based on soft computing techniques, mainly Artificial Neural Networks (ANNs) and Genetic Algorithms (GAs), as well as numerical analysis techniques used to properly simulate the geomechanical system. A widely used pavement layered analysis program, ILLI-PAVE, was employed in the analyses of various flexible pavement types, including full-depth asphalt and conventional flexible pavements, built on either lime-stabilized soils or untreated subgrade. Nonlinear properties of the subgrade soil and the base course aggregate as transportation geomaterials were also considered. A computer program, the Soft Computing Based System Identifier (SOFTSYS), was developed. In SOFTSYS, ANNs were used as surrogate models to provide faster approximate solutions of the nonlinear finite element program ILLI-PAVE. The deflections obtained from FWD tests in the field were matched with the predictions obtained from the numerical simulations to develop the SOFTSYS models.
The solution to the inverse problem for multi-layered pavements is computationally hard to achieve and is often not feasible due to field variability and the quality of the collected data. The primary difficulty in the analysis arises from the substantial increase in the degree of non-uniqueness of the mapping from the pavement layer parameters to the FWD deflections. The insensitivity of some layer properties lowered SOFTSYS model performance. Still, SOFTSYS models were shown to work effectively with the synthetic data obtained from ILLI-PAVE finite element solutions. In general, SOFTSYS solutions very closely matched the ILLI-PAVE mechanistic pavement analysis results. For SOFTSYS validation, field-collected FWD data were successfully used to predict pavement layer thicknesses and layer moduli of in-service flexible pavements. Some of the most promising SOFTSYS results indicated average absolute errors on the order of 2%, 7%, and 4% for the Hot Mix Asphalt (HMA) thickness estimation of full-depth asphalt pavements, full-depth pavements on lime-stabilized soils, and conventional flexible pavements, respectively. The field validations of SOFTSYS also produced meaningful results. The thickness data obtained from Ground Penetrating Radar testing matched reasonably well with predictions from the SOFTSYS models. The differences observed in the HMA and lime-stabilized soil layer thicknesses were attributed to deflection data variability from the FWD tests. The backcalculated asphalt concrete layer thickness results matched better for full-depth asphalt flexible pavements built on lime-stabilized soils than for conventional flexible pavements. Overall, SOFTSYS was capable of producing reliable thickness estimates despite the variability of field-constructed asphalt layer thicknesses.
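The GA side of such a backcalculation can be sketched with a toy forward model standing in for ILLI-PAVE/SOFTSYS (the model form, moduli, and sensor offsets below are invented for illustration): an evolutionary search adjusts layer moduli until the predicted deflection basin matches the measured one.

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented stand-in for a pavement forward model: maps two layer moduli (MPa)
# to a four-sensor deflection basin.  Not ILLI-PAVE -- illustration only.
def forward(E):
    r = np.array([0.0, 0.3, 0.6, 0.9])              # sensor offsets, m (assumed)
    return 1.0 / (E[0] * (1.0 + r) + E[1] * r ** 2)

E_true = np.array([300.0, 80.0])                     # "unknown" layer moduli
d_meas = forward(E_true) * (1.0 + 0.01 * rng.standard_normal(4))

def loss(E):                                         # basin mismatch to minimize
    return np.sum((forward(E) - d_meas) ** 2)

# Minimal elitist evolutionary search: keep the 10 fittest candidates, refill
# the population with multiplicatively mutated copies of them.
pop = rng.uniform([50.0, 20.0], [1000.0, 300.0], size=(40, 2))
for _ in range(200):
    order = np.argsort([loss(E) for E in pop])
    parents = pop[order[:10]]
    children = parents[rng.integers(0, 10, 30)] * rng.lognormal(0.0, 0.05, (30, 2))
    pop = np.vstack([parents, children])

E_best = min(pop, key=loss)
```

In the dissertation's setup, the expensive finite element model is replaced by trained ANN surrogates inside this kind of search loop, which is what makes the backcalculation fast enough to be practical.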

Relevance:

90.00%

Publisher:

Abstract:

Despite the wide swath of applications where multiphase fluid contact lines exist, there is still no consensus on an accurate and general simulation methodology. Most prior numerical work has imposed one of the many dynamic contact-angle theories at solid walls. Such approaches are inherently limited by the theory accuracy. In fact, when inertial effects are important, the contact angle may be history dependent and, thus, any single mathematical function is inappropriate. Given these limitations, the present work has two primary goals: 1) create a numerical framework that allows the contact angle to evolve naturally with appropriate contact-line physics and 2) develop equations and numerical methods such that contact-line simulations may be performed on coarse computational meshes.

Fluid flows affected by contact lines are dominated by capillary stresses and require accurate curvature calculations. The level set method was chosen to track the fluid interfaces because it is easy to calculate interface curvature accurately. Unfortunately, the level set reinitialization suffers from an ill-posed mathematical problem at contact lines: a "blind spot" exists. Standard techniques to handle this deficiency are shown to introduce parasitic velocity currents that artificially deform freely floating (non-prescribed) contact angles. As an alternative, a new relaxation equation reinitialization is proposed to remove these spurious velocity currents, and its concept is further explored with level-set extension velocities.

To capture contact-line physics, two classical boundary conditions, the Navier-slip velocity boundary condition and a fixed contact angle, are implemented in direct numerical simulations (DNS). DNS are found to converge only if the slip length is well resolved by the computational mesh. Unfortunately, since the slip length is often very small compared to the fluid structures, these simulations are not computationally feasible for large systems. To address the second goal, a new methodology is proposed which relies on the volumetric-filtered Navier-Stokes equations. Two unclosed terms, an average curvature and a viscous shear term (VS), are proposed to represent the missing microscale physics on a coarse mesh.

All of these components are then combined into a single framework and tested for a water droplet impacting a partially-wetting substrate. Very good agreement is found for the evolution of the contact diameter in time between the experimental measurements and the numerical simulation. Such comparison would not be possible with prior methods, since the Reynolds number Re and capillary number Ca are large. Furthermore, the experimentally approximated slip length ratio is well outside of the range currently achievable by DNS. This framework is a promising first step towards simulating complex physics in capillary-dominated flows at a reasonable computational expense.

Relevance:

90.00%

Publisher:

Abstract:

Network traffic analysis has been one of the most crucial techniques for operating a large-scale IP backbone network. Despite its importance, large-scale network traffic monitoring techniques suffer from technical and commercial obstacles to obtaining precise network traffic data. Though network traffic estimation has been the most prevalent technique for acquiring network traffic data, it still has a great number of unsolved problems. As networks grow in scale, the network traffic estimation problem becomes increasingly ill-posed. Moreover, the statistical features of network traffic have changed greatly with current network architectures and applications. Motivated by this, in this paper we propose both a network traffic prediction method and a network traffic estimation method. We first use a deep learning architecture to explore the dynamic properties of network traffic, and then propose a novel network traffic prediction approach based on a deep belief network. We further propose a network traffic estimation method that uses the deep belief network together with link counts and routing information. We validate the effectiveness of our methodologies on real data sets from the Abilene and GÉANT backbone networks.

Relevance:

80.00%

Publisher:

Abstract:

Under certain conditions, the mathematical models governing the melting of nano-sized particles predict unphysical results, which suggests these models are incomplete. This thesis studies the addition of different physical effects to these models, using analytic and numerical techniques to obtain realistic and meaningful results. In particular, the mathematical "blow-up" of solutions to ill-posed Stefan problems is examined, and the regularisation of this blow-up via kinetic undercooling. Other effects such as surface tension, density change and size-dependent latent heat of fusion are also analysed.
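The regularisation mentioned above can be sketched in equations (notation, signs and scalings here are assumed for illustration, not taken from the thesis): in a nondimensional one-phase Stefan model for a melting spherical particle with front r = s(t), kinetic undercooling replaces the fixed-melting-temperature condition at the front with one that penalises fast front motion,

```latex
% One-phase Stefan problem for a melting sphere, front at r = s(t);
% u is the nondimensional temperature (signs/scalings illustrative).
\begin{align}
  u_t &= u_{rr} + \frac{2}{r}\,u_r, & s(t) < r < 1, \\
  \beta\,\dot{s}(t) &= u_r\bigl(s(t),t\bigr), & \text{(Stefan condition)} \\
  u\bigl(s(t),t\bigr) &= \epsilon\,\dot{s}(t), & \text{(kinetic undercooling, } \epsilon \ge 0\text{)}
\end{align}
```

With ε = 0 the interface is held at the melting temperature and the front speed can blow up as the particle radius shrinks; taking ε > 0 couples the interface temperature to the front speed, which is the mechanism by which the blow-up is regularised.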