958 results for Well-Posed Problem


Relevance:

80.00%

Publisher:

Abstract:

The integration of geo-information from multiple sources and of diverse nature in developing mineral favourability indexes (MFIs) is a well-known problem in mineral exploration and mineral resource assessment. Fuzzy set theory provides a convenient framework for combining and analysing qualitative and quantitative data independently of their source or characteristics. A novel, data-driven formulation for calculating MFIs based on fuzzy analysis is developed in this paper. The different geo-variables are treated as fuzzy sets, and appropriate membership functions are defined and modelled for each. A new weighted average-type aggregation operator is then introduced to generate a new fuzzy set representing mineral favourability. The membership grades of this new fuzzy set are taken as the MFI. The weights for the aggregation operation combine the individual membership functions of the geo-variables and are derived using information from training areas and L1 regression. The technique is demonstrated in a case study of skarn tin deposits and is used to integrate geological, geochemical and magnetic data. The study area covers a total of 22.5 km² and is divided into 349 cells, which include nine control cells. Nine geo-variables are considered in this study. Depending on the nature of the various geo-variables, four different types of membership functions are used to model their fuzzy membership. © 2002 Elsevier Science Ltd. All rights reserved.
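
By way of illustration, the sketch below shows the general shape of such a scheme in Python: per-cell geo-variables are fuzzified through membership functions and then combined with a weighted average to yield an MFI per cell. The membership shapes, the weights and the synthetic data are placeholders, not the paper's fitted functions or its training-area-derived weights.

```python
import numpy as np

# Minimal sketch: two illustrative geo-variables per cell, each mapped to a
# fuzzy membership grade in [0, 1], then combined with a weighted average to
# give a mineral favourability index (MFI) per cell.  The membership shapes
# and the weights below are placeholders, not the paper's fitted values.

def sigmoid_membership(x, midpoint, spread):
    """S-shaped membership: grades rise smoothly from 0 to 1 around `midpoint`."""
    return 1.0 / (1.0 + np.exp(-(x - midpoint) / spread))

def gaussian_membership(x, centre, width):
    """Bell-shaped membership: grades peak at `centre` and decay away from it."""
    return np.exp(-0.5 * ((x - centre) / width) ** 2)

# Synthetic per-cell measurements (e.g. a geochemical anomaly and a magnetic signal).
rng = np.random.default_rng(0)
geochem = rng.uniform(0, 100, size=349)    # e.g. ppm of a pathfinder element
magnetic = rng.uniform(-50, 50, size=349)  # e.g. residual magnetic anomaly

# Fuzzify each geo-variable.
mu_geochem = sigmoid_membership(geochem, midpoint=60.0, spread=10.0)
mu_magnetic = gaussian_membership(magnetic, centre=20.0, width=15.0)

# Weighted-average aggregation; in the data-driven formulation the weights would
# be estimated from the control (training) cells, here they are simply assumed.
weights = np.array([0.7, 0.3])
memberships = np.vstack([mu_geochem, mu_magnetic])
mfi = weights @ memberships / weights.sum()

print("MFI range over cells:", mfi.min(), "-", mfi.max())
print("Most favourable cell index:", int(np.argmax(mfi)))
```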

Relevance:

80.00%

Publisher:

Abstract:

In this paper, numerical simulations are used in an attempt to find optimal source profiles for high-frequency radiofrequency (RF) volume coils. Biologically loaded, shielded/unshielded circular and elliptical birdcage coils operating at 170 MHz, 300 MHz and 470 MHz are modelled using the FDTD method for both 2D and 3D cases. Taking advantage of the fact that some aspects of the electromagnetic system are linear, two approaches are proposed for determining the drives of the individual elements in the RF resonator. The first method is an iterative optimization technique whose kernel evaluates the RF fields inside an imaging plane of a human head model using pre-characterized sensitivity profiles of the individual rungs of the resonator; the second is a regularization-based technique. In the second approach, a sensitivity matrix is explicitly constructed and a regularization procedure is employed to solve the ill-posed problem. Test simulations show that both methods can improve the B1-field homogeneity in both focused and non-focused scenarios. While the regularization-based method is more efficient, the optimization-based method is more flexible, as it can take into account other issues such as controlling SAR or reshaping the resonator structure. It is hoped that these schemes and their extensions will be useful for determining multi-element RF drives in a variety of applications.
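
The regularization step can be pictured with a small Tikhonov sketch: given a complex sensitivity matrix S that maps per-rung drives to the B1 field sampled over the imaging plane, the drives are chosen so that S d approximates a uniform target field. The matrix here is random, standing in for the FDTD-derived sensitivity profiles, and the regularization parameters are illustrative only.

```python
import numpy as np

# Sketch of the regularization-based drive design: S maps the complex drive of
# each rung to the complex B1 field at sample points in the imaging plane, and
# we look for drives d such that S @ d approximates a uniform target field.
# Here S is random (a stand-in for FDTD-derived sensitivity profiles).

rng = np.random.default_rng(1)
n_points, n_rungs = 500, 16
S = rng.normal(size=(n_points, n_rungs)) + 1j * rng.normal(size=(n_points, n_rungs))
target = np.ones(n_points, dtype=complex)   # perfectly homogeneous B1 (idealised)

def tikhonov_drives(S, target, lam):
    """Solve min ||S d - target||^2 + lam ||d||^2 (classical Tikhonov form)."""
    A = S.conj().T @ S + lam * np.eye(S.shape[1])
    b = S.conj().T @ target
    return np.linalg.solve(A, b)

for lam in (1e-3, 1e-1, 1e1):
    d = tikhonov_drives(S, target, lam)
    field = S @ d
    inhomogeneity = np.std(np.abs(field)) / np.mean(np.abs(field))
    print(f"lambda={lam:g}  drive norm={np.linalg.norm(d):.3f}  "
          f"relative B1 inhomogeneity={inhomogeneity:.3f}")
```

Sweeping the regularization parameter exposes the usual trade-off between how homogeneous the resulting field is and how large the element drives become.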

Relevance:

80.00%

Publisher:

Abstract:

We address the question of how to obtain effective fusion of identification information such that it is robust to the quality of this information. As well as technical issues, data fusion is encumbered with a collection of (potentially confusing) practical considerations. These considerations are described in the early chapters, in which a framework for data fusion is developed. Following this process of diversification it becomes clear that the original question is not well posed and requires more precise specification. We use the framework to focus on some of the technical issues relevant to the question being addressed. We show that fusion of hard decisions through an adaptive version of the maximum a posteriori decision rule yields acceptable performance. Better performance is possible using probability-level fusion, as long as the probabilities are accurate. Of particular interest is the prevalence of overconfidence and the effect it has on fused performance. The production of accurate probabilities from poor-quality data forms the latter part of the thesis. Two approaches are taken. First, the probabilities may be moderated at source (either analytically or numerically). Second, the probabilities may be transformed at the fusion centre. In each case an improvement in fused performance is demonstrated. We therefore conclude that, in order to obtain robust fusion, care should be taken to model the probabilities accurately, either at the source or centrally.
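
The contrast between hard-decision fusion and probability-level fusion can be illustrated with a toy two-class example: three sources of differing reliability are fused once by majority vote on their hard decisions and once by combining their posterior probabilities (here with a simple independent log-odds sum). This is only an illustration of the general trade-off; the thesis's adaptive MAP rule and its moderation schemes are not reproduced.

```python
import numpy as np

# Toy contrast between hard-decision fusion and probability-level fusion for a
# two-class identification task with three independent sources.

rng = np.random.default_rng(2)
n_trials, n_sources = 10_000, 3
truth = rng.integers(0, 2, size=n_trials)          # true class per trial
accuracy = np.array([0.70, 0.75, 0.80])            # per-source reliability

# Each source reports a class; when correct it reports the truth, else the other class.
correct = rng.random((n_trials, n_sources)) < accuracy
reported = np.where(correct, truth[:, None], 1 - truth[:, None])
# Calibrated posterior for class 1 implied by each report.
p1 = np.where(reported == 1, accuracy, 1 - accuracy)

# Hard-decision fusion: majority vote over thresholded decisions.
votes = (p1 > 0.5).astype(int)
hard_fused = (votes.sum(axis=1) >= 2).astype(int)

# Probability-level fusion: sum of log-odds (sources treated as independent).
log_odds = np.log(p1 / (1 - p1)).sum(axis=1)
soft_fused = (log_odds > 0).astype(int)

print("hard-decision fusion accuracy:", (hard_fused == truth).mean())
print("probability-level fusion accuracy:", (soft_fused == truth).mean())
```

With well-calibrated posteriors the probability-level rule tends to match or beat the vote because it weights the more reliable sources more heavily; when the posteriors are overconfident that advantage can evaporate, which is the effect the abstract draws attention to.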

Relevance:

80.00%

Publisher:

Abstract:

An iterative procedure is proposed for the reconstruction of a stationary temperature field from Cauchy data given on a part of the boundary of a bounded plane domain where the boundary is smooth except for a finite number of corner points. In each step, a series of mixed well-posed boundary value problems are solved for the heat operator and its adjoint. Convergence is proved in a weighted L2-space. Numerical results are included which show that the procedure gives accurate and stable approximations in relatively few iterations.

Relevance:

80.00%

Publisher:

Abstract:

An iterative method for the reconstruction of a stationary three-dimensional temperature field, from Cauchy data given on a part of the boundary, is presented. At each iteration step, a series of mixed well-posed boundary value problems are solved for the heat operator and its adjoint. A convergence proof of this method in a weighted L2-space is included.

Relevance:

80.00%

Publisher:

Abstract:

We prove that in some classes of optimization problems, such as lower semicontinuous functions which are bounded from below, lower semicontinuous or continuous functions which are bounded below by a coercive function, and quasi-convex continuous functions with the topology of uniform convergence, the complement of the set of well-posed problems is σ-porous. These results are obtained as realizations of a theorem extending a variational principle of Ioffe and Zaslavski.
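
For orientation, the two notions involved can be recalled in their standard textbook forms (these are common definitions, not statements quoted from the paper): a minimization problem is Tykhonov well-posed when it has a unique minimizer to which every minimizing sequence converges, and a set is σ-porous when it is a countable union of porous sets.

```latex
% Tykhonov well-posedness of a minimization problem (X, f):
\[
(X,f)\ \text{is well-posed} \iff
\exists!\, x^{*} \in X : f(x^{*}) = \inf_{X} f
\quad\text{and}\quad
\bigl( f(x_{n}) \to \inf_{X} f \ \Rightarrow\ x_{n} \to x^{*} \bigr).
\]
% One common definition of porosity: E \subseteq Y is porous if there exist
% \alpha \in (0,1] and r_{0} > 0 such that
\[
\forall y \in Y,\ \forall r \in (0, r_{0})\ \exists z \in Y :\quad
B(z, \alpha r) \subseteq B(y, r) \setminus E ,
\]
% and a sigma-porous set is a countable union of porous sets.
```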

Relevance:

80.00%

Publisher:

Abstract:

2000 Mathematics Subject Classification: 13N15, 13A50, 16W25.

Relevance:

80.00%

Publisher:

Abstract:

We propose and investigate an application of the method of fundamental solutions (MFS) to the radially symmetric and axisymmetric backward heat conduction problem (BHCP) in a solid or hollow cylinder. In the BHCP, the initial temperature is to be determined from the temperature measurements at a later time. This is an inverse and ill-posed problem, and we employ and generalize the MFS regularization approach [B.T. Johansson and D. Lesnic, A method of fundamental solutions for transient heat conduction, Eng. Anal. Boundary Elements 32 (2008), pp. 697–703] for the time-dependent heat equation to obtain a stable and accurate numerical approximation with small computational cost.
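
As a rough illustration of the MFS idea (in a much-simplified one-dimensional Cartesian setting rather than the paper's radially symmetric or axisymmetric geometry), one can place heat-kernel sources at a fictitious time before the initial instant, fit their coefficients to the final-time and boundary data by Tikhonov-regularized least squares, and then evaluate the resulting expansion at t = 0. The geometry, source placement and regularization parameter below are illustrative choices only.

```python
import numpy as np

# Much-simplified 1D Cartesian sketch of the MFS idea for a backward heat
# problem: approximate u by a sum of heat-kernel sources placed at a fictitious
# time s < 0, fit the coefficients to data at the final time t = T (plus the
# lateral boundary values) with Tikhonov regularization, then evaluate at t = 0.

def heat_kernel(x, t, y, s):
    """Fundamental solution of u_t = u_xx for t > s (zero otherwise)."""
    dt = t - s
    return np.where(dt > 0,
                    np.exp(-(x - y) ** 2 / (4 * dt)) / np.sqrt(4 * np.pi * dt),
                    0.0)

T = 0.1
exact = lambda x, t: np.exp(-np.pi ** 2 * t) * np.sin(np.pi * x)   # reference solution

# Sources: fictitious time s = -0.2, spread beyond the physical interval [0, 1].
src_y = np.linspace(-0.5, 1.5, 40)
src_s = -0.2

# Collocation: final-time data on [0, 1] plus zero boundary values for t in (0, T].
xf = np.linspace(0, 1, 40)
tb = np.linspace(0.01, T, 10)
col_x = np.concatenate([xf, np.zeros_like(tb), np.ones_like(tb)])
col_t = np.concatenate([np.full_like(xf, T), tb, tb])
rhs = exact(col_x, col_t)

A = heat_kernel(col_x[:, None], col_t[:, None], src_y[None, :], src_s)

lam = 1e-8   # regularization parameter (would normally be chosen by e.g. the L-curve)
coef = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ rhs)

# Reconstruct the initial temperature at t = 0 and compare with the exact one.
x0 = np.linspace(0, 1, 21)
u0 = heat_kernel(x0[:, None], 0.0, src_y[None, :], src_s) @ coef
print("max error in reconstructed initial temperature:",
      float(np.max(np.abs(u0 - exact(x0, 0.0)))))
```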

Relevance:

80.00%

Publisher:

Abstract:

Argon infiltration is a well-known problem of hot isostatically pressed components. Thus, the argon content is one quality attribute that is measured after a hot isostatic pressing (HIP) process. Since the Selective Laser Melting (SLM) process takes place under an inert argon atmosphere, it is conceivable that argon is entrapped in the component after SLM processing. Despite the use of optimized process parameters, defects such as pores and shrink holes cannot be completely avoided. In particular, pores could be filled with process gas during the building process, and argon-filled pores would clearly affect the mechanical properties. The present paper takes a closer look at the porosity of Inconel 718 samples generated by means of SLM. Furthermore, the argon content of the powder feedstock, of samples made by means of SLM, of samples which were hot isostatically pressed after the SLM process, and of conventionally manufactured samples was measured and compared. The results showed an increased argon content in the Inconel 718 samples after SLM processing compared to conventionally manufactured samples.

Relevance:

80.00%

Publisher:

Abstract:

A deterministic model of tuberculosis in Cameroon is designed and analyzed with respect to its transmission dynamics. The model includes lack of access to treatment and weak diagnosis capacity, as well as both frequency- and density-dependent transmission. It is shown that the model is mathematically well-posed and epidemiologically reasonable: solutions are non-negative and bounded whenever the initial values are non-negative. A sensitivity analysis of the model parameters is performed and the most sensitive ones are identified by means of a state-of-the-art Gauss-Newton method. In particular, parameters representing the proportion of individuals having access to medical facilities are seen to have a large impact on the dynamics of the disease. The model predicts that a gradual increase of these parameters could significantly reduce the disease burden on the population within the next 15 years.
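
To make the kind of simulation and sensitivity check described above concrete, here is a generic SEIR-style compartmental sketch with a crude access-to-treatment parameter, integrated with SciPy. It is emphatically not the Cameroon model of the paper: the structure is simplified and every parameter value is invented for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Generic SEIR-style sketch (NOT the paper's Cameroon model): susceptible S,
# latently infected E, active TB I, recovered R.  `access` crudely scales the
# rate at which active cases are detected and treated.  All values are invented.

def tb_rhs(t, y, beta, sigma, gamma_base, access, mu):
    S, E, I, R = y
    N = S + E + I + R
    lam = beta * I / N                    # frequency-dependent force of infection
    treat = gamma_base * access           # effective treatment/recovery rate
    dS = mu * N - lam * S - mu * S
    dE = lam * S - (sigma + mu) * E
    dI = sigma * E - (treat + mu) * I
    dR = treat * I - mu * R
    return [dS, dE, dI, dR]

def active_fraction_after(years, access):
    y0 = [0.85, 0.10, 0.05, 0.0]          # initial fractions of the population
    sol = solve_ivp(tb_rhs, (0, years), y0, rtol=1e-8, atol=1e-10,
                    args=(8.0, 0.1, 1.0, access, 0.02))
    return sol.y[2, -1]                   # fraction with active TB at the end

# Crude check of how the 15-year burden responds to the access parameter.
for access in (0.4, 0.6, 0.8, 1.0):
    print(f"access={access:.1f}  active TB fraction after 15 years: "
          f"{active_fraction_after(15, access):.4f}")

# Central finite-difference sensitivity around access = 0.6.
h = 0.01
sens = (active_fraction_after(15, 0.6 + h) - active_fraction_after(15, 0.6 - h)) / (2 * h)
print("d(active fraction)/d(access) at access=0.6:", sens)
```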

Relevance:

80.00%

Publisher:

Abstract:

INTRODUCTION: In common with much of the developed world, Scotland has a severe and well-established problem with overweight and obesity in childhood, with recent figures demonstrating that 31% of Scottish children aged 2-15 years were overweight (including obese) in 2014. This problem is more pronounced in socioeconomically disadvantaged groups and in older children across all economic groups (Scottish Health Survey, 2014). Children who are overweight or obese are at increased risk of a number of adverse health outcomes in the short term and throughout their life course (Lobstein and Jackson-Leach, 2006). The Scottish Government tasked all Scottish Health Boards with developing and delivering child healthy weight interventions for clinically overweight or obese children in an attempt to address this health problem. It is therefore imperative to deliver high-quality, affordable, appropriately targeted interventions which can make a sustained impact on children's lifestyles, setting them up for life as healthy-weight adults. This research aimed to inform the design, readiness for application and Health Board suitability of an effective primary school-based curricular child healthy weight intervention. METHODS: The process involved in conceptualising a child healthy weight intervention, developing the intervention, planning for implementation and subsequent evaluation was guided by the PRECEDE-PROCEED Model (Green and Kreuter, 2005) and the Intervention Mapping protocol (Lloyd et al. 2011). RESULTS: The outputs from each stage of the development process were used to formulate a conceptual model for a child healthy weight intervention and then to develop plans for delivery and evaluation. DISCUSSION: The Fit for School conceptual model developed through this process has the theoretical potential to modify the energy-balance-related behaviours associated with unhealthy weight gain in childhood. It also has the potential to be delivered at Health Board scale within current organisational restrictions.

Relevance:

80.00%

Publisher:

Abstract:

Image (video) retrieval is the problem of retrieving images (videos) similar to a given query. Images (videos) are represented in an input (feature) space, and similar images (videos) are obtained by finding nearest neighbours in that representation space. Numerous input representations, in both real-valued and binary spaces, have been proposed for conducting faster retrieval. In this thesis, we present techniques that obtain improved input representations for retrieval in both supervised and unsupervised settings for images and videos.

Supervised retrieval is the well-known problem of retrieving images of the same class as the query. In the first part, we address the practical aspects of achieving faster retrieval with binary codes as input representations for the supervised setting, where binary codes are used as addresses into hash tables. In practice, using binary codes as addresses does not guarantee fast retrieval, because similar images are not mapped to the same binary code (address). We address this problem by presenting an efficient supervised hashing (binary encoding) method that aims to explicitly map all images of the same class to, ideally, a unique binary code. We refer to the binary codes of the images as 'Semantic Binary Codes' and to the unique code shared by all images of a class as the 'Class Binary Code'. We also propose a new class-based Hamming metric that dramatically reduces retrieval times for larger databases, where Hamming distances are computed only to the class binary codes. We further propose a deep semantic binary code model, obtained by replacing the output layer of a popular convolutional neural network (AlexNet) with the class binary codes, and show that the hashing functions learned in this way outperform the state of the art while providing fast retrieval times.

In the second part, we address the problem of supervised retrieval by taking into account the relationships between classes. For a given query image, we want to retrieve images that preserve the relative order, i.e. we want to retrieve all same-class images first, then images of related classes, and only then images of different classes. We learn such relationship-aware binary codes by minimizing the discrepancy between the inner products of the binary codes and the similarities between the classes. We calculate the similarity between classes using output embedding vectors, which are vector representations of classes. Our method deviates from other supervised binary encoding schemes in that it is the first to use output embeddings for learning hashing functions. We also introduce new performance metrics that take the related-class retrieval results into account and show significant gains over the state of the art.

High-dimensional descriptors such as Fisher Vectors or the Vector of Locally Aggregated Descriptors have been shown to improve the performance of many computer vision applications, including retrieval. In the third part, we discuss an unsupervised technique for compressing high-dimensional vectors into high-dimensional binary codes in order to reduce storage complexity. In this approach, we deviate from the traditional hyperplane hashing functions and instead learn hyperspherical hashing functions. The proposed method overcomes the computational challenges of directly applying the spherical hashing algorithm, which is intractable for compressing high-dimensional vectors. A practical hierarchical model that compresses such high-dimensional vectors with a divide-and-conquer strategy, using the Random Select and Adjust (RSA) procedure, is presented. We show that our proposed high-dimensional binary codes outperform the binary codes obtained using traditional hyperplane methods for higher compression ratios.

In the last part of the thesis, we propose a retrieval-based solution to the zero-shot event classification problem, a setting in which no training videos are available for the event. To do this, we learn a generic set of concept detectors and represent both videos and query events in the concept space. We then compute the similarity between the query event and each video in the concept space, and videos similar to the query event are classified as belonging to the event. We show that we significantly boost the performance using concept features from other modalities.
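
The class-based Hamming retrieval idea from the first part can be pictured with a small sketch: each class has one class binary code, a query code is compared by Hamming distance only to those few class codes, and items are returned grouped by the nearest class. The learned hashing functions that would actually produce the codes (deep or otherwise) are not shown; the codes below are synthetic.

```python
import numpy as np

# Minimal sketch of retrieval with class binary codes: every database item has a
# binary code, each class has one 'class binary code', and a query is answered by
# ranking classes by Hamming distance to the query code and returning the items
# stored under the nearest classes.

rng = np.random.default_rng(3)
n_bits, n_classes, items_per_class = 64, 10, 100

class_codes = rng.integers(0, 2, size=(n_classes, n_bits), dtype=np.uint8)

# Database codes: each item's code is its class code with a few flipped bits.
labels = np.repeat(np.arange(n_classes), items_per_class)
noise = rng.random((labels.size, n_bits)) < 0.05
db_codes = class_codes[labels] ^ noise.astype(np.uint8)

def hamming(a, b):
    """Hamming distance between one code `a` and a batch of codes `b`."""
    return np.count_nonzero(a ^ b, axis=1)

# Query: a slightly corrupted code from class 7.
query = class_codes[7] ^ (rng.random(n_bits) < 0.05).astype(np.uint8)

# Class-based retrieval: only n_classes distances are computed, instead of one
# per database item; items are then returned grouped by nearest class.
class_dist = hamming(query, class_codes)
ranked_classes = np.argsort(class_dist)
print("classes ranked by Hamming distance to the query:", ranked_classes[:3])

retrieved = np.flatnonzero(labels == ranked_classes[0])
print("retrieved", retrieved.size, "items, all from class", int(ranked_classes[0]))
```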

Relevance:

80.00%

Publisher:

Abstract:

Wydział Studiów Edukacyjnych: Zakład Pedeutologii

Relevance:

80.00%

Publisher:

Abstract:

Spiking neural networks - networks that encode information in the timing of spikes - are emerging from cognitive science as a new approach within the artificial neural networks paradigm. One of these new models is the pulsed neural network with radial basis functions, a network able to store information in the axonal propagation delays of its neurons. Learning algorithms have been proposed for this model that seek to map input pulses onto output pulses. Recently, a new method was proposed to encode constant data into a temporal sequence of spikes, stimulating deeper studies aimed at establishing the abilities and frontiers of this new approach. However, a well-known problem of this kind of network is the high number of free parameters - more than 15 - that must be properly configured or tuned in order to allow network convergence. This work presents, for the first time, a new learning function for training this network that allows the automatic configuration of one of the key network parameters: the synaptic weight decreasing factor.
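
One widely used way of turning a constant real-valued input into a temporal sequence of spikes for networks of this kind is population coding with Gaussian receptive fields; the sketch below illustrates that general scheme only, not necessarily the specific encoding method or the new learning function referred to in the abstract.

```python
import numpy as np

# Population coding with Gaussian receptive fields: a constant real value is
# turned into a set of spike times, one per encoding neuron.  Neurons whose
# receptive-field centre is close to the value fire early; far-away neurons
# fire late or stay silent.  Parameter values are illustrative.

def encode(value, n_neurons=8, value_range=(0.0, 1.0), t_max=10.0, cutoff=9.0):
    lo, hi = value_range
    centres = np.linspace(lo, hi, n_neurons)
    width = (hi - lo) / (n_neurons - 1) / 1.5            # receptive-field width
    activation = np.exp(-0.5 * ((value - centres) / width) ** 2)   # in (0, 1]
    spike_times = t_max * (1.0 - activation)             # strong activation -> early spike
    spike_times[spike_times > cutoff] = np.inf           # weakly driven neurons stay silent
    return centres, spike_times

centres, times = encode(0.37)
for c, t in zip(centres, times):
    label = "silent" if np.isinf(t) else f"spike at t = {t:4.2f} ms"
    print(f"neuron centred at {c:.2f}: {label}")
```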