999 results for Iterative Optimization


Relevance: 70.00%

Abstract:

In the present paper, based on the theory of dynamic boundary integral equations, an optimization method for crack identification is set up in the Laplace frequency space. The direct problem is solved with the authors' new type of boundary integral equations, and a method for choosing the highly sensitive frequency region is proposed. The results show that the proposed method succeeds in exploiting the information carried by boundary elastic waves, overcomes the ill-posedness of the solution, and improves the identification precision.

Relevance: 70.00%

Abstract:

The optimization of the organic modifier concentration in micellar electrokinetic capillary chromatography (MECC) has been achieved by a uniform design and iterative optimization method originally developed for optimizing the mobile phase composition in high performance liquid chromatography. In the proposed method, the uniform design technique is applied to design the starting experiments, which reduces the number of experiments compared with traditional simultaneous methods such as the orthogonal design. The hierarchical chromatographic response function has been modified to evaluate the separation quality of a chromatogram in MECC. An iterative procedure is adopted to search for the optimal concentration of organic modifiers, improving both the accuracy of the predicted retention and the quality of the chromatogram. The validity of the optimization method has been proved by the separation of 31 aromatic compounds in MECC. (C) 2000 John Wiley & Sons, Inc.

Relevance: 70.00%

Abstract:

The Monte Carlo (MC) method can accurately compute the dose produced by medical linear accelerators. However, these calculations require a reliable description of the electron and/or photon beams delivering the dose, the phase space (PHSP), which is not usually available. A method is presented to derive a phase-space model from reference measurements that does not rely heavily on a detailed model of the accelerator head. The iterative optimization process extracts the characteristics of the particle beams that best explain the reference dose measurements in water and air, given a set of constraints.

Relevance: 70.00%

Abstract:

An inverse optimization strategy was developed to determine single-crystal properties from experimental results on the mechanical behavior of polycrystals. The polycrystal behavior was obtained by finite element simulation of a representative volume element of the microstructure, in which the dominant slip and twinning systems were included in the constitutive equation of each grain. The inverse problem was solved by means of the Levenberg-Marquardt method, which provided an excellent fit to the experimental results. The iterative optimization process followed a hierarchical scheme in which simple representative volume elements were used initially, followed by more realistic ones to reach the final optimum solution, leading to important reductions in computer time. The new strategy was applied to identify the initial and saturation critical resolved shear stresses and the hardening modulus of the active slip systems and of extension twinning in a textured AZ31 Mg alloy. The results were in general agreement with the data in the literature but also showed some differences. These differences were partially explained by the higher accuracy of the new optimization strategy, but it was also shown that the number of independent experimental stress-strain curves used as input is critical to reaching an accurate solution of the inverse optimization problem. It was concluded that at least three independent stress-strain curves are necessary to determine the single-crystal behavior from polycrystal tests in the case of highly textured Mg alloys.
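The Levenberg-Marquardt fitting step at the core of this kind of inverse strategy can be sketched in a few lines. The Voce-type hardening law, parameter values and synthetic data below are illustrative assumptions, not the paper's actual constitutive model or alloy data:

```python
import numpy as np

def voce(params, eps):
    # Hypothetical Voce-type hardening law: sigma = s0 + (s_sat - s0)*(1 - exp(-h*eps))
    s0, s_sat, h = params
    return s0 + (s_sat - s0) * (1.0 - np.exp(-h * eps))

def levenberg_marquardt(f, p0, x, y, lam=1e-3, iters=50):
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        r = y - f(p, x)                        # residuals at the current parameters
        # Finite-difference Jacobian of the model w.r.t. the parameters
        J = np.empty((x.size, p.size))
        for j in range(p.size):
            dp = np.zeros_like(p)
            dp[j] = 1e-6 * max(1.0, abs(p[j]))
            J[:, j] = (f(p + dp, x) - f(p, x)) / dp[j]
        # Damped normal equations: (J^T J + lam * diag(J^T J)) delta = J^T r
        A = J.T @ J
        delta = np.linalg.solve(A + lam * np.diag(np.diag(A)), J.T @ r)
        p_new = p + delta
        if np.sum((y - f(p_new, x)) ** 2) < np.sum(r ** 2):
            p, lam = p_new, lam * 0.5          # accept step, relax damping
        else:
            lam *= 2.0                         # reject step, increase damping
    return p

# Synthetic "experimental" stress-strain curve and fit from a rough initial guess
eps = np.linspace(0.0, 0.2, 50)
sigma = voce(np.array([50.0, 180.0, 30.0]), eps)
fit = levenberg_marquardt(voce, [30.0, 100.0, 10.0], eps, sigma)
```

The damping parameter interpolates between gradient descent (large `lam`) and Gauss-Newton (small `lam`), which is what makes the method robust far from the optimum yet fast near it.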

Relevance: 70.00%

Abstract:

In this paper, numerical simulations are used in an attempt to find optimal source profiles for high-frequency radiofrequency (RF) volume coils. Biologically loaded, shielded/unshielded circular and elliptical birdcage coils operating at 170 MHz, 300 MHz and 470 MHz are modelled using the FDTD method for both 2D and 3D cases. Taking advantage of the fact that some aspects of the electromagnetic system are linear, two approaches are proposed for determining the drives of the individual elements in the RF resonator. The first method is an iterative optimization technique whose kernel evaluates the RF fields inside an imaging plane of a human head model using pre-characterized sensitivity profiles of the individual rungs of the resonator; the second is a regularization-based technique, in which a sensitivity matrix is explicitly constructed and a regularization procedure is employed to solve the ill-posed problem. Test simulations show that both methods can improve the B1-field homogeneity in both focused and non-focused scenarios. While the regularization-based method is more efficient, the first method is more flexible, as it can take into account other issues such as controlling SAR or reshaping the resonator structures. It is hoped that these schemes and their extensions will be useful for determining multi-element RF drives in a variety of applications.
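The regularization-based route can be illustrated with a toy Tikhonov solve. The matrix `S` below is a synthetic ill-conditioned stand-in for the pre-characterized rung sensitivity profiles, and the uniform target vector plays the role of a homogeneous B1 field; none of the sizes or values come from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical sensitivity matrix: field at M target points per rung drive (K rungs).
# The diagonal scaling makes the columns decay rapidly, i.e. the problem ill-conditioned.
M, K = 200, 16
S = rng.normal(size=(M, K)) @ np.diag(1.0 / (1.0 + np.arange(K)) ** 2)
b = np.ones(M)  # desired homogeneous field over the imaging plane

def tikhonov(S, b, lam):
    # Regularized least squares: minimize ||S d - b||^2 + lam * ||d||^2
    n = S.shape[1]
    return np.linalg.solve(S.T @ S + lam * np.eye(n), S.T @ b)

d_naive = np.linalg.lstsq(S, b, rcond=None)[0]  # unregularized least squares
d_reg = tikhonov(S, b, 1e-2)                    # regularized drive vector
```

The regularized drives trade a slightly larger field residual for a much smaller (hence physically realizable) drive vector, which is the standard cure for the ill-posedness mentioned in the abstract.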

Relevance: 60.00%

Abstract:

In the present paper, crack identification problems are investigated. Such problems belong to the class of inverse problems and are usually ill-posed. The paper has two parts: (1) based on the dynamic BIEM and an optimization method, and using measured dynamic information on the outer boundary, the identification of a crack in a finite domain is investigated, and a method for choosing the highly sensitive frequency region is proposed to improve the precision; (2) based on the 3-D static BIEM and hypersingular integral equation theory, penny-shaped crack identification in a finite body is reduced to an optimization problem. The investigation gives some initial understanding of 3-D inverse problems.

Relevance: 60.00%

Abstract:

A parallel method for the dynamic partitioning of unstructured meshes is described. The method introduces a new iterative optimization technique, known as relative gain optimization, which both balances the workload and attempts to minimize the interprocessor communications overhead. Experiments on a series of adaptively refined meshes indicate that the algorithm provides partitions of a quality equivalent or superior to static partitioners (which do not reuse the existing partition) and does so much more rapidly. Perhaps more importantly, the algorithm incurs only a small fraction of the data migration required by the static partitioners.
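A much-simplified serial sketch of gain-based boundary refinement in the spirit of the relative gain idea (not the authors' parallel algorithm) might look like this; the toy graph, the initial partition and the balance rule are all illustrative:

```python
# Toy graph as adjacency lists; part[v] in {0, 1} assigns vertex v to a processor.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
part = {0: 0, 1: 0, 2: 1, 3: 0, 4: 1, 5: 1}  # deliberately poor initial partition

def gain(v):
    # Cut edges removed minus cut edges created if v switches side:
    # positive gain means moving v reduces the edge cut.
    ext = sum(1 for u in adj[v] if part[u] != part[v])
    return ext - (len(adj[v]) - ext)

def refine(max_passes=5):
    for _ in range(max_passes):
        moved = False
        for v in sorted(adj, key=gain, reverse=True):  # highest-gain vertices first
            sizes = [sum(1 for p in part.values() if p == s) for s in (0, 1)]
            # Move only if the cut shrinks and v's side is at least as large as the other,
            # so the workload stays roughly balanced.
            if gain(v) > 0 and sizes[part[v]] > sizes[1 - part[v]] - 1:
                part[v] = 1 - part[v]
                moved = True
        if not moved:
            break

def cut_size():
    # Number of edges whose endpoints lie in different parts
    return sum(1 for v in adj for u in adj[v] if u > v and part[u] != part[v])

before = cut_size()
refine()
after = cut_size()  # the refinement repairs the cut while keeping a 3/3 balance
```

Because the refinement starts from the existing assignment, only vertices that actually move need migrating, which is the intuition behind the low data-migration figures reported above.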

Relevance: 60.00%

Abstract:

Prediction of patient outcomes is critical to plan resources in a hospital emergency department. We present a method that exploits longitudinal data from Electronic Medical Records (EMR) while modelling multiple patient outcomes jointly. We divide the EMR data into segments, where each segment is a task, and all tasks are associated with multiple patient outcomes over 3-, 6- and 12-month periods. We propose a model that learns a prediction function for each task-label pair, interacting through two subspaces: the first subspace imposes sharing across all tasks for a given label, while the second captures the task-specific variations and is shared across all labels for a given task. The proposed model is formulated as an iterative optimization problem and solved using a scalable and efficient block coordinate descent (BCD) method. We apply the proposed model to two hospital cohorts, Cancer and Acute Myocardial Infarction (AMI) patients, collected over a two-year period from a large hospital emergency department. We show that the predictive performance of our proposed model is significantly better than that of several state-of-the-art multi-task and multi-label learning methods.
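Block coordinate descent of the kind used here can be sketched on a toy two-block least-squares problem; the synthetic data, block sizes and plain quadratic loss are assumptions for illustration, not the paper's actual multi-task model:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic problem whose parameters split into two blocks, u and v
X1, X2 = rng.normal(size=(100, 3)), rng.normal(size=(100, 4))
u_true, v_true = rng.normal(size=3), rng.normal(size=4)
y = X1 @ u_true + X2 @ v_true

def bcd(X1, X2, y, iters=100):
    u, v = np.zeros(X1.shape[1]), np.zeros(X2.shape[1])
    for _ in range(iters):
        # Block 1: exact least-squares update for u with v held fixed
        u = np.linalg.lstsq(X1, y - X2 @ v, rcond=None)[0]
        # Block 2: exact least-squares update for v with u held fixed
        v = np.linalg.lstsq(X2, y - X1 @ u, rcond=None)[0]
    return u, v

u, v = bcd(X1, X2, y)
```

Each block update is a small, closed-form subproblem, which is why BCD scales well: the full joint solve is replaced by a cycle of cheap per-block solves that converges for problems like this jointly convex one.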

Relevance: 60.00%

Abstract:

This thesis deals with the design of advanced OFDM systems; both waveform and receiver design are treated. The main scope of the thesis is to study, create and propose ideas and novel design solutions able to cope with the weaknesses and crucial aspects of modern OFDM systems. Starting from the transmitter side, the problem of low resilience to non-linear distortion is addressed, and a novel technique that considerably reduces the Peak-to-Average Power Ratio (PAPR), yielding a quasi-constant signal envelope in the time domain (PAPR close to 1 dB), is proposed. The proposed technique, named Rotation Invariant Subcarrier Mapping (RISM), is a novel scheme for subcarrier data mapping in which the symbols belonging to the modulation alphabet are not anchored but retain some degrees of freedom. In other words, a bit tuple is not mapped onto a single point; rather, it is mapped onto a geometrical locus which is totally or partially rotation invariant. The final positions of the transmitted complex symbols are chosen by an iterative optimization process that minimizes the PAPR of the resulting OFDM symbol. Numerical results confirm that RISM makes OFDM usable even in severely non-linear channels. Another well-known problem tackled here is the vulnerability to synchronization errors: in an OFDM system, accurate recovery of the carrier frequency and symbol timing is crucial for proper demodulation of the received packets. In general, timing and frequency synchronization is performed in two separate phases, called PRE-FFT and POST-FFT synchronization. For the PRE-FFT phase, a novel joint symbol timing and carrier frequency synchronization algorithm is presented. The proposed algorithm is characterized by very low hardware complexity and, at the same time, guarantees very good performance in both AWGN and multipath channels.
For the POST-FFT phase, a novel approach to both pilot structure and receiver design is presented. In particular, a novel pilot pattern is introduced to minimize the occurrence of overlaps between two shifted replicas of the pattern. This makes it possible to replace conventional pilots with nulls in the frequency domain, introducing the so-called Silent Pilots. As a result, the optimal receiver turns out to be very robust against severe Rayleigh multipath fading and is characterized by low complexity. The performance of this approach has been evaluated analytically and numerically; compared with state-of-the-art alternatives, in both AWGN and multipath fading channels, considerable performance improvements are obtained. The crucial problem of channel estimation has been thoroughly investigated, with particular emphasis on the decimation of the Channel Impulse Response (CIR) through the selection of the Most Significant Samples (MSSs). In this context our contribution is twofold: on the theoretical side, we derive lower bounds on the estimation mean-square error (MSE) performance of any MSS selection strategy; on the receiver design side, we propose novel MSS selection strategies which are shown to approach these MSE lower bounds and to outperform the state-of-the-art alternatives. Finally, the possibility of using Single Carrier Frequency Division Multiple Access (SC-FDMA) in the broadband satellite return channel is assessed. Notably, SC-FDMA is able to improve the physical-layer spectral efficiency with respect to the single-carrier systems used so far in the Return Channel Satellite (RCS) standards. However, it requires strict synchronization and is also sensitive to the phase noise of local radio-frequency oscillators. For this reason, an effective pilot tone arrangement within the SC-FDMA frame and a novel Joint Multi-User (JMU) estimation method for SC-FDMA are proposed.
As shown by the numerical results, the proposed scheme manages to satisfy strict synchronization requirements and to guarantee proper demodulation of the received signal.
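The PAPR metric that schemes like RISM minimize can be computed in a few lines of numpy. The subcarrier count and plain QPSK mapping below are illustrative; this is a baseline OFDM modulator, not the RISM optimization itself:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64  # number of subcarriers (illustrative value, not from the thesis)

# Random bit tuples mapped onto QPSK constellation points (unit average power)
bits = rng.integers(0, 2, size=(N, 2))
symbols = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

# OFDM modulation: the IFFT turns frequency-domain symbols into a time-domain signal
x = np.fft.ifft(symbols) * np.sqrt(N)  # sqrt(N) scaling makes the transform unitary

def papr_db(x):
    # Peak-to-Average Power Ratio of a complex baseband signal, in dB
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

print(papr_db(x))
```

A constant-envelope signal has a PAPR of exactly 0 dB, while a random-QPSK OFDM symbol typically lands several dB higher; shrinking that gap toward the ~1 dB figure quoted above is precisely what the iterative optimization over symbol positions aims to do.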

Relevance: 60.00%

Abstract:

Reconstruction of shape and intensity from 2D X-ray images has drawn more and more attention. Previously introduced work suffers from long computing times, due to its iterative optimization characteristics and the requirement of generating digitally reconstructed radiographs (DRRs) within each iteration. In this paper, we propose a novel method which uses a patient-specific 3D surface model reconstructed from 2D X-ray images as a surrogate to obtain a patient-specific volumetric intensity reconstruction via partial least squares regression. No DRR generation is needed. The method was validated on 20 cadaveric proximal femurs in a leave-one-out study. Qualitative and quantitative results demonstrated the efficacy of the present method. Compared to existing work, the present method has the advantage of much shorter computing time and can be applied to both DXA images and conventional X-ray images, so it may hold the potential to be applied to routine clinical tasks such as total hip arthroplasty (THA).

Relevance: 60.00%

Abstract:

This thesis deals with the problem of efficiently tracking 3D objects in sequences of images. We tackle the efficient 3D tracking problem by using direct image registration, posed as an iterative optimization procedure that minimizes a brightness error norm. We review the most popular iterative methods for image registration in the literature, turning our attention to those algorithms that use efficient optimization techniques. Two forms of efficient registration algorithm are investigated. The first type comprises the additive registration algorithms: these algorithms incrementally compute the motion parameters by linearly approximating the brightness error function. We centre our attention on Hager and Belhumeur's factorization-based algorithm for image registration. We propose a fundamental requirement that factorization-based algorithms must satisfy to guarantee good convergence, and introduce a systematic procedure that automatically computes the factorization. Finally, we also introduce two warp functions, satisfying the requirement, that register rigid and nonrigid 3D targets. The second type comprises the compositional registration algorithms, in which the brightness error function is written using function composition. We study the current approaches to compositional image alignment, and we emphasize the importance of the Inverse Compositional method, which is known to be the most efficient image registration algorithm. We introduce a new algorithm, Efficient Forward Compositional image registration: this algorithm avoids the need to invert the warping function and provides a new interpretation of the working mechanisms of inverse compositional alignment. Using this information, we propose two fundamental requirements that guarantee the convergence of compositional image registration methods. Finally, we support our claims with extensive experimental testing on synthetic and real-world data.
We propose a distinction between image registration and tracking when using efficient algorithms, and show that, depending on whether the fundamental requirements hold, some efficient algorithms are eligible for image registration but not for tracking.

Relevance: 60.00%

Abstract:

This paper presents a registration method for images with global illumination variations. The method is based on a joint (geometric and photometric) iterative optimization of the L1 norm of the intensity error. Two strategies for directly finding the appropriate intensity transformation within each iteration are compared: histogram specification, and the solution obtained by analyzing the necessary optimality conditions. Such strategies reduce the search space of the joint optimization to that of the geometric transformation between the images.
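The histogram-specification strategy mentioned above can be sketched as a rank-matching step. A real registration loop would interleave this photometric step with the geometric update; the synthetic "images" below, and the particular global illumination model, are illustrative assumptions:

```python
import numpy as np

def histogram_specification(src, ref):
    # Map each source intensity to the reference intensity of equal rank,
    # so that src acquires (approximately) the histogram of ref.
    order = np.argsort(src.ravel())
    out = np.empty_like(src.ravel())
    out[order] = np.sort(ref.ravel())
    return out.reshape(src.shape)

rng = np.random.default_rng(2)
ref = rng.uniform(0.0, 255.0, size=(32, 32))   # reference image
src = 0.5 * ref + 40.0                         # same scene under a global illumination change
matched = histogram_specification(src, ref)    # photometrically corrected source
```

Because any monotone global intensity change preserves pixel ranks, rank matching recovers the reference intensities exactly in this toy case; with a residual geometric misalignment the correction is only approximate, which is why it is applied inside each iteration.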

Relevance: 60.00%

Abstract:

This chapter describes a parallel optimization technique that incorporates a distributed load-balancing algorithm and provides an extremely fast solution to the problem of load-balancing adaptive unstructured meshes. Moreover, a parallel graph contraction technique can be employed to enhance the partition quality, and the resulting strategy outperforms or matches results from existing state-of-the-art static mesh partitioning algorithms; the strategy can also be applied to static partitioning problems. Dynamic procedures have been found to be much faster than static techniques, to provide partitions of similar or higher quality and, in comparison, to involve the migration of only a fraction of the data. The method employs a new iterative optimization technique that balances the workload and attempts to minimize the interprocessor communications overhead. Experiments on a series of adaptively refined meshes indicate that the algorithm provides partitions of a quality equivalent or superior to static partitioners (which do not reuse the existing partition), and much more quickly. The dynamic evolution of load has three major influences on possible partitioning techniques: cost, reuse, and parallelism. The unstructured mesh may be modified every few time-steps, so the load-balancing must have a low cost relative to that of the solution algorithm in between remeshings.

Relevance: 60.00%

Abstract:

Health analysis often involves prediction of multiple outcomes of mixed type. Existing work is restricted to either a limited number of outcomes or specific outcome types. We propose a framework for mixed-type multi-outcome prediction based on a cumulative loss function composed of a specific loss function for each outcome type: for example, least squares (continuous outcome), hinge (binary outcome), Poisson (count outcome) and exponential (non-negative outcome). To model these outcomes jointly, we impose commonality across the prediction parameters through a common matrix-normal prior. The framework is formulated as an iterative optimization problem and solved using an efficient block coordinate descent (BCD) method. We empirically demonstrate both scalability and convergence. We apply the proposed model to a synthetic dataset and then to two real-world cohorts: a Cancer cohort and an Acute Myocardial Infarction cohort collected over a two-year period. We predict multiple emergency-related outcomes, for example future emergency presentations (binary), emergency admissions (count), emergency length-of-stay days (non-negative) and emergency time-to-next-admission days (non-negative). We show that the predictive performance of the proposed model is better than several state-of-the-art baselines.