26 results for Maximizing


Relevance:

10.00%

Publisher:

Abstract:

The application of automatic segmentation methods to lesion detection is desirable. However, such methods are limited by intensity similarities between lesioned and healthy brain tissue. Using multi-spectral magnetic resonance imaging (MRI) modalities may overcome this problem, but it is not always practicable. In this article, a lesion detection approach requiring a single MRI modality is presented, improving on a recently published method. The new method assumes that low similarity should be found in lesion regions when the likeness between an intensity-based fuzzy segmentation and location-based tissue probabilities is measured. The use of a normalized similarity measurement enables the method to fine-tune the threshold for lesion detection, thus maximizing the possibility of reaching high detection accuracy. Importantly, an extra cleaning step removes enlarged ventricles from the detected lesions. A performance investigation using simulated lesions demonstrated that not only were the majority of lesions well detected but normal tissues were also identified effectively. Tests on images acquired from stroke patients further confirmed the strength of the method in lesion detection. Compared with the previous version, the current approach showed higher sensitivity in detecting small lesions and produced fewer false positives around the ventricles and the edge of the brain.
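A minimal sketch of the core detection idea, assuming per-tissue membership arrays as inputs (the function name, the min-rule similarity, and the global min-max normalization are illustrative choices, not the paper's exact formulation):

```python
import numpy as np

def lesion_candidates(fuzzy_seg, tissue_prob, threshold=0.5):
    """Flag voxels where an intensity-based fuzzy segmentation and a
    location-based tissue probability map disagree.

    fuzzy_seg, tissue_prob: arrays of shape (..., n_tissues) whose last
    axis holds per-tissue memberships/probabilities summing to ~1.
    """
    # Per-voxel similarity: overlap of the two membership vectors
    # (minimum rule), summed over tissue classes.
    similarity = np.minimum(fuzzy_seg, tissue_prob).sum(axis=-1)
    # Normalize to [0, 1] so a single threshold can be tuned globally.
    lo, hi = similarity.min(), similarity.max()
    norm_sim = (similarity - lo) / (hi - lo + 1e-12)
    # Low similarity means the voxel matches no healthy-tissue class
    # well at that location, i.e. it is a lesion candidate.
    return norm_sim < threshold
```

A ventricle-cleaning step would then mask candidates that overlap an enlarged-ventricle segmentation, as the abstract describes.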

Relevance:

10.00%

Publisher:

Abstract:

Our differences are three. The first arises from the belief that "... a nonzero value for the optimally chosen policy instrument implies that the instrument is efficient for redistribution" (Alston, Smith, and Vercammen, p. 543, paragraph 3). Consider the two equations: (1) τ* = f(β) and (2) π* = -f(β) + h*(α, β), representing the solution to the problem of maximizing weighted Marshallian surplus using, simultaneously, a per-unit border intervention, τ, and a per-unit domestic intervention, π. In the solution, parameter α denotes the weight applied to producer surplus; parameter β denotes the weight applied to government revenues; consumer surplus is implicitly weighted one; and the country in question is small in the sense that it is unable to affect world price by any of its domestic adjustments (see the Appendix). Details of the forms of the functions f(β) and h*(α, β) are easily derived, but what matters in the context of Alston, Smith, and Vercammen's Comment is: redistributive preferences that favor producers are consistent with higher values of α, and whereas the optimal domestic intervention, π*, has both "alpha and beta effects," the optimal border intervention, τ*, has only a "beta effect"; it does not have a redistributional role. Garth Holloway is reader in agricultural economics and statistics, Department of Agricultural and Food Economics, School of Agriculture, Policy, and Development, University of Reading. The author is very grateful to Xavier Irz, Bhavani Shankar, Chittur Srinivasan, Colin Thirtle, and Richard Tiffin for their comments and their wisdom; and to Mario Mazzochi, Marinos Tsigas, and Cal Turvey for their scholarship, including help in tracking down a fairly complete collection of the papers that cite Alston and Hurd. They are not responsible for any errors or omissions.
Note, in equation (1), that the border intervention is positive whenever a distortion exists, because δ > 0 implies β = 1 + δ > 1 and, thus, f(β) > 0 (see the Appendix). Using Alston, Smith, and Vercammen's definition, the instrument is now "efficient," and therefore has a redistributive role. But now suppose that the distortion is removed, so that β = 1 + δ = 1, δ = 0, and consequently the border intervention is zero. According to Alston, Smith, and Vercammen, the instrument is now "inefficient" and has no redistributive role. The reader will note that this thought experiment has said nothing about supporting farm incomes, and so has nothing whatsoever to do with efficient redistribution. Of course, the definition is false. It follows that a domestic distortion arising from the "excess-burden argument" (β = 1 + δ, δ > 0) does not make an export subsidy "efficient." The export subsidy, having only a "beta effect," does not have a redistributional role. The second disagreement emerges from the comment that Holloway "... uses an idiosyncratic definition of the relevant objective function of the government" (Alston, Smith, and Vercammen, p. 543, paragraph 2). The objective function that generates equations (1) and (2) (see the Appendix) is the same as the objective function used by Gardner (1995) when he first questioned Alston, Carter, and Smith's claim that a "domestic distortion can make a border intervention efficient in transferring surplus from consumers and taxpayers to farmers." The objective function used by Gardner (1995) is the same objective function used in the contributions that precede it and thus defines the literature on the debate about border-versus-domestic intervention (Streeten; Yeh; Paarlberg 1984, 1985; Orden; Gardner 1985). The objective function in the latter literature is the same as the one implied in another literature that originates from Wallace and includes most notably Gardner (1983), but also Alston and Hurd.

Amer. J. Agr. Econ. 86(2) (May 2004): 549-552. Copyright 2004 American Agricultural Economics Association.

The objective function in Holloway is this same objective function: it is, of course, Marshallian surplus.1 The third disagreement concerns scholarship. The Comment does not seem to be cognizant of several important papers, especially Bhagwati and Ramaswami, and Bhagwati, both of which precede Corden (1974, 1997); but also Lipsey and Lancaster, and Moschini and Sckokai; one important aspect of Alston and Hurd; and one extremely important result in Holloway. This oversight has some unfortunate repercussions. First, it misdirects readers to the wrong intellectual origins. Second, it misleads about the appropriateness of some welfare calculations. Third, it prevents Alston, Smith, and Vercammen from linking a finding in Holloway (pp. 242-43) with an old theorem (Lipsey and Lancaster) that settles the controversy (Alston, Carter, and Smith 1993, 1995; Gardner 1995; and, presently, Alston, Smith, and Vercammen) about the efficiency of border intervention in the presence of domestic distortions.
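Restated in conventional notation (the symbols below are reconstructed from the garbled typesetting: τ is the border instrument, π the domestic instrument, α the producer-surplus weight, β the revenue weight, δ the distortion), the first argument runs:

```latex
% Optimal interventions (reconstruction of equations (1)-(2)):
\tau^{*} = f(\beta), \qquad \pi^{*} = -f(\beta) + h^{*}(\alpha, \beta)
% Excess-burden distortion: \beta = 1 + \delta,\ \delta > 0
% \Rightarrow \beta > 1 \Rightarrow f(\beta) > 0 \Rightarrow \tau^{*} > 0,
% yet the redistributive weight \alpha never enters \tau^{*}:
% the border instrument has a "beta effect" only.
```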

Relevance:

10.00%

Publisher:

Abstract:

A two-stage linear-in-the-parameter model construction algorithm is proposed, aimed at noisy two-class classification problems. The purpose of the first stage is to produce a prefiltered signal that is used as the desired output for the second stage, which constructs a sparse linear-in-the-parameter classifier. The prefiltering stage is a two-level process aimed at maximizing a model's generalization capability, in which a new elastic-net model identification algorithm using singular value decomposition is employed at the lower level, and two regularization parameters are then optimized at the upper level using a particle-swarm-optimization algorithm that minimizes the leave-one-out (LOO) misclassification rate. It is shown that the LOO misclassification rate based on the resultant prefiltered signal can be analytically computed without splitting the data set, and the associated computational cost is minimal due to orthogonality. The second stage of sparse classifier construction is based on orthogonal forward regression with the D-optimality algorithm. Extensive simulations on noisy data sets illustrate the competitiveness of the approach for such classification problems.
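The analytic LOO computation the abstract relies on can be illustrated for a plain regularized least-squares classifier (a sketch only: the elastic-net identification and PSO tuning are omitted, and the ridge penalty `lam` and the {-1, +1} label coding are assumptions here):

```python
import numpy as np

def loo_misclassification_rate(X, y, lam):
    """Analytic leave-one-out misclassification rate for a regularized
    linear-in-the-parameters classifier with labels in {-1, +1}.
    Uses the e_i / (1 - H_ii) identity, so no data splitting is needed.
    """
    n, p = X.shape
    # Hat (smoother) matrix H = X (X'X + lam I)^{-1} X'
    H = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)
    resid = y - H @ y
    loo_resid = resid / (1.0 - np.diag(H))
    loo_pred = y - loo_resid           # prediction with sample i held out
    return np.mean(y * loo_pred <= 0)  # sign disagreement = misclassified
```

An upper-level optimizer would then tune `lam` (and, in the paper, the elastic-net parameters) to minimize this rate.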

Relevance:

10.00%

Publisher:

Abstract:

Details are given of the development and application of a 2D depth-integrated, conformal boundary-fitted, curvilinear model for predicting the depth-mean velocity field and the spatial concentration distribution in estuarine and coastal waters. A numerical method for conformal mesh generation, based on a boundary integral equation formulation, has been developed. By this method a general polygonal region with curved edges can be mapped onto a regular polygonal region with the same number of horizontal and vertical straight edges, and a multiply connected region can be mapped onto a regular region with the same connectivity. A stretching transformation on the conformally generated mesh has also been used to provide greater detail where it is needed close to the coast, with larger mesh sizes further offshore, thereby minimizing the computing effort whilst maximizing accuracy. The curvilinear hydrodynamic and solute model has been developed from a robust rectilinear model. The hydrodynamic equations are approximated using the ADI finite difference scheme with a staggered grid, and the solute transport equation is approximated using a modified QUICK scheme. Three numerical examples have been chosen to test the curvilinear model, with an emphasis placed on complex practical applications.
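As an illustration of the kind of stretching transformation described, here is a one-dimensional tanh stretching that clusters points near one boundary (the tanh form and the `beta` parameter are common CFD choices, not necessarily the authors' exact transformation):

```python
import numpy as np

def stretched_grid(n, beta=2.0):
    """Map a uniform computational coordinate s in [0, 1] to a physical
    coordinate clustered near 0 (the 'coast'); beta > 0 controls the
    clustering strength."""
    s = np.linspace(0.0, 1.0, n)  # uniform computational grid
    return 1.0 - np.tanh(beta * (1.0 - s)) / np.tanh(beta)
```

Small spacings near the coast resolve nearshore gradients, while the coarser offshore spacing keeps the total cell count, and hence the computing effort, down.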

Relevance:

10.00%

Publisher:

Abstract:

We propose a new sparse model construction method aimed at maximizing a model's generalisation capability for a large class of linear-in-the-parameters models. The coordinate descent optimization algorithm is employed with a modified l1-penalized least squares cost function in order to estimate a single parameter and its regularization parameter simultaneously, based on the leave-one-out mean square error (LOOMSE). Our original contribution is to derive a closed form for the optimal LOOMSE regularization parameter of a single-term model, for which we show that the LOOMSE can be analytically computed without actually splitting the data set, leading to a very simple parameter estimation method. We then integrate the new results within the coordinate descent optimization algorithm to update model parameters one at a time for linear-in-the-parameters models. Consequently a fully automated procedure is achieved without resorting to any other validation data set for iterative model evaluation. Illustrative examples are included to demonstrate the effectiveness of the new approaches.
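A grid-search sketch of the single-term idea, assuming a ridge-type penalty (the paper derives a closed-form optimal regularization parameter; here the analytic LOOMSE is simply evaluated on a few candidate values to show that no data splitting is needed):

```python
import numpy as np

def single_term_fit(x, y, lams):
    """Fit the single-regressor model y ~ theta * x with the
    regularization parameter chosen by the analytic leave-one-out MSE."""
    sxx = x @ x
    best = None
    for lam in lams:
        theta = (x @ y) / (sxx + lam)
        h = x * x / (sxx + lam)             # leverage of each sample
        loo = (y - theta * x) / (1.0 - h)   # exact LOO residuals
        mse = np.mean(loo ** 2)
        if best is None or mse < best[0]:
            best = (mse, lam, theta)
    return best  # (LOOMSE, lambda, theta)
```

Coordinate descent over a multi-term model would apply this single-term update to each parameter in turn.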

Relevance:

10.00%

Publisher:

Abstract:

Despite the increasing number of studies examining the correlates of interest and boredom, surprisingly little research has focused on within-person fluctuations in these emotions, making it difficult to describe their situational nature. To address this gap in the literature, this study conducted repeated measurements (12 times) of a sample of 158 undergraduate students using a variety of self-report assessments, and examined the within-person relationships between task-specific perceptions (expectancy, utility, and difficulty) and interest and boredom. The study further explored the role of achievement goals in predicting between-person differences in these within-person relationships. Using hierarchical linear modeling, we found that, on average, higher perceptions of both expectancy and utility, as well as a lower perception of difficulty, were associated with higher interest and lower boredom within individuals. Moreover, mastery-approach goals weakened the negative within-person relationship between difficulty and interest and the negative within-person relationship between utility and boredom. Mastery-avoidance and performance-avoidance goals strengthened the negative relationship between expectancy and boredom. These results suggest how educators can more effectively instruct students with different types of goals, minimizing boredom and maximizing interest and learning.

Relevance:

10.00%

Publisher:

Abstract:

A class of identification algorithms is introduced for Gaussian process (GP) models. The fundamental approach is to propose a new kernel function that leads to a covariance matrix with low rank, a property that is then exploited for computational efficiency in both model parameter estimation and model prediction. The objective of maximizing either the marginal likelihood or the Kullback–Leibler (K–L) divergence between the estimated output probability density function (pdf) and the true pdf is used as the respective cost function. For each cost function, an efficient coordinate descent algorithm is proposed that estimates the kernel parameters using a one-dimensional derivative-free search, and the noise variance using a fast gradient descent algorithm. Numerical examples are included to demonstrate the effectiveness of the new identification approaches.
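The computational benefit of a low-rank covariance can be sketched with the Woodbury identity, assuming a covariance of the form K = U Uᵀ + σ²I for some n × r factor U (the paper's specific kernel is not reproduced; this is the generic trick such a structure enables):

```python
import numpy as np

def lowrank_gp_solve(U, noise_var, y):
    """Solve (U U^T + noise_var * I) alpha = y in O(n r^2) rather than
    O(n^3), via the Woodbury identity:
    (s I + U U^T)^{-1} = (I - U (s I_r + U^T U)^{-1} U^T) / s
    """
    n, r = U.shape
    small = noise_var * np.eye(r) + U.T @ U       # only an r x r system
    return (y - U @ np.linalg.solve(small, U.T @ y)) / noise_var
```

Both the marginal likelihood and the predictive mean reduce to solves of this form, which is why a rank-r kernel makes parameter estimation and prediction cheap.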

Relevance:

10.00%

Publisher:

Abstract:

We propose a new class of neurofuzzy construction algorithms with the aim of maximizing generalization capability, specifically for imbalanced data classification problems, based on leave-one-out (LOO) cross validation. The algorithms proceed in two stages: first, an initial rule base is constructed by estimating a Gaussian mixture model with analysis-of-variance decomposition from the input data; the second stage carries out joint weighted least squares parameter estimation and rule selection using an orthogonal forward subspace selection (OFSS) procedure. We show how different LOO-based rule selection criteria can be incorporated into OFSS, and advocate either maximizing the leave-one-out area under the receiver operating characteristic curve, or maximizing the leave-one-out F-measure if the data sets exhibit an imbalanced class distribution. Extensive comparative simulations illustrate the effectiveness of the proposed algorithms.
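The two LOO selection criteria the abstract advocates can be computed from held-out scores as follows (a plain NumPy sketch; ties in scores and the 0/1 label coding are handled naively here, and the rule-base machinery is omitted):

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) form."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def f_measure(pred, labels):
    """Harmonic mean of precision and recall for 0/1 predictions."""
    tp = np.sum((pred == 1) & (labels == 1))
    fp = np.sum((pred == 1) & (labels == 0))
    fn = np.sum((pred == 0) & (labels == 1))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```

Evaluated on leave-one-out predictions, either quantity can drive rule selection; the F-measure is the safer criterion when one class dominates.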

Relevance:

10.00%

Publisher:

Abstract:

Optical observations of a dayside auroral brightening sequence, by means of all-sky TV cameras and meridian scanning photometers, have been combined with EISCAT ion drift observations within the same invariant latitude-MLT sector. The observations were made during a January 1989 campaign by utilizing the high F region ion densities during the maximum phase of the solar cycle. The characteristic intermittent optical events, covering ∼300 km in east-west extent, move eastward (antisunward) along the poleward boundary of the persistent background aurora at velocities of ∼1.5 km s−1 and are associated with ion flows which swing from eastward to westward, with a subsequent return to eastward, during the interval of a few minutes when there is enhanced auroral emission within the radar field of view. The breakup of discrete auroral forms occurs at the reversal (negative potential) that forms between eastward plasma flow, maximizing near the persistent arc poleward boundary, and strong transient westward flow to the south. The reported events, covering a 35 min interval around 1400 MLT, are embedded within a longer period of similar auroral activity between 0830 (1200 MLT) and 1300 UT (1600 MLT). These observations are discussed in relation to recent models of boundary layer plasma dynamics and the associated magnetosphere-ionosphere coupling. The ionospheric events may correspond to large-scale wavelike motions of the low-latitude boundary layer (LLBL)/plasma sheet (PS) boundary. On the basis of this interpretation the observed spot size, speed, and repetition period (∼10 min) give a wavelength (the distance between spots) of ∼900 km in the present case. The events can also be explained as ionospheric signatures of newly opened flux tubes associated with reconnection bursts at the magnetopause near 1400 MLT.
We also discuss these data in relation to random, patchy reconnection (as has recently been invoked to explain the presence of the sheathlike plasma on closed field lines in the LLBL). In view of the lack of IMF data, and the existing uncertainty on the location of the open-closed field line boundary relative to the optical events, an unambiguous discrimination between the different alternatives is not easily obtained.
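The quoted spot spacing follows directly from the drift speed and the repetition period:

```python
# Spot spacing (wavelength) implied by the reported observations:
# distance between spots = drift speed x repetition period.
speed_km_s = 1.5        # eastward drift speed, ~1.5 km/s
period_s = 10 * 60      # repetition period, ~10 min
wavelength_km = speed_km_s * period_s
print(wavelength_km)    # 900.0, i.e. the ~900 km spacing quoted above
```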

Relevance:

10.00%

Publisher:

Abstract:

An efficient data-based modeling algorithm for nonlinear system identification is introduced for radial basis function (RBF) neural networks, with the aim of maximizing generalization capability based on the concept of leave-one-out (LOO) cross validation. Each of the RBF kernels has its own kernel width parameter, and the basic idea is to optimize the multiple pairs of regularization parameters and kernel widths, each pair associated with one kernel, one at a time within the orthogonal forward regression (OFR) procedure. Thus, each OFR step consists of one model term selection based on the LOO mean square error (LOOMSE), followed by the optimization of the associated kernel width and regularization parameter, also based on the LOOMSE. Since, like our previous state-of-the-art local regularization assisted orthogonal least squares (LROLS) algorithm, the same LOOMSE is adopted for model selection, the proposed new OFR algorithm is also capable of producing a very sparse RBF model with excellent generalization performance. Unlike the LROLS algorithm, which requires an additional iterative loop to optimize the regularization parameters as well as an additional procedure to optimize the kernel width, the proposed new OFR algorithm optimizes both the kernel widths and the regularization parameters within the single OFR procedure, and consequently the required computational complexity is dramatically reduced. Nonlinear system identification examples are included to demonstrate the effectiveness of this new approach in comparison to the well-known approaches of the support vector machine and the least absolute shrinkage and selection operator, as well as the LROLS algorithm.
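A simplified sketch of forward term selection driven by the analytic LOOMSE (the per-term kernel-width and regularization tuning of each OFR step is omitted; the fixed `lam` and the greedy stopping rule are assumptions here):

```python
import numpy as np

def loomse(X, y, lam=1e-3):
    """Analytic leave-one-out MSE of regularized least squares, via the
    e_i / (1 - H_ii) identity; no data splitting is needed."""
    H = X @ np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T)
    loo = (y - H @ y) / (1.0 - np.diag(H))
    return np.mean(loo ** 2)

def forward_select(candidates, y, max_terms=3, lam=1e-3):
    """Greedy forward-regression-style loop: at each step add the
    candidate column that most reduces the LOOMSE, stopping when the
    LOOMSE no longer improves."""
    chosen = []
    for _ in range(max_terms):
        remaining = [j for j in range(candidates.shape[1]) if j not in chosen]
        scores = [(loomse(candidates[:, chosen + [j]], y, lam), j)
                  for j in remaining]
        best_score, best_j = min(scores)
        if chosen and best_score >= loomse(candidates[:, chosen], y, lam):
            break  # LOOMSE stops improving: natural stopping criterion
        chosen.append(best_j)
    return chosen
```

In the paper each selection step is interleaved with a LOOMSE-driven optimization of the chosen kernel's width and regularization parameter, which is what removes the extra loops that LROLS required.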

Relevance:

10.00%

Publisher:

Abstract:

In this paper we study the problem of maximizing a quadratic form ⟨Ax, x⟩ subject to ‖x‖_q = 1, where A has matrix entries [formula not rendered in source] with i, j | k and q ≥ 1. We investigate when the optimum is achieved at a 'multiplicative' point, i.e. where x_1 x_{mn} = x_m x_n. This turns out to depend on both f and q, with a marked difference appearing as q varies between 1 and 2. We prove some partial results and conjecture that for f multiplicative such that 0
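For the q = 2 case the constraint set is the unit sphere, so the problem reduces to a largest-eigenvalue computation, which can be checked numerically (generic symmetric positive semidefinite A; the abstract's number-theoretic entries are not reconstructed here):

```python
import numpy as np

def max_quadratic_form_q2(A, iters=500):
    """For q = 2 the maximum of <Ax, x> over ||x||_2 = 1 is the largest
    eigenvalue of a symmetric positive semidefinite A; power iteration
    recovers it along with the maximizing unit vector."""
    x = np.ones(A.shape[0]) / np.sqrt(A.shape[0])
    for _ in range(iters):
        x = A @ x
        x /= np.linalg.norm(x)
    return x @ A @ x, x
```

For q between 1 and 2 the constraint set is no longer a sphere and the eigenvector characterization fails, which is the regime where the multiplicativity question studied in the paper becomes subtle.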