963 results for Box-constrained optimization
Abstract:
Tensor clustering is an important tool that exploits the intrinsically rich structures in real-world multiarray or tensor datasets. When dealing with such datasets, the standard practice is to use subspace clustering based on vectorizing the multiarray data. However, vectorization of tensorial data does not exploit the complete structure information. In this paper, we propose a subspace clustering algorithm without adopting any vectorization process. Our approach is based on a novel heterogeneous Tucker decomposition model that takes cluster membership information into account. We propose a new clustering algorithm that alternates between the different modes of the proposed heterogeneous tensor model. All but the last mode have closed-form updates. Updating the last mode reduces to optimizing over the multinomial manifold, for which we investigate second-order Riemannian geometry and propose a trust-region algorithm. Numerical experiments show that our proposed algorithm competes effectively with state-of-the-art clustering algorithms based on tensor factorization.
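The abstract leaves the manifold machinery implicit. As a minimal sketch only (not the paper's trust-region algorithm, which uses second-order Riemannian geometry), one feasibility-preserving ingredient of optimization over the multinomial manifold is an update that keeps every row of the membership matrix strictly positive and summing to one; the `retract` function below is an illustrative, assumed form of such an update:

```python
import numpy as np

def retract(X, G, step=0.1):
    """Map a Euclidean gradient step back onto the multinomial manifold
    (each row strictly positive and summing to one) via an entrywise
    exponential update followed by row normalization."""
    Y = X * np.exp(step * G / np.maximum(X, 1e-12))
    return Y / Y.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
X = rng.random((5, 3))
X /= X.sum(axis=1, keepdims=True)            # start on the manifold
Y = retract(X, rng.standard_normal((5, 3)))  # stays on the manifold
```

Whatever direction is supplied, the result remains a valid row-stochastic membership matrix, which is the property the last-mode update must preserve.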
Abstract:
In this paper, we develop a novel constrained recursive least squares algorithm for adaptively combining a set of given multiple models. With data available in an online fashion, the linear combination coefficients of the submodels are adapted via the proposed algorithm. We propose to minimize the mean square error with a forgetting factor and to apply a sum-to-one constraint to the combination parameters. Moreover, an l1-norm constraint is also applied to the combination parameters, with the aim of achieving sparsity over the multiple models so that only a subset of models is selected into the final model. A weighted l2-norm is then applied as an approximation to the l1-norm term. As such, at each time step a closed-form solution for the model combination parameters is available. The contribution of this paper is to derive the proposed constrained recursive least squares algorithm, which is made computationally efficient by exploiting matrix theory. The effectiveness of the approach is demonstrated using both simulated and real time-series examples.
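A hedged sketch of the core recursion, assuming only the sum-to-one constraint (the l1 / weighted-l2 sparsity term described in the abstract is omitted, and the function name and numbers are illustrative): a standard RLS update of the inverse correlation matrix, followed by a closed-form correction that enforces that the weights sum to one.

```python
import numpy as np

def combine(X, y, lam=0.99, delta=1e-3):
    """Recursively estimate combination weights w minimizing a
    forgetting-factor-weighted squared error subject to sum(w) = 1.
    X[t] holds the submodel predictions at time t, y[t] the target."""
    n = X.shape[1]
    Rinv = np.eye(n) / delta                  # inverse correlation matrix
    p = np.zeros(n)
    ones = np.ones(n)
    for x_t, y_t in zip(X, y):
        # standard RLS update of the inverse correlation matrix
        k = Rinv @ x_t / (lam + x_t @ Rinv @ x_t)
        Rinv = (Rinv - np.outer(k, x_t @ Rinv)) / lam
        p = lam * p + y_t * x_t
        w_u = Rinv @ p                        # unconstrained solution
        # closed-form correction enforcing the sum-to-one constraint
        w = w_u + Rinv @ ones * (1 - ones @ w_u) / (ones @ Rinv @ ones)
    return w

rng = np.random.default_rng(1)
M = rng.standard_normal((400, 2))             # two submodel outputs
target = 0.7 * M[:, 0] + 0.3 * M[:, 1] + 0.01 * rng.standard_normal(400)
w = combine(M, target)                        # ~ [0.7, 0.3], sums to 1
```

The correction term is the standard closed form for an equality-constrained least squares solution under the R-weighted metric, so the constraint holds exactly at every step.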
Abstract:
Field observations of new particle formation and the subsequent particle growth are typically only possible at a fixed measurement location, and hence do not follow the temporal evolution of an air parcel in a Lagrangian sense. Standard analysis for determining formation and growth rates requires that the time-dependent formation rate and growth rate of the particles are spatially invariant; air parcel advection means that the observed temporal evolution of the particle size distribution at a fixed measurement location may not represent the true evolution if there are spatial variations in the formation and growth rates. Here we present a zero-dimensional aerosol box model coupled with one-dimensional atmospheric flow to describe the impact of advection on the evolution of simulated new particle formation events. Wind speed, particle formation rates and growth rates are input parameters that can vary as a function of time and location, using wind speed to connect location to time. The output simulates measurements at a fixed location; formation and growth rates of the particle mode can then be calculated from the simulated observations at a stationary point for different scenarios and be compared with the ‘true’ input parameters. Hence, we can investigate how spatial variations in the formation and growth rates of new particles would appear in observations of particle number size distributions at a fixed measurement site. We show that the particle size distribution and growth rate at a fixed location are dependent on the formation and growth parameters upwind, even if local conditions do not vary. We also show that different input parameters may result in very similar simulated measurements. Erroneous interpretation of observations in terms of particle formation and growth rates, and of the time span and areal extent of new particle formation, is possible if the spatial effects are not accounted for.
Abstract:
Immediate loading of dental implants shortens the treatment time and makes it possible to give the patient an esthetic appearance throughout the treatment period. Placement of dental implants requires precise planning that accounts for anatomic limitations and restorative goals. Diagnosis can be made with the assistance of computerized tomographic scanning, but transfer of planning to the surgical field is limited. Recently, novel CAD/CAM techniques such as stereolithographic rapid prototyping have been developed to build surgical guides in an attempt to improve precision of implant placement. The aim of this case report was to show a modified surgical template used throughout implant placement as an alternative to a conventional surgical guide.
Abstract:
The optimal formulation for the preparation of amaranth flour films plasticized with glycerol and sorbitol was obtained by a multi-response analysis. The optimization aimed to achieve films with greater resistance to breakage, moderate elongation and lower solubility in water. The influence of plasticizer concentration (Cg, glycerol or Cs, sorbitol) and process temperature (Tp) on the mechanical properties and solubility of the amaranth flour films was initially studied by response surface methodology (RSM). The optimized conditions obtained were Cg 20.02 g glycerol/100 g flour and Tp 75 degrees C, and Cs 29.6 g sorbitol/100 g flour and Tp 75 degrees C. Characterization of the films prepared with these formulations revealed that the optimization methodology employed in this work was satisfactory. Sorbitol was the most suitable plasticizer. It furnished amaranth flour films that were more resistant to breakage and less permeable to oxygen, due to its greater miscibility with the biopolymers present in the flour and its lower affinity for water. (C) 2011 Elsevier Ltd. All rights reserved.
Abstract:
We present a new technique for obtaining model fittings to very long baseline interferometric images of astrophysical jets. The method minimizes a performance function proportional to the sum of the squared differences between the model and observed images. The model image is constructed by summing N(s) elliptical Gaussian sources characterized by six parameters: two-dimensional peak position, peak intensity, eccentricity, amplitude, and orientation angle of the major axis. We present results for the fitting of two main benchmark jets: the first constructed from three individual Gaussian sources, the second formed by five Gaussian sources. Both jets were analyzed by our cross-entropy technique in finite and infinite signal-to-noise regimes, the background noise chosen to mimic that found in interferometric radio maps. Those images were constructed to simulate most of the conditions encountered in interferometric images of active galactic nuclei. We show that the cross-entropy technique is capable of recovering the parameters of the sources with an accuracy similar to that obtained from the traditional Astronomical Image Processing System (AIPS) task IMFIT when the image is relatively simple (e.g., few components). For more complex interferometric maps, our method displays superior performance in recovering the parameters of the jet components. Our methodology is also able to show quantitatively the number of individual components present in an image. An additional application of the cross-entropy technique to a real image of a BL Lac object is shown and discussed. Our results indicate that our cross-entropy model-fitting technique should be used in situations involving the analysis of complex emission regions having more than three sources, even though it is substantially slower than current model-fitting tasks (at least 10,000 times slower for a single processor, depending on the number of sources to be optimized). As in the case of any model fitting performed in the image plane, caution is required in analyzing images constructed from a poorly sampled (u, v) plane.
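A hedged, minimal sketch of the cross-entropy machinery described above, reduced from six parameters per elliptical source to a single one-dimensional Gaussian with three parameters (position, amplitude, width); all function names, sample sizes and tolerances are illustrative, not taken from the paper:

```python
import numpy as np

def ce_fit(x, data, n_iter=40, n_samp=200, n_elite=20, seed=0):
    """Cross-entropy search: sample candidate parameter vectors from a
    Gaussian proposal, rank them by squared-error cost, and refit the
    proposal to the elite subset until it collapses on the optimum."""
    rng = np.random.default_rng(seed)
    mu = np.array([0.0, 1.0, 1.0])        # initial parameter means
    sigma = np.array([3.0, 2.0, 2.0])     # initial sampling spreads
    for _ in range(n_iter):
        theta = mu + sigma * rng.standard_normal((n_samp, 3))
        theta[:, 1:] = np.abs(theta[:, 1:])    # amplitude, width > 0
        model = theta[:, 1, None] * np.exp(
            -0.5 * ((x - theta[:, 0, None]) / theta[:, 2, None]) ** 2)
        cost = ((model - data) ** 2).sum(axis=1)
        elite = theta[np.argsort(cost)[:n_elite]]
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mu

x = np.linspace(-10, 10, 201)
truth = 2.0 * np.exp(-0.5 * ((x - 1.5) / 0.8) ** 2)  # synthetic source
fit = ce_fit(x, truth)                    # ~ [1.5, 2.0, 0.8]
```

The same sample-rank-refit loop applies unchanged in the image plane; the cost per sample simply grows with the number of sources, which is consistent with the slowdown the abstract reports.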
Abstract:
A new species of cubozoan jellyfish has been discovered in shallow waters of Bonaire, Netherlands (Dutch Caribbean). Thus far, approximately 50 sightings of the species, known commonly as the Bonaire banded box jelly, have been recorded, and three specimens have been collected. Three physical encounters between humans and the species have been reported. Available evidence suggests that a serious sting is inflicted by this medusa. To increase awareness of the scientific disciplines of systematics and taxonomy, the public has been involved in naming this new species. The Bonaire banded box jelly, Tamoya ohboya, n. sp., can be distinguished from its close relatives T. haplonema from Brazil and T. sp. from the southeastern United States by differences in tentacle coloration, cnidome, and mitochondrial gene sequences. Tamoya ohboya n. sp. possesses striking dark brown to reddish-orange banded tentacles, nematocyst warts that densely cover the animal, and a deep stomach. We provide a detailed comparison of nematocyst data from Tamoya ohboya n. sp., T. haplonema from Brazil, and T. sp. from the Gulf of Mexico.
Abstract:
Human respiratory syncytial virus (HRSV) is the major pathogen leading to respiratory disease in infants and neonates worldwide. An effective vaccine has not yet been developed against this virus, despite considerable efforts in basic and clinical research. HRSV replication is independent of the nuclear RNA processing constraints, since the virus genes are adapted to cytoplasmic transcription, a process performed by the viral RNA-dependent RNA polymerase. This study shows that meaningful nuclear RNA polymerase II-dependent expression of the HRSV nucleoprotein (N) and phosphoprotein (P) can only be achieved with the optimization of their genes, and that the intracellular localization of the N and P proteins changes when they are expressed outside the virus replication context. Immunization tests performed in mice resulted in the induction of humoral immunity using the optimized genes. This result was not observed for the non-optimized genes. In conclusion, optimization is a valuable tool for improving expression of HRSV genes in DNA vaccines. (c) 2009 Elsevier B.V. All rights reserved.
Abstract:
A novel technique for selecting the poles of orthonormal basis functions (OBF) in Volterra models of any order is presented. It is well known that the usually large number of parameters required to describe the Volterra kernels can be significantly reduced by representing each kernel using an appropriate basis of orthonormal functions. Such a representation results in the so-called OBF Volterra model, which has a Wiener structure consisting of a linear dynamic part generated by the orthonormal basis followed by a nonlinear static mapping given by the Volterra polynomial series. Aiming at optimizing the poles that fully parameterize the orthonormal bases, the exact gradients of the outputs of the orthonormal filters with respect to their poles are computed analytically by using a back-propagation-through-time technique. The expressions relative to the Kautz basis and to generalized orthonormal bases of functions (GOBF) are addressed; the ones related to the Laguerre basis follow straightforwardly as a particular case. The main innovation here is that the dynamic nature of the OBF filters is fully considered in the gradient computations. These gradients provide exact search directions for optimizing the poles of a given orthonormal basis. Such search directions can, in turn, be used as part of an optimization procedure to locate the minimum of a cost function that takes into account the error of estimation of the system output. The Levenberg-Marquardt algorithm is adopted here as the optimization procedure. Unlike previous related work, the proposed approach relies solely on input-output data measured from the system to be modeled, i.e., no information about the Volterra kernels is required. Examples are presented to illustrate the application of this approach to the modeling of dynamic systems, including a real magnetic levitation system with nonlinear oscillatory behavior.
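A minimal sketch of the Laguerre special case, under stated simplifications: the paper derives exact analytic gradients by back-propagation through time and optimizes with Levenberg-Marquardt, whereas the toy below identifies a single pole by a bounded scalar search over the output-error cost; the filter-bank construction itself follows the standard discrete Laguerre recursion.

```python
import numpy as np
from scipy.signal import lfilter
from scipy.optimize import minimize_scalar

def laguerre_outputs(u, pole, n_funcs=3):
    """Outputs of a discrete Laguerre filter bank with one real pole p:
    L1(z) = sqrt(1-p^2)/(1 - p z^-1), and each subsequent function
    multiplies by the all-pass factor (z^-1 - p)/(1 - p z^-1)."""
    gain = np.sqrt(1.0 - pole ** 2)
    x = lfilter([gain], [1.0, -pole], u)
    out = [x]
    for _ in range(n_funcs - 1):
        x = lfilter([-pole, 1.0], [1.0, -pole], x)
        out.append(x)
    return np.column_stack(out)

rng = np.random.default_rng(2)
u = rng.standard_normal(2000)
y = lfilter([1.0], [1.0, -0.6], u)   # "unknown" system with pole 0.6

def fit_error(p):
    """Residual energy after projecting y onto the Laguerre basis."""
    Phi = laguerre_outputs(u, p)
    coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return ((Phi @ coef - y) ** 2).sum()

res = minimize_scalar(fit_error, bounds=(0.05, 0.95), method="bounded")
```

Because the first-order test system is exactly representable when the basis pole matches the system pole, the cost vanishes at p = 0.6, which the search recovers.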
Abstract:
The constrained compartmentalized knapsack problem can be seen as an extension of the constrained knapsack problem. However, the items are grouped into different classes so that the overall knapsack has to be divided into compartments, and each compartment is loaded with items from the same class. Moreover, building a compartment incurs a fixed cost and a fixed loss of the capacity in the original knapsack, and the compartments are lower and upper bounded. The objective is to maximize the total value of the items loaded in the overall knapsack minus the cost of the compartments. This problem has been formulated as an integer non-linear program, and in this paper, we reformulate the non-linear model as an integer linear master problem with a large number of variables. Some heuristics based on the solution of the restricted master problem are investigated. A new and more compact integer linear model is also presented, which can be solved by a commercial branch-and-bound solver that found most of the optimal solutions for the constrained compartmentalized knapsack problem. On the other hand, heuristics provide good solutions with low computational effort. (C) 2011 Elsevier B.V. All rights reserved.
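The objective structure described above can be made concrete on a toy instance; the numbers below are hypothetical, the compartment lower/upper bounds are omitted for brevity, and exhaustive enumeration stands in for the integer linear models of the paper:

```python
from itertools import chain, combinations

# Hypothetical toy instance: knapsack capacity 10; each opened
# compartment costs 2 in value and loses 1 unit of capacity.
CAPACITY, COMP_COST, COMP_LOSS = 10, 2, 1
items = [("A", 3, 6), ("A", 4, 7), ("B", 5, 8)]  # (class, weight, value)

def best_loading():
    """Exhaustively score every item subset: total weight includes one
    unit of lost capacity per compartment (one per distinct class used),
    and the objective is item value minus the fixed compartment costs."""
    best = 0
    subsets = chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1))
    for sub in subsets:
        classes = {c for c, _, _ in sub}
        weight = sum(w for _, w, _ in sub) + COMP_LOSS * len(classes)
        if weight <= CAPACITY:
            value = sum(v for _, _, v in sub) - COMP_COST * len(classes)
            best = max(best, value)
    return best
```

On this instance the optimum loads both class-A items into a single compartment (weight 3+4+1 = 8, value 6+7-2 = 11); opening a second compartment for class B never pays for itself.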
Abstract:
Increasing efforts exist in integrating different levels of detail in models of the cardiovascular system. For instance, one-dimensional representations are employed to model the systemic circulation. In this context, effective and black-box-type decomposition strategies for one-dimensional networks are needed, so as to: (i) employ domain decomposition strategies for large systemic models (1D-1D coupling) and (ii) provide the conceptual basis for dimensionally-heterogeneous representations (1D-3D coupling, among various possibilities). The strategy proposed in this article works for both of these two scenarios, though the several applications shown to illustrate its performance focus on the 1D-1D coupling case. A one-dimensional network is decomposed in such a way that each coupling point connects two (and not more) of the sub-networks. At each of the M connection points two unknowns are defined: the flow rate and pressure. These 2M unknowns are determined by 2M equations, since each sub-network provides one (non-linear) equation per coupling point. It is shown how to build the 2M x 2M non-linear system with arbitrary and independent choice of boundary conditions for each of the sub-networks. The idea is then to solve this non-linear system until convergence, which guarantees strong coupling of the complete network. In other words, if the non-linear solver converges at each time step, the solution coincides with what would be obtained by monolithically modeling the whole network. The decomposition thus imposes no stability restriction on the choice of the time step size. Effective iterative strategies for the non-linear system that preserve the black-box character of the decomposition are then explored. Several variants of matrix-free Broyden's and Newton-GMRES algorithms are assessed as numerical solvers by comparing their performance on sub-critical wave propagation problems which range from academic test cases to realistic cardiovascular applications. A specific variant of Broyden's algorithm is identified and recommended on the basis of its computer cost and reliability. (C) 2010 Elsevier B.V. All rights reserved.
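A hedged sketch of the coupling idea for M = 1, with two invented sub-network models (a linear resistance and a nonlinear outflow, chosen purely for illustration): each "black box" contributes one residual equation in the interface unknowns (Q, P), and a matrix-free Broyden solver drives both residuals to zero.

```python
import numpy as np
from scipy.optimize import broyden1

# Hypothetical black-box sub-networks sharing one coupling point with
# two unknowns: flow rate Q and pressure P (arbitrary units).
P_IN, R1, K2 = 100.0, 2.0, 1.5

def residual(u):
    """Stack one residual equation per sub-network; the solver never
    sees their internals, only these evaluations (matrix-free)."""
    Q, P = u
    r1 = Q - (P_IN - P) / R1           # sub-network 1: linear resistance
    r2 = Q - K2 * np.sqrt(max(P, 0.0))  # sub-network 2: nonlinear outflow
    return np.array([r1, r2])

sol = broyden1(residual, [10.0, 70.0], f_tol=1e-10)  # [Q*, P*]
```

Driving the stacked residual to zero at every time step is what makes the coupling strong: the converged interface values are the ones a monolithic model of the whole network would produce.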
Abstract:
Two fundamental processes usually arise in the production planning of many industries. The first one consists of deciding how many final products of each type have to be produced in each period of a planning horizon, the well-known lot sizing problem. The other process consists of cutting raw materials in stock in order to produce smaller parts used in the assembly of final products, the well-studied cutting stock problem. In this paper the decision variables of these two problems are made dependent on each other in order to obtain a globally optimal solution. Setups that are typically present in lot sizing problems are relaxed together with the integer frequencies of cutting patterns in the cutting problem. Therefore, a large-scale linear optimization problem arises, which is solved exactly by a column generation technique. It is worth noting that this new combined problem still takes into account the trade-off between storage costs (for final products and the parts) and trim losses (in the cutting process). We present some sets of computational tests, analyzed over three different scenarios. These results show that, by combining the problems and using an exact method, it is possible to obtain significant gains when compared to the usual industrial practice, which solves them in sequence. (C) 2010 The Franklin Institute. Published by Elsevier Ltd. All rights reserved.
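A minimal sketch of the cutting-stock side of the combined model, on a hypothetical toy instance: rolls of width 10 are cut into pieces of widths 3 and 5 with demands 4 and 2. In the paper the pattern columns are priced out by column generation; the instance here is small enough to enumerate all maximal patterns upfront and solve the relaxed LP master directly.

```python
import numpy as np
from scipy.optimize import linprog

# Columns = cutting patterns for a width-10 roll: (3,3,3), (3,5), (5,5).
patterns = np.array([[3, 1, 0],    # pieces of width 3 per pattern
                     [0, 1, 2]])   # pieces of width 5 per pattern
demand = np.array([4, 2])

# LP master (setups and integer frequencies relaxed, as in the paper):
# minimize the number of rolls, min 1'x  s.t.  patterns @ x >= demand.
res = linprog(c=np.ones(3), A_ub=-patterns, b_ub=-demand,
              bounds=[(0, None)] * 3, method="highs")
```

The LP optimum is 7/3 rolls (x = (4/3, 0, 1)); in a column generation scheme the dual prices of the demand constraints would feed a knapsack pricing problem that decides whether a new pattern column can improve this value.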
Abstract:
The final contents of total and individual trans-fatty acids of sunflower oil, produced during the deacidification step of physical refining were obtained using a computational simulation program that considered cis-trans isomerization reaction features for oleic, linoleic, and linolenic acids attached to the glycerol part of triacylglycerols. The impact of process variables, such as temperature and liquid flow rate, and of equipment configuration parameters, such as liquid height, diameter, and number of stages, that influence the retention time of the oil in the equipment was analyzed using the response-surface methodology (RSM). The computational simulation and the RSM results were used in two different optimization methods, aiming to minimize final levels of total and individual trans-fatty acids (trans-FA), while keeping neutral oil loss and final oil acidity at low values. The main goal of this work was to indicate that computational simulation, based on a careful modeling of the reaction system, combined with optimization could be an important tool for indicating better processing conditions in industrial physical refining plants of vegetable oils, concerning trans-FA formation.
Abstract:
In this work, a sol-gel route was used to prepare Y(0.9)Er(0.1)Al(3)(BO(3))(4) glassy thin films by the spin-coating technique, aiming at the preparation and optimization of planar waveguides for integrated optics. The films were deposited on silica and silicon substrates using stable sols synthesized by the sol-gel process. Deposits with thicknesses ranging between 520 and 720 nm were prepared by a multi-layer process involving heat treatments at different temperatures, from the glass transition to film crystallization, using heating rates of 2 degrees C/min. The structural characterization of the layers was performed by grazing incidence X-ray diffraction and Raman spectroscopy as a function of the heat treatment. Microstructural evolution in terms of annealing temperatures was followed by high-resolution scanning electron microscopy and atomic force microscopy. Optical transmission spectra were used to determine the refractive index and the film thicknesses through the envelope method. The optical and guiding properties of the films were studied by m-line spectroscopy. The best films were monomode, with 620 nm thickness and a refractive index around 1.664 at 980 nm wavelength. They showed good waveguiding properties, with high light-coupling efficiency and low propagation losses of about 0.88 dB/cm at 632.8 and 1550 nm. (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
This paper describes the structural evolution of Y(0.9)Er(0.1)Al(3)(BO(3))(4) nanopowders prepared by two soft chemistry routes, the sol-gel and the polymeric precursor methods. Differential scanning calorimetry, differential thermal analysis, thermogravimetric analysis, X-ray diffraction, Fourier-transform infrared, and Raman spectroscopy techniques have been used to study the chemical reactions in the 700 to 1200 degrees C temperature range. With both methods, the Y(0.9)Er(0.1)Al(3)(BO(3))(4) (Er:YAB) solid solution was obtained almost pure when the powdered samples were heat treated at 1150 degrees C. Based on the results, a schematic phase formation diagram of the Er:YAB crystalline solid solution was proposed for powders from each method. The Er:YAB solid solution could be optimized by adding a small amount of boron oxide in excess to the Er:YAB nominal composition. The nanoparticles obtained are around 210 nm in size. The photoluminescence emission spectrum of the Er:YAB nanocrystalline powders was measured in the infrared region, and the Stark components of the (4)I(13/2) and (4)I(15/2) levels were determined. Finally, the Raman spectrum of the Y(0.9)Er(0.1)Al(3)(BO(3))(4) crystalline phase is also presented for the first time. (C) 2008 Elsevier Masson SAS. All rights reserved.