946 results for Tree solution method


Relevance:

30.00%

Publisher:

Abstract:

MgTiO3 (MTO) thin films were prepared by the polymeric precursor method followed by spin-coating deposition. The films were deposited on Pt(111)/Ti/SiO2/Si(100) substrates, heat treated at 350 °C for 2 h, and then heat treated at 400, 450, 500, 550, 600, 650, and 700 °C for 2 h. The degree of structural order-disorder, the optical properties, and the morphology of the MTO thin films were investigated by X-ray diffraction (XRD), micro-Raman spectroscopy (MR), ultraviolet-visible (UV-vis) absorption spectroscopy, photoluminescence (PL) measurements, and field-emission gun scanning electron microscopy (FEG-SEM). XRD revealed that an increase in the annealing temperature resulted in a structural organization of the MTO thin films. First-principles quantum mechanical calculations based on density functional theory (B3LYP level) were employed to study the electronic structure of ordered and disordered asymmetric models. The electronic properties were analyzed, and the relevance of the present theoretical and experimental results was discussed in the light of the PL behavior. The presence of localized electronic levels and a charge gradient in the band gap due to symmetry breaking are responsible for the PL in the disordered MTO lattice.

Relevance:

30.00%

Publisher:

Abstract:

This study presents an economic optimization method to design telescopic (multi-diameter) irrigation laterals with regularly spaced outlets. The proposed analytical hydraulic solution was validated by means of a pipeline composed of three different diameters. The minimum acquisition cost of the telescopic pipeline was determined by an ideal arrangement of lengths and respective diameters for each of the three segments. The mathematical optimization method, based on Lagrange multipliers, provides a strategy for finding the maximum or minimum of a function subject to certain constraints. In this case, the objective function describes the acquisition cost of the pipes, and the constraints are determined from hydraulic parameters such as the length of the irrigation laterals and the permitted total head loss. The developed analytical solution provides the ideal combination of each pipe segment length and respective diameter, resulting in a decrease in the acquisition cost.
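
A schematic form of this constrained cost minimization, using Lagrange multipliers as the abstract describes, is sketched below; all symbols (c_i, L_i, h_f, h_max) are illustrative assumptions rather than the paper's notation.

```latex
% Illustrative three-segment formulation: c_i = pipe cost per unit length of
% diameter D_i, L_i = segment length, L = total lateral length,
% h_f(L_1,L_2,L_3) = total head loss, h_max = permitted head loss.
\min_{L_1,L_2,L_3}\; C=\sum_{i=1}^{3} c_i L_i
\quad\text{s.t.}\quad \sum_{i=1}^{3} L_i = L,\qquad h_f(L_1,L_2,L_3)=h_{\max},
\qquad
\mathcal{L}=\sum_{i=1}^{3} c_i L_i
+\lambda_1\Big(\sum_{i=1}^{3} L_i - L\Big)
+\lambda_2\big(h_f(L_1,L_2,L_3)-h_{\max}\big).
```

The candidate optimum then follows from the stationarity conditions \(\partial\mathcal{L}/\partial L_i = 0\) together with the two constraints.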

Relevance:

30.00%

Publisher:

Abstract:

The objective of this study was to investigate the possibility of using hydric restriction as a method for evaluating the vigor of soybean seeds. The soybean seeds, cultivar BRS 245RR, represented by four different seed lots, were characterized by germination and vigor. For the hydric restriction and temperature treatments, the combinations of substrate water potential and temperature were the following: deionized water (0.0 MPa); polyethylene glycol (PEG 6000) aqueous solution (-0.1, -0.3 and -0.5 MPa); and four temperatures (20 ºC, 25 ºC, 30 ºC, and 35 ºC). A completely randomized experimental design was used, with four replications per treatment, and the ANOVA was performed individually for each combination of temperature and substrate water potential. According to the results obtained, the hydric restriction test is as efficient as the accelerated aging test in estimating the vigor of soybean seeds, cv. BRS 245RR, when water potentials of -0.1 MPa or -0.3 MPa at 25 ºC, or -0.3 MPa at 30 ºC, are used.

Relevance:

30.00%

Publisher:

Abstract:

Many engineering sectors are challenged by multi-objective optimization problems. Although the idea behind these problems is simple and well established, implementing any procedure to solve them is not a trivial task. The use of evolutionary algorithms to find candidate solutions is widespread. Usually they supply a discrete picture of the non-dominated solutions, a Pareto set. Although it is very interesting to know the non-dominated solutions, an additional criterion is needed to select one solution to be deployed. To better support the design process, this paper presents a new method for solving non-linear multi-objective optimization problems by adding a control function that guides the optimization process over the Pareto set, which does not need to be found explicitly. The proposed methodology differs from the classical methods that combine the objective functions into a single scale, and is based on a single run of non-linear single-objective optimizers.
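
For contrast with the control-function approach described above, the sketch below shows the classical weighted-sum scalarization, which does combine the objectives into a single scale and needs one single-objective run per weight; the objectives f1 and f2 and the weight grid are illustrative assumptions.

```python
# Classical weighted-sum scalarization of a toy two-objective problem
# (shown only for contrast; not the control-function method of the abstract).
import numpy as np
from scipy.optimize import minimize

def f1(x):
    return x[0] ** 2 + x[1] ** 2

def f2(x):
    return (x[0] - 1.0) ** 2 + (x[1] - 1.0) ** 2

pareto_points = []
for w in np.linspace(0.0, 1.0, 11):
    # each weight w requires its own single-objective run
    res = minimize(lambda x: w * f1(x) + (1.0 - w) * f2(x), x0=np.zeros(2))
    pareto_points.append((f1(res.x), f2(res.x)))

print(pareto_points)  # a discrete picture of (part of) the Pareto front
```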

Relevance:

30.00%

Publisher:

Abstract:

A simple and fast method for the determination of Ca, Cu, Fe, Mg, Mn, Se and Zn in bovine semen by quadrupole inductively coupled plasma mass spectrometry (q-ICP-MS) is described. Prior to analysis, samples (200 µL) were diluted 1:50 in a solution containing 0.01% v/v Triton® X-100 and 0.5% v/v nitric acid and directly analyzed by ICP-MS. The limits of detection of the method are 0.3, 0.03, 0.2, 0.04, 0.04, 0.03 and 0.03 µg L-1 for 44Ca, 63Cu, 57Fe, 24Mg, 64Zn, 82Se and 55Mn, respectively. For purposes of comparison and method validation, four ordinary bovine semen samples were directly analyzed by ICP-MS and by flame atomic absorption spectrometry (FAAS) or graphite furnace atomic absorption spectrometry (GF AAS), with no statistical difference between the techniques at the 95% confidence level (t-test). The proposed method was then applied to the determination of Ca, Cu, Fe, Mg, Mn, Se and Zn in bovine semen samples collected from different breeds used in reproduction programs and artificial insemination.

Relevance:

30.00%

Publisher:

Abstract:

The importance of mechanical aspects related to cell activity and its environment is becoming more evident due to their influence on stem cell differentiation and on the development of diseases such as atherosclerosis. Mechanical tension homeostasis is related to normal tissue behavior, and its lack may be related to the formation of cancer, which shows a higher mechanical tension. Due to the complexity of cellular activity, the application of simplified models may elucidate which factors are really essential and which have a marginal effect. The development of a systematic method to reconstruct the elements involved in the perception of mechanical aspects by the cell may substantially accelerate the validation of these models. This work proposes the development of a routine capable of reconstructing the topology of focal adhesions and the actomyosin portion of the cytoskeleton from the displacement field generated by the cell on a flexible substrate. Another way to think of this problem is to develop an algorithm to reconstruct the forces applied by the cell from measurements of the substrate displacement, which characterizes an inverse problem. For this kind of problem, the Topology Optimization Method (TOM) is suitable for finding a solution. TOM consists of the iterative application of an optimization method and an analysis method to obtain an optimal distribution of material in a fixed domain. One way to experimentally obtain the substrate displacement is through Traction Force Microscopy (TFM), which also provides the forces applied by the cell. Along with systematically generating the distributions of focal adhesions and actomyosin for the validation of simplified models, the algorithm also represents a complementary and more phenomenological approach to TFM. As a first approximation, the actin fibers and the flexible substrate are represented through a two-dimensional linear Finite Element Method (FEM) model. Actin contraction is modeled as an initial stress of the FEM elements. Focal adhesions connecting actin and substrate are represented by springs. The algorithm was applied to data obtained from experiments regarding cytoskeletal prestress and micropatterning, comparing the numerical results to the experimental ones.
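
A schematic statement of the inverse problem sketched above, with notation assumed here rather than taken from the abstract, is:

```latex
% rho = design field encoding the focal-adhesion / actomyosin layout,
% u(rho) = FEM displacement of the flexible substrate with prestressed fiber
% elements, u_meas = displacement field measured by TFM.
\min_{\rho}\;\lVert u(\rho)-u_{\mathrm{meas}}\rVert^{2}
\quad\text{s.t.}\quad K(\rho)\,u(\rho)=f(\rho),\qquad 0\le\rho\le 1,
```

where K(ρ) is the assembled stiffness matrix and f(ρ) collects the initial-stress (contraction) loads.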

Relevance:

30.00%

Publisher:

Abstract:

CaSnO3 and SrSnO3 alkaline earth stannate thin films were prepared by chemical solution deposition using the polymeric precursor method on various single crystal substrates (R- and C-sapphire and (100)-SrTiO3) at different temperatures. The films were characterized by X-ray diffraction (θ-2θ, ω- and φ-scans), field emission scanning electron microscopy, atomic force microscopy, micro-Raman spectroscopy and photoluminescence. Epitaxial SrSnO3 and CaSnO3 thin films with high crystalline quality were obtained on SrTiO3. The long-range symmetry promoted a short-range disorder which led to photoluminescence in the epitaxial films. In contrast, the films deposited on sapphire exhibited random polycrystalline growth with no meaningful emission regardless of the substrate orientation. The network modifier (Ca or Sr) and the substrate (sapphire or SrTiO3) influenced the crystallization process and/or the microstructure. The higher the tilt of the SnO6 octahedra, as in CaSnO3, the higher the crystallization temperature, which also changed the nucleation/grain-growth process.

Relevance:

30.00%

Publisher:

Abstract:

Congresses and conferences

Relevance:

30.00%

Publisher:

Abstract:

This work presents a novel approach to solving a two-dimensional problem by using an adaptive finite element approach. The most common strategy to deal with nested adaptivity is to generate a mesh that represents the geometry and the input parameters correctly, and to refine this mesh locally to obtain the most accurate solution. As opposed to this approach, the authors propose a technique using independent meshes for the geometry, the input data and the unknowns. Each particular mesh is obtained by a local nested refinement of the same coarse mesh in the parametric space…

Relevance:

30.00%

Publisher:

Abstract:

In my PhD thesis I propose a Bayesian nonparametric estimation method for structural econometric models where the functional parameter of interest describes the economic agent's behavior. The structural parameter is characterized as the solution of a functional equation or, in more technical terms, as the solution of an inverse problem that can be either ill-posed or well-posed. From a Bayesian point of view, the parameter of interest is a random function and the solution to the inference problem is the posterior distribution of this parameter. A regular version of the posterior distribution in functional spaces is characterized. However, the infinite dimension of the considered spaces causes a problem of non-continuity of the solution and hence a problem of inconsistency of the posterior distribution from a frequentist point of view (i.e., a problem of ill-posedness). The contribution of this essay is to propose new methods to deal with this problem of ill-posedness. The first one consists in adopting a Tikhonov regularization scheme in the construction of the posterior distribution, so that I end up with a new object that I call the regularized posterior distribution and that I propose as a solution of the inverse problem. The second approach consists in specifying a prior distribution on the parameter of interest of the g-prior type. I then identify a class of models for which the prior distribution is able to correct for the ill-posedness even in infinite dimensional problems. I study the asymptotic properties of these proposed solutions and prove that, under some regularity conditions satisfied by the true value of the parameter of interest, they are consistent in a "frequentist" sense. Once the general theory is set, I apply my Bayesian nonparametric methodology to different estimation problems. First, I apply this estimator to deconvolution and to hazard rate, density and regression estimation. Then, I consider the estimation of an instrumental regression, which is useful in microeconometrics when dealing with problems of endogeneity. Finally, I develop an application in finance: I obtain the Bayesian estimator for the equilibrium asset pricing functional by using the Euler equation defined in Lucas' (1978) tree-type models.
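
For reference, the classical Tikhonov-regularized solution of a linear ill-posed equation, whose idea the regularized posterior distribution transposes to the Bayesian setting, is recalled below in standard textbook notation (not quoted from the thesis).

```latex
% Tikhonov regularization of the ill-posed equation K\varphi = r:
% alpha > 0 is the regularization parameter and K^* the adjoint of K.
\hat{\varphi}_{\alpha} = \big(\alpha I + K^{*}K\big)^{-1} K^{*} r ,
```

where, in the noise-free case, \(\hat{\varphi}_{\alpha}\) converges to the minimum-norm solution as \(\alpha \to 0\).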

Relevance:

30.00%

Publisher:

Abstract:

Mixed integer programming is today one of the most widely used techniques for dealing with hard optimization problems. On the one hand, many practical optimization problems arising from real-world applications (such as, e.g., scheduling, project planning, transportation, telecommunications, economics and finance, timetabling, etc.) can be easily and effectively formulated as Mixed Integer linear Programs (MIPs). On the other hand, more than 50 years of intensive research has dramatically improved the capability of the current generation of MIP solvers to tackle hard problems in practice. However, many questions are still open and not fully understood, and the mixed integer programming community is more than active in trying to answer some of these questions. As a consequence, a huge number of papers are continuously developed and new intriguing questions arise every year. When dealing with MIPs, we have to distinguish between two different scenarios. The first one happens when we are asked to handle a general MIP and cannot assume any special structure for the given problem. In this case, a Linear Programming (LP) relaxation and some integrality requirements are all we have for tackling the problem, and we are "forced" to use some general purpose techniques. The second one happens when mixed integer programming is used to address a somewhat structured problem. In this context, polyhedral analysis and other theoretical and practical considerations are typically exploited to devise some special purpose techniques. This thesis tries to give some insights into both of the above-mentioned situations. The first part of the work is focused on general purpose cutting planes, which are probably the key ingredient behind the success of the current generation of MIP solvers. Chapter 1 presents a quick overview of the main ingredients of a branch-and-cut algorithm, while Chapter 2 recalls some results from the literature in the context of disjunctive cuts and their connections with Gomory mixed integer cuts. Chapter 3 presents a theoretical and computational investigation of disjunctive cuts. In particular, we analyze the connections between different normalization conditions (i.e., conditions to truncate the cone associated with disjunctive cutting planes) and other crucial aspects such as cut rank, cut density and cut strength. We give a theoretical characterization of weak rays of the disjunctive cone that lead to dominated cuts, and propose a practical method to possibly strengthen the cuts arising from such weak extremal solutions. Further, we point out how redundant constraints can affect the quality of the generated disjunctive cuts, and discuss possible ways to cope with them. Finally, Chapter 4 presents some preliminary ideas in the context of multiple-row cuts. Very recently, a series of papers have brought attention to the possibility of generating cuts using more than one row of the simplex tableau at a time. Several interesting theoretical results have been presented in this direction, often revisiting and recalling other important results discovered more than 40 years ago. However, it is not at all clear how these results can be exploited in practice. As stated, the chapter is still a work in progress and simply presents a possible way of generating two-row cuts from the simplex tableaux arising from lattice-free triangles, together with some preliminary computational results.
The second part of the thesis is instead focused on the heuristic and exact exploitation of integer programming techniques for hard combinatorial optimization problems in the context of routing applications. Chapters 5 and 6 present an integer linear programming local search algorithm for Vehicle Routing Problems (VRPs). The overall procedure follows a general destroy-and-repair paradigm (i.e., the current solution is first randomly destroyed and then repaired in an attempt to find a new improved solution) in which a class of exponential neighborhoods is iteratively explored by heuristically solving an integer programming formulation through a general purpose MIP solver. Chapters 7 and 8 deal with exact branch-and-cut methods. Chapter 7 presents an extended formulation for the Traveling Salesman Problem with Time Windows (TSPTW), a generalization of the well-known TSP where each node must be visited within a given time window. The polyhedral approaches proposed for this problem in the literature typically follow the one that has proven extremely effective in the classical TSP context. Here we present an overall (quite) general idea based on a relaxed discretization of the time windows. This idea leads to a stronger formulation and to stronger valid inequalities, which are then separated within the classical branch-and-cut framework. Finally, Chapter 8 addresses branch-and-cut in the context of Generalized Minimum Spanning Tree Problems (GMSTPs), a class of NP-hard generalizations of the classical minimum spanning tree problem. In this chapter, we show how some basic ideas (and, in particular, the usage of general purpose cutting planes) can be useful to improve on the branch-and-cut methods proposed in the literature.
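
A toy sketch of the destroy-and-repair loop described for Chapters 5 and 6 is given below; it works on a single TSP tour and uses greedy cheapest insertion in place of the MIP-based repair step used in the thesis, and every name and parameter in it is an illustrative assumption.

```python
# Toy destroy-and-repair loop: random removal ("destroy") followed by greedy
# cheapest insertion ("repair"); the thesis instead repairs by heuristically
# solving an integer programming formulation with a general purpose MIP solver.
import math
import random

def tour_length(tour, pts):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def destroy(tour, k, rng):
    removed = rng.sample(tour, k)            # randomly remove k customers
    return [c for c in tour if c not in removed], removed

def repair(partial, removed, pts):
    for c in removed:                        # cheapest re-insertion
        best_pos = min(range(len(partial) + 1),
                       key=lambda i: tour_length(partial[:i] + [c] + partial[i:], pts))
        partial = partial[:best_pos] + [c] + partial[best_pos:]
    return partial

rng = random.Random(0)
pts = [(rng.random(), rng.random()) for _ in range(30)]
best = list(range(30))
for _ in range(200):                         # destroy-and-repair iterations
    partial, removed = destroy(best, 5, rng)
    candidate = repair(partial, removed, pts)
    if tour_length(candidate, pts) < tour_length(best, pts):
        best = candidate
print(round(tour_length(best, pts), 3))
```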

Relevance:

30.00%

Publisher:

Abstract:

This work presents exact, hybrid algorithms for mixed resource Allocation and Scheduling problems; in general terms, these consist in assigning finite-capacity resources over time to a set of precedence-connected activities. The proposed methods have broad applicability, but are mainly motivated by applications in the field of Embedded System Design. In particular, high-performance embedded computing recently witnessed the shift from single-CPU platforms with application-specific accelerators to programmable Multi Processor Systems-on-Chip (MPSoCs). These allow higher flexibility, real-time performance and low energy consumption, but the programmer must be able to effectively exploit the platform parallelism. This raises interest in the development of algorithmic techniques to be embedded in CAD tools; in particular, given a specific application and platform, the objective is to perform optimal allocation of hardware resources and to compute an execution schedule. In this regard, since embedded systems tend to run the same set of applications for their entire lifetime, off-line, exact optimization approaches are particularly appealing. Quite surprisingly, the use of exact algorithms has not been well investigated so far; this is in part motivated by the complexity of integrated allocation and scheduling, which sets tough challenges for "pure" combinatorial methods. The use of hybrid CP/OR approaches presents the opportunity to exploit the mutual advantages of different methods, while compensating for their weaknesses. In this work, we first consider an Allocation and Scheduling problem over the Cell BE processor by Sony, IBM and Toshiba; we propose three different solution methods, leveraging decomposition, cut generation and heuristic-guided search. Next, we face Allocation and Scheduling of so-called Conditional Task Graphs, explicitly accounting for branches whose outcome is not known at design time; we extend the CP scheduling framework to effectively deal with the introduced stochastic elements. Finally, we address Allocation and Scheduling with uncertain, bounded execution times via conflict-based tree search; we introduce a simple and flexible time model to take duration variability into account and provide an efficient conflict detection method. The proposed approaches achieve good results on practical-size problems, thus demonstrating that the use of exact approaches for system design is feasible. Furthermore, the developed techniques bring significant contributions to combinatorial optimization methods.
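
The core constraints that any such allocation-and-scheduling model must capture can be written schematically as follows; the notation (start times s_i, durations d_i, resource requirements r_i, capacity C, precedence arcs E) is assumed here and is not taken from the thesis.

```latex
% Precedence and cumulative resource constraints (schematic):
s_j \ge s_i + d_i \quad \forall (i,j)\in E, \qquad
\sum_{i\,:\, s_i \le t < s_i + d_i} r_i \le C \quad \forall t .
```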

Relevance:

30.00%

Publisher:

Abstract:

In this work from the field of few-nucleon physics, the newly developed Lorentz Integral Transform (LIT) method is applied to the study of nuclear photoabsorption and electron scattering on light nuclei. The LIT method makes it possible to carry out exact calculations without explicitly determining the final states in the continuum. The problem is reduced to the solution of a bound-state-like equation in which the final-state interaction is fully taken into account. The LIT equation is solved by means of an expansion in hyperspherical harmonics, whose convergence is accelerated by the use of an effective interaction in the hyperspherical formalism (EIHH). This work presents the first microscopic calculation of the total photoabsorption cross section below the pion production threshold for 6Li, 6He and 7Li. The calculations are carried out with central semirealistic NN interactions that partially simulate the tensor force, since the binding energies of the deuteron and of the three-body nuclei are correctly reproduced. The photoabsorption cross section of 6Li shows only one giant dipole resonance, whereas 6He exhibits two distinct peaks corresponding to the breakup of the halo and of the alpha core. Comparison with experimental data shows that the addition of a P-wave interaction substantially improves the agreement. For 7Li only one giant dipole resonance is found, in good agreement with the available experimental data. Concerning electron scattering, the calculation of the longitudinal and transverse response functions of 4He in the quasi-elastic region at intermediate momentum transfer is presented. A non-relativistic model is used for the charge and current operators. The calculations are performed with semirealistic interactions, and a gauge-invariant current is obtained through the introduction of a meson exchange current. The effect of the two-body current on the transverse response functions is investigated. Preliminary results are shown and compared with the available experimental data.
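
For orientation, the standard form of the Lorentz integral transform from the LIT literature is recalled below (notation assumed here, not quoted from the abstract): the transform of the response function R(ω) is computed from the solution of the bound-state-like equation mentioned above, and R(ω) is then recovered by inverting the transform.

```latex
% Lorentz integral transform of a response function R(omega), with
% sigma = sigma_R + i*sigma_I, O the transition operator and |Psi_0> the ground state:
L(\sigma_R,\sigma_I)=\int d\omega\,\frac{R(\omega)}{(\omega-\sigma_R)^2+\sigma_I^2}
=\langle\widetilde{\Psi}\,|\,\widetilde{\Psi}\rangle,
\qquad
\big(\hat{H}-E_0-\sigma_R-i\,\sigma_I\big)\,|\widetilde{\Psi}\rangle=\hat{O}\,|\Psi_0\rangle .
```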

Relevance:

30.00%

Publisher:

Abstract:

This thesis starts by showing the main characteristics and application fields of the AlGaN/GaN HEMT technology, focusing on reliability aspects essentially due to the presence of low frequency dispersive phenomena, which limit in several ways the microwave performance of this kind of device. Based on an equivalent voltage approach, a new low frequency device model is presented in which the dynamic nonlinearity of the trapping effect is taken into account for the first time, allowing considerable improvements in the prediction of quantities that are very important for power amplifier design, such as power added efficiency, dissipated power and internal device temperature. An innovative and low-cost measurement setup for the characterization of the device under low-frequency large-amplitude sinusoidal excitation is also presented. This setup allows the identification of the new low frequency model through suitable procedures explained in detail. This thesis also describes a new non-invasive empirical method for compact electrothermal modeling and thermal resistance extraction. The new contribution of the proposed approach concerns the non-linear dependence of the channel temperature on the dissipated power. This is very important for GaN devices, since they are capable of operating at relatively high temperatures with high power densities, and the dependence of the thermal resistance on the temperature is quite relevant. Finally, a novel method for device thermal simulation is investigated: based on the analytical solution of the three-dimensional heat equation, a Visual Basic program has been developed to estimate, in real time, the temperature distribution on the hottest surface of planar multilayer structures. The developed solver is particularly useful for peak temperature estimation at the design stage, when critical decisions about circuit design and packaging have to be made. It facilitates layout optimization and reliability improvement, allowing the correct choice of device geometry and configuration to achieve the best possible thermal performance.
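
The relations behind the thermal part of the abstract can be summarized schematically as follows; the notation (temperature-dependent conductivity k(T), heat source q, channel temperature T_ch, dissipated power P_diss) is assumed here and is not quoted from the thesis.

```latex
% Steady-state heat conduction in the multilayer structure and the resulting
% power-dependent channel temperature (schematic):
\nabla\cdot\big(k(T)\,\nabla T\big)+q=0, \qquad
T_{\mathrm{ch}}=T_{\mathrm{base}}+R_{\mathrm{th}}(T_{\mathrm{ch}})\,P_{\mathrm{diss}} ,
```

where the temperature dependence of k (and hence of R_th) makes T_ch a non-linear function of P_diss, which is the non-linearity the compact electrothermal model accounts for.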

Relevance:

30.00%

Publisher:

Abstract:

Within this PhD thesis, several methods were developed and validated which are suitable for environmental samples and material science, and which should be applicable to the monitoring of particular radionuclides and to the analysis of the chemical composition of construction materials in the frame of the ESS project. The study demonstrated that ICP-MS is a powerful analytical technique for the ultrasensitive determination of 129I, 90Sr and lanthanides in both artificial and environmental samples such as water and soil. In particular, ICP-MS with a collision cell allows extremely low isotope ratios of iodine to be measured. It was demonstrated that 129I/127I isotope ratios as low as 10^-7 can be measured with an accuracy and precision suitable for distinguishing sample origins. ICP-MS with a collision cell, in particular in combination with cool plasma conditions, reduces the influence of isobaric interferences at m/z = 90 and is therefore well suited for 90Sr analysis in water samples. However, the ICP-CC-QMS applied in this work is limited for the measurement of 90Sr due to the tailing of 88Sr+ and, in particular, Daly detector noise. Hyphenation of capillary electrophoresis with ICP-MS was shown to resolve the atomic ions of all lanthanides and polyatomic interferences. The elimination of polyatomic and isobaric ICP-MS interferences was accomplished without compromising the sensitivity by using the high resolution mode available on ICP-SFMS. The combination of laser ablation with ICP-MS allowed direct micro and local uranium isotope ratio measurements at ultratrace concentrations on the surface of biological samples. In particular, the application of a cooled laser ablation chamber improves the precision and accuracy of uranium isotope ratio measurements by up to one order of magnitude in comparison with a non-cooled laser ablation chamber. In order to reduce the quantification problem, a mono-gas on-line solution-based calibration was implemented, based on the insertion of a DS-5 microflow nebulizer directly into the laser ablation chamber. A micro-local method to determine the lateral element distribution on a NiCrAlY-based alloy and coating after oxidation in air was tested and validated. Calibration procedures involving external calibration, quantification by relative sensitivity coefficients (RSCs) and solution-based calibration were investigated. The analytical method was validated by comparing the LA-ICP-MS results with data acquired by EDX.