875 results for Linear Multi-step Formulae


Relevance: 30.00%

Abstract:

Thesis (Master's)--University of Washington, 2016-08

Relevance: 30.00%

Abstract:

The selection of a set of requirements from all the requirements previously defined by customers is an important process, repeated at the beginning of each development step when an incremental or agile software development approach is adopted. The set of selected requirements will be developed during the current iteration. This selection problem can be reformulated as a search problem, allowing its treatment with metaheuristic optimization techniques. This paper studies how to apply Ant Colony Optimization algorithms to select requirements. First, we describe this problem formally, extending an earlier version of the problem, and introduce a method based on the Ant Colony System to find a variety of efficient solutions. The performance achieved by the Ant Colony System is compared with that of the Greedy Randomized Adaptive Search Procedure (GRASP) and the Non-dominated Sorting Genetic Algorithm by means of computational experiments carried out on two instances of the problem, constructed from data provided by experts.
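
As an illustration of the construction step such an approach relies on, the sketch below builds one candidate requirement subset with an Ant Colony System-style pseudo-random-proportional rule. The pheromone vector `tau`, heuristic values `eta`, costs, budget, and parameter values are assumptions for illustration, not the paper's exact formulation.

```python
import random

def construct_solution(tau, eta, cost, budget, q0=0.9, alpha=1.0, beta=2.0):
    """Build one requirement subset with an ACS-style selection rule.

    tau[i]  : pheromone on requirement i
    eta[i]  : heuristic desirability (e.g., client value / cost)
    cost[i] : development effort of requirement i
    budget  : effort available in the iteration
    """
    selected, remaining, spent = [], set(range(len(tau))), 0.0
    while True:
        feasible = [i for i in remaining if spent + cost[i] <= budget]
        if not feasible:
            break
        scores = {i: (tau[i] ** alpha) * (eta[i] ** beta) for i in feasible}
        if random.random() < q0:                      # exploitation
            i = max(scores, key=scores.get)
        else:                                         # biased exploration
            total = sum(scores.values())
            r, acc = random.uniform(0, total), 0.0
            for i in feasible:
                acc += scores[i]
                if acc >= r:
                    break
        selected.append(i)
        remaining.discard(i)
        spent += cost[i]
    return selected
```

In a full Ant Colony System, many such subsets would be constructed per iteration, followed by local and global pheromone updates on the selected requirements.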

Relevance: 30.00%

Abstract:

This paper addresses the construction and structuring of a technological niche – i.e. a protected space where promising but still underperforming technologies are stabilized and articulated with societal needs – and discusses the processes that influence niche development and may enable niche breakout. In theoretical terms the paper is grounded in the multi-level approach to sustainability transitions, and particularly in the niche literature. But it also attempts to address the limitations of this literature with regard to the spatial dimension of niche development. It is argued that technological niches can transcend the narrow territorial boundaries to which they are often confined and encompass communities and actions that span several spatial levels, without losing some territorial embeddedness. It is further proposed that these features shape the niche trajectory and therefore need to be explicitly considered by the niche theoretical framework. To address this problem the paper builds on and extends the socio-cognitive perspective on technology development, introducing a further dimension – space – which broadens the concept of technological niche and makes it possible to better capture the complexity of niche behaviour. This extended framework is applied to the case of an emerging renewable energy technology – wave energy – which exhibits a particularly slow and non-linear development trajectory. The empirical analysis starts by examining how an "overall niche space" in wave energy was spatially constructed over time. It then investigates in greater detail the niche development processes that took place in Portugal, a country that was among the pioneers in the field and whose actors have been engaged, from very early stages, in activities conducted at various spatial levels. Through this combined analysis, the paper seeks to understand whether and how niche development is shaped by processes taking place at different spatial levels. More specifically, it investigates the interplay between territorial and relational elements in niche development, and how these different dynamics influence the performance of the niche processes and impact the overall niche trajectory. The results confirm the niche's multi-spatial dynamics, showing that niche development is shaped by the interplay between a relational niche space, constructed by actors' actions and interactions on and across levels, and the territorial effects introduced by these actors' embeddedness in particular geographical and institutional settings. They contribute to a more precise understanding of the processes that can accelerate or slow down the trajectory of a technological niche. In addition, the results shed some light on the niche activities conducted in, or originating from, a specific territorial setting – Portugal – offering insights into the behaviour of key actors and its implications for the positioning of the country in the emerging field, which can be relevant for the formulation of strategies and policies for this area.

Relevance: 30.00%

Abstract:

We present a detailed analysis of the application of a multi-scale Hierarchical Reconstruction method to a family of ill-posed linear inverse problems. Given observations of an unknown quantity of interest and the corresponding observation operators, these inverse problems are concerned with the recovery of the unknown from its observations. Although the observation operators we consider are linear, they are inevitably ill-posed in various ways. We recall in this context the classical Tikhonov regularization method, with a stabilizing function that targets the specific ill-posedness of the observation operators and preserves desired features of the unknown. Having studied the mechanism of Tikhonov regularization, we propose a multi-scale generalization of it, the so-called Hierarchical Reconstruction (HR) method. The first introduction of the HR method can be traced back to the Hierarchical Decomposition method in image processing. The HR method successively extracts information from the previous hierarchical residual into the current hierarchical term at a finer hierarchical scale. As the sum of all the hierarchical terms, the hierarchical sum produced by the HR method provides a reasonable approximate solution to the unknown when the observation matrix satisfies certain conditions with specific stabilizing functions. Compared with the Tikhonov regularization method on the same inverse problems, the HR method is shown to decrease the total number of iterations, reduce the approximation error, and offer self-control of the approximation distance between the hierarchical sum and the unknown, thanks to its use of a ladder of finitely many hierarchical scales. We report numerical experiments supporting our claims about these advantages of the HR method over the Tikhonov regularization method.
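
For concreteness, the sketch below records the two formulations being compared, written for a linear observation operator A, data f, and a stabilizing functional J. The dyadic scale schedule is a common choice from the hierarchical decomposition literature and is an assumption here, not necessarily the thesis's exact setup.

```latex
% Tikhonov regularization with stabilizing functional J:
\hat{u} \;=\; \arg\min_{u} \; \tfrac{1}{2}\|Au - f\|^2 \;+\; \lambda\, J(u)

% Hierarchical Reconstruction: peel off terms u_k at successively finer
% scales (lambda_k = lambda_0 2^{-k} is a common choice, assumed here):
u_k \;=\; \arg\min_{u} \; \tfrac{1}{2}\|Au - r_{k-1}\|^2 \;+\; \lambda_k\, J(u),
\qquad r_k \;=\; r_{k-1} - A u_k, \quad r_{-1} = f

% The approximate solution is the hierarchical sum:
\hat{u}_K \;=\; \sum_{k=0}^{K} u_k
```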

Relevance: 30.00%

Abstract:

Municipal management in any country of the globe requires planning and a balanced allocation of resources. In Brazil, the Law of Budgetary Guidelines (LDO) guides municipal managers toward that balance. This research develops a model that seeks to find a balanced allocation of public resources in Brazilian municipalities, taking the LDO as a parameter. To this end, statistical techniques and multicriteria analysis are used in a first step to define allocation strategies, based on technical considerations provided by the municipal manager. In a second step, a linear-programming-based optimization is presented, in which the objective function is derived from the preferences of the manager and his staff. The statistical representation is presented to support the multicriteria development in the definition of replacement rates through time series. The multicriteria analysis was structured by defining the criteria and alternatives and applying the UTASTAR method to calculate the replacement rates. After these initial settings, a linear programming application was developed to find the optimal allocation of resources in the execution of the municipal budget. Data from the budget of a municipality in southwestern Paraná were used to apply the model and analyse the results.
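
The second step can be pictured with a toy linear program of the kind described, sketched below with scipy.optimize.linprog. All sector names, weights, and bounds are hypothetical placeholders: the weights stand in for manager preferences (e.g., UTASTAR-derived replacement rates), not for values from the study.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical example: split a municipal budget across four sectors.
weights = np.array([0.40, 0.25, 0.20, 0.15])   # health, education, works, admin
budget = 100.0                                  # total available (monetary units)
lower = np.array([20.0, 15.0, 5.0, 5.0])        # legal/LDO-style minimums
upper = np.array([60.0, 50.0, 40.0, 30.0])      # policy ceilings

# linprog minimizes, so negate the weights to maximize total preference.
res = linprog(
    c=-weights,
    A_ub=np.ones((1, 4)), b_ub=[budget],        # spend at most the budget
    bounds=list(zip(lower, upper)),
)
print(res.x, -res.fun)                          # allocation and preference score
```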

Relevance: 30.00%

Abstract:

Background: Among other causes, the long-term outcome of hip prostheses in dogs is determined by aseptic loosening. Prosthesis complications can be prevented by optimizing the tribological system, which ultimately results in improved implant duration. In this context, a computerized model for the calculation of hip joint loadings during different motions would be of benefit. As a first step in the development of such an inverse dynamic multi-body simulation (MBS) model, we here present the setup of a canine hind limb model applicable to the calculation of ground reaction forces. Methods: The anatomical geometries of the MBS model were established using computed tomography (CT) and magnetic resonance imaging (MRI) data. The CT data were collected from the pelvis, femora, tibiae and pads of a mixed-breed adult dog. Geometric information about 22 muscles of the pelvic extremity of 4 mixed-breed adult dogs was determined using MRI. Kinematic and kinetic data obtained by motion analysis of a clinically healthy dog during a gait cycle (1 m/s) on an instrumented treadmill were used to drive the model in the multi-body simulation. Results and Discussion: The vertical ground reaction forces (z-direction) calculated by the MBS system show a maximum deviation from the treadmill measurements of 1.75% BW for the left and 4.65% BW for the right hind limb. The calculated peak ground reaction forces in the z- and y-directions were found to be comparable to the treadmill measurements, whereas the curve characteristics of the forces in the y-direction were not in complete alignment. Conclusion: It could be demonstrated that the developed MBS model is suitable for simulating the ground reaction forces of dogs during walking. In forthcoming investigations the model will be developed further for the calculation of the forces and moments acting on the hip joint during different movements, which can be of help in the in silico development and testing of hip prostheses.
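
As background, an inverse dynamic MBS model of this kind solves the multi-body equations of motion for the unknown forces, given measured kinematics. The generic textbook form below is shown for illustration only; it is not the specific formulation of this study.

```latex
% Generic inverse-dynamics form: given measured kinematics q, \dot{q},
% \ddot{q}, solve for the unknown joint torques \tau and external
% (e.g., ground reaction) forces F_ext:
M(q)\,\ddot{q} \;+\; C(q,\dot{q})\,\dot{q} \;+\; g(q)
\;=\; \tau \;+\; J(q)^{\top} F_{\mathrm{ext}}
```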

Relevance: 30.00%

Abstract:

The textile industry generates a large volume of effluent with a high organic load, whose intense colour arises from residual dyes. Owing to the environmental implications of this category of contaminant, there is a permanent search for methods to remove these compounds from industrial wastewater. Adsorption is one of the most efficient approaches for such sequestration/remediation, particularly when using inexpensive materials such as agricultural residues (e.g., sugarcane bagasse) and cotton dust waste (CDW) from weaving, in their natural or chemically modified forms. The inclusion of quaternary amino (DEAE+) and methylcarboxylic (CM-) groups in the CDW cellulosic structure confers an ion-exchange capacity on this formerly inert matrix and, consequently, consolidates its ability for electrovalent adsorption of residual textile dyes. The resulting ionic matrices were evaluated for pHpzc and for their retention efficiency for various textile dyes under different experimental conditions, such as initial concentration, temperature and contact time, in order to determine the kinetic and thermodynamic parameters of batch adsorption and thus understand how the process occurs, as interpreted from the respective isotherms. A shift in pHpzc was observed for CM--CDW (6.07) and DEAE+-CDW (9.66) relative to native CDW (6.46), confirming changes in the total surface charge. The ionized matrices were effective in removing all the pure or residual textile dyes evaluated under the various experimental conditions tested. The adsorption kinetics were best fitted by a pseudo-second-order model, and an intraparticle diffusion model suggested that the process takes place in more than one step. The time required for the system to reach equilibrium varied with the initial dye concentration, being shorter in dilute solutions. The Langmuir isotherm model gave the best fit to the experimental data. The maximum adsorption capacity varied for each dye tested and is closely related to the adsorbent/adsorbate interaction and the chemical structure of the dye. A few dyes showed a linear variation of the equilibrium constant Ka with the inversion of temperature, which may have influenced their thermodynamic behaviour. Dyes such as BR 18:1 and AzL showed the features of an endothermic adsorption process (positive ΔH°), while the dye VmL showed the characteristics of an exothermic process (negative ΔH°). The ΔG° values suggested that adsorption occurred spontaneously, except for the BY 28 dye, and the ΔH° values indicated that adsorption occurred by chemisorption. The 31-51% reduction in the biodegradability of the matrices after dye adsorption means that they must undergo a cleaning process before being discarded or recycled, and a regeneration test indicated that the matrices can be reused up to five times without loss of performance. The DEAE+-CDW matrix was efficient in removing colour from a real textile effluent, achieving a 93% decrease in UV-visible spectral area when applied at 15 g of ion-exchanger matrix per litre of coloured wastewater, even in the parallel presence of 50 g L-1 of mordant salts. The amount of coloured matter removed by the synthesized matrices ranged from 40.27 to 98.65 mg g-1 of ionized matrix, depending on the particular chemical structure of each dye.
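
The two fitted models named above have standard forms, reproduced here for reference (q_t is the uptake at time t, q_e the equilibrium uptake, C_e the equilibrium dye concentration; k_2, q_max and K_L are fitted parameters):

```latex
% Pseudo-second-order kinetics (with the linearized form used for fitting):
\frac{dq_t}{dt} = k_2\,(q_e - q_t)^2
\quad\Longleftrightarrow\quad
\frac{t}{q_t} = \frac{1}{k_2\,q_e^{2}} + \frac{t}{q_e}

% Langmuir isotherm:
q_e = \frac{q_{\max}\,K_L\,C_e}{1 + K_L\,C_e}
```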

Relevance: 30.00%

Abstract:

Low temperature is one of the main environmental constraints on rice (Oryza sativa L.) grain yield. Multi-environment studies are known to play a critical role in the sustainability of rice production across diverse environments; however, there are few multi-environment studies of rice in temperate climates. The aim was to study the performance of rice plants in cold environments. Four experimental lines and six cultivars were evaluated at three locations during three seasons. The grain yield data were analyzed with ANOVA, mixed models based on best linear unbiased predictors (BLUPs), and genotype plus genotype × environment interaction (GGE) biplots. A high genotype contribution (> 25%) to grain yield was observed, and the interaction between genotype and location was not very important. Results also showed that 'Quila 241319' was the best experimental line, with the highest grain yield (11.3 t ha-1) and grain yield stability across environments; the commercial cultivars were classified as medium grain yield genotypes.
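
A BLUP-based analysis of this kind can be sketched with a random-intercept mixed model; the snippet below uses statsmodels, with hypothetical column names (grain_yield, genotype, location, season) standing in for the study's data.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format trial data: one row per plot.
data = pd.read_csv("rice_trials.csv")  # columns: grain_yield, genotype, location, season

# Random-intercept model: genotypes as random effects, environments fixed.
# The fitted random effects are the BLUPs of the genotype deviations.
model = smf.mixedlm(
    "grain_yield ~ C(location) + C(season)",
    data,
    groups=data["genotype"],
).fit()

blups = {g: eff["Group"] for g, eff in model.random_effects.items()}
print(sorted(blups.items(), key=lambda kv: -kv[1])[:5])  # top genotypes
```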

Relevance: 30.00%

Abstract:

In this contribution, a system identification procedure for a two-input Wiener model suitable for analysing the disturbance behavior of integrated nonlinear circuits is presented. The identified block model comprises two linear dynamic blocks and one static nonlinear block, which are determined using a parameterized approach. In order to characterize the linear blocks, a correlation analysis using a white-noise input in combination with a model reduction scheme is adopted. After the linear blocks have been characterized, a linear set of equations is set up from the output spectrum under single-tone excitation at each input, whose solution gives the coefficients of the nonlinear block. With this data-based black-box approach, the distortion behavior of a nonlinear circuit under the influence of an interfering signal at an arbitrary input port can be determined. Such an interfering signal can be, for example, an electromagnetic interference signal which couples conductively into the port under consideration. © 2011 Author(s).
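
The correlation step rests on a standard identity: for a zero-mean white-noise input, the input-output cross-correlation of a linear block is proportional to its impulse response. The sketch below illustrates this on a synthetic example; the filter, lengths, and names are illustrative, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_taps = 100_000, 64

u = rng.standard_normal(n)                     # white-noise excitation
h_true = np.exp(-0.2 * np.arange(n_taps))      # unknown linear block (example)
y = np.convolve(u, h_true)[:n]                 # output of the linear block

# R_uy(k) = E[u(t) y(t+k)] = sigma_u^2 * h(k) for a white input.
sigma2 = u.var()
h_est = np.array(
    [np.dot(u[: n - k], y[k:]) / (n - k) for k in range(n_taps)]
) / sigma2

print(np.max(np.abs(h_est - h_true)))          # small estimation error
```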

Relevance: 30.00%

Abstract:

The aim of this paper is to provide a comprehensive study of some linear non-local diffusion problems in metric measure spaces. These include, for example, open subsets of ℝ^N, graphs, manifolds, multi-structures and some fractal sets. For this, we study regularity, compactness, positivity and the spectrum of the stationary non-local operator. We then study the solutions of linear evolution non-local diffusion problems, with emphasis on similarities and differences with the standard heat equation in smooth domains. In particular, we prove weak and strong maximum principles and describe the asymptotic behaviour using spectral methods.
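
As a point of reference, a prototypical operator of this class has the form below, for a metric measure space (X, d, μ) and a non-negative symmetric kernel J; this standard form is given for illustration and is not quoted from the paper.

```latex
% Prototypical stationary non-local diffusion operator:
(Lu)(x) \;=\; \int_{X} J(x,y)\,\bigl(u(y) - u(x)\bigr)\, d\mu(y)

% and the associated linear evolution problem:
u_t(x,t) \;=\; (Lu)(x,t), \qquad u(x,0) = u_0(x)
```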

Relevance: 30.00%

Abstract:

We propose a positive, accurate moment closure for linear kinetic transport equations based on a filtered spherical harmonic (FP_N) expansion in the angular variable. The FP_N moment equations are accurate approximations to linear kinetic equations, but they are known to suffer from the occurrence of unphysical, negative particle concentrations. The new positive filtered P_N (FP_N+) closure is developed to address this issue. The FP_N+ closure approximates the kinetic distribution by a spherical harmonic expansion that is non-negative on a finite, predetermined set of quadrature points. With an appropriate numerical PDE solver, the FP_N+ closure generates particle concentrations that are guaranteed to be non-negative. Under an additional, mild regularity assumption, we prove that as the moment order tends to infinity, the FP_N+ approximation converges, in the L2 sense, at the same rate as the FP_N approximation; numerical tests suggest that this assumption may not be necessary. By numerical experiments on the challenging line source benchmark problem, we confirm that the FP_N+ method indeed produces accurate and non-negative solutions. To apply the FP_N+ closure to problems at large temporal-spatial scales, we develop a positive asymptotic preserving (AP) numerical PDE solver. We prove that the proposed AP scheme maintains stability and accuracy with standard mesh sizes at large temporal-spatial scales, whereas generic numerical schemes require excessive refinement of the temporal-spatial meshes. We also show that the proposed scheme preserves positivity of the particle concentration under a time-step restriction. Numerical results confirm that the proposed AP scheme is capable of solving linear transport equations at large temporal-spatial scales, for which a generic scheme could fail. Constrained optimization problems are involved in the formulation of the FP_N+ closure to enforce non-negativity of the FP_N+ approximation on the set of quadrature points. These optimization problems can be written as strictly convex quadratic programs (CQPs) with a large number of inequality constraints. To efficiently solve the CQPs, we propose a constraint-reduced variant of a Mehrotra predictor-corrector algorithm with a novel constraint selection rule. We prove that, under appropriate assumptions, the proposed optimization algorithm converges globally to the solution at a locally q-quadratic rate. We test the algorithm on randomly generated problems, and the numerical results indicate that the combination of the proposed algorithm and the constraint selection rule outperforms the other constraint-reduced algorithms tested, especially for problems with many more inequality constraints than variables.
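
The constrained optimization behind the closure can be written compactly. The form below is one natural reading of the description above (coefficients closest to the filtered expansion, subject to non-negativity at the quadrature points Ω_q), offered as an illustration rather than the dissertation's exact formulation.

```latex
% Per-cell CQP behind the FP_N+ closure (illustrative form): find
% coefficients u closest to the filtered coefficients \hat{u} such that
% the expansion is non-negative at the quadrature points \Omega_q:
\min_{u} \;\tfrac{1}{2}\,\|u - \hat{u}\|_2^2
\quad \text{s.t.} \quad
\sum_{\ell,m} u_\ell^m\, Y_\ell^m(\Omega_q) \;\ge\; 0,
\qquad q = 1,\dots,Q
```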

Relevance: 30.00%

Abstract:

With its powerful search engines and billions of published pages, the World Wide Web has become the ultimate tool for exploring the human experience. But, despite the advent of the digital revolution, e-books have, at their core, remained remarkably similar to their printed siblings. This has resulted in a clear dichotomy between two ways of reading: on one side, the multi-dimensional world of the Web; on the other, the linearity of books and e-books. My investigation of the literature indicates that attempts to merge these two modes of production, and hence of reading, have focussed on the insertion of interactivity into fiction. As I will show in the Literature Review, a clear thrust of research since the early 1990s, and in my opinion the most significant, has concentrated on presenting the reader with choices that affect the plot. This has resulted in interactive stories in which the structure of the narrative can be altered by the reader of experimental fiction. The interest in this area of research is not surprising, as the interaction of readers with the fabric of the narrative provides fertile ground for exploring, analysing, and discussing issues of plot consistency and continuity. I found in the literature several papers concerned with the effects of hyperlinking on literature, but none about how hyperlinked material and narrative could be integrated without compromising the narrative flow as designed by the author. This led me to think that researchers had accepted hypertextuality and the linear organisation of fiction as antithetical, thereby ignoring the possibility of exploiting the former while preserving the latter. All the works I consulted focussed on the possibilities that hypertext provides to authors (and readers) or on how hypertext literature affects literary criticism. This was true in earlier works by Landow and Harpold and remained true in later works by Bolter and Grusin. To quote another example, in his book Hypertext 3.0, Landow states: "Most who have speculated on the relation between hypertextuality and fiction concentrate [...] on the effects it will have on linear narrative", and "hypertext opens major questions about story and plot by apparently doing away with linear organization" (Landow, 2006, pp. 220, 221). In other words, authors have added narrative elements to Web pages, effectively placing their stories in a subordinate role. By focussing on "opening up" the plots, researchers have missed the opportunity to maintain the integrity of their stories and to use hyperlinked information to provide interactive access to backstory and factual bases. This would represent a missing link between the traditional way of reading, in which readers have no influence on the path the author has laid out for them, and interactive narrative, in which readers choose their way across alternatives, thereby, at least to a certain extent, creating their own path. It would be, to continue the metaphor, as if readers could follow the main path created by the author while being able to get "sidetracked" into exploring hyperlinked material. In Hypertext 3.0, Landow refers to an "Axial structure [of hypertext] characteristic of electronic books and scholarly books with foot-and endnotes" versus a "Network structure of hypertext" (Landow, 2006, p. 70). My research aims at generalising the axial structure and extending it to fiction without losing the linearity at its core.
In creative nonfiction, the introduction of places, scenes, and settings, together with characterisation, brings the facts to life without altering them; much fiction, in turn, draws on facts to provide a foundation, or narrative elements, for the work. But how can the reader distinguish between facts and representations? For example, to what extent do dialogues and perceptions present what was actually said and thought? Some authors of creative nonfiction use endnotes to provide comments and citations while minimising disruption to the flow of the main text, but these are limited in scope and constrained in space. Each reader should be able to enjoy the narrative as if it were a novel, but also to explore the facts at the level of detail s/he needs. For this to be possible, endnotes should provide a Web-like way of exploring in more detail what the author has already researched. My research aims to develop ways of integrating narrative prose and hyperlinked documents into a Hyperbook. Its goal is to create a new writing paradigm in which a story incorporates a gateway to detailed information. While creative nonfiction uses the techniques of fictional writing to provide reportage of actual events, and fact-based fiction illuminates the affectual dimensions of what happened (e.g., Kate Grenville's The Secret River and Hilary Mantel's Wolf Hall), Hyperbooks go one step further and link narrative prose to the details of the events on which the narrative is based or, more generally, to information the reader might find of interest. My dissertation introduces and utilises Hyperbooks to engage in two parallel types of investigation: to build knowledge about Italian WWII POWs held in Australia and present it as part of a novella in Hyperbook format, and to develop a new piece of technology capable of extending the writing and reading process.

Relevance: 30.00%

Abstract:

A classical approach to two- and multi-stage optimization problems under uncertainty is scenario analysis. To do so, the uncertainty in some of the problem data is modelled by random vectors with finite, stage-specific supports. Each of these realizations represents a scenario. Using scenarios, it is possible to study simpler versions (subproblems) of the original problem. As a scenario decomposition technique, the progressive hedging algorithm is one of the most popular methods for solving multi-stage stochastic programming problems. Despite the complete decomposition by scenario, the efficiency of the progressive hedging method is very sensitive to certain practical aspects, such as the choice of the penalty parameter and the handling of the quadratic term in the augmented Lagrangian objective function. For the choice of the penalty parameter, we examine some of the popular methods and propose a new adaptive strategy that aims to better track the progress of the algorithm. Numerical experiments on examples of multi-stage stochastic linear problems suggest that most existing techniques may either converge prematurely to a suboptimal solution or converge to the optimal solution, but at a very slow rate. In contrast, the new strategy appears robust and efficient: it converged to optimality in all our experiments and was the fastest in most cases. Regarding the handling of the quadratic term, we review existing techniques and propose the idea of replacing the quadratic term with a linear term. Although we have yet to test our method, we expect it to alleviate some numerical and theoretical difficulties of the progressive hedging method.
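
The loop at the heart of the method can be sketched in a few lines. The version below is the standard progressive hedging update (scenario subproblems penalized toward the implementable average, followed by a multiplier step) with a fixed penalty rho; the adaptive strategies studied here would adjust rho between iterations. Everything in the sketch is an illustration, not this work's implementation.

```python
import numpy as np

def progressive_hedging(solve_subproblem, probs, n_vars, rho=1.0, iters=50):
    """Minimal sketch of the standard progressive hedging loop.

    solve_subproblem(s, w, xbar, rho) must return the minimizer of
        f_s(x) + w @ x + (rho / 2) * ||x - xbar||^2
    for scenario s; probs are the scenario probabilities.
    """
    S = len(probs)
    x = np.zeros((S, n_vars))          # per-scenario decisions
    w = np.zeros((S, n_vars))          # non-anticipativity multipliers
    xbar = np.zeros(n_vars)
    for _ in range(iters):
        for s in range(S):
            x[s] = solve_subproblem(s, w[s], xbar, rho)
        xbar = probs @ x               # probability-weighted average
        w += rho * (x - xbar)          # multiplier update
    return xbar
```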

Relevance: 30.00%

Abstract:

The first goal of this study is to analyse a real-world multiproduct onshore pipeline system in order to verify its hydraulic configuration and operational feasibility, by constructing a simulation model step by step from its elementary building blocks that reproduces the operation of the real system as precisely as possible. The second goal is to develop this simulation model into a user-friendly tool that one could use to find an "optimal" or "best" product batch schedule for a one-year time period. Such a batch schedule could change dynamically as perturbations that influence the behaviour of the entire system occur during operation. The result of the simulation, the "best" batch schedule, is the one that minimizes the operational costs of the system. The costs involved in the simulation are inventory costs, interface costs, pumping costs, and penalty costs assigned to any unforeseen situations. The key factor determining the performance of the simulation model is the way time is represented. In our model, an event-based discrete-time representation is selected as the most appropriate for our purposes. This means that the time horizon is divided into intervals of unequal length based on events that change the state of the system. These events are the arrivals/departures of the tanker ships, the opening and closing of the loading/unloading valves of the storage tanks at both terminals, and the arrivals/departures of trains/trucks at the Delivery Terminal. In the feasibility study we analyse the system's operational performance under different Head Terminal storage capacity configurations. For these alternative configurations we evaluated the effect of tanker ship delays of different magnitudes on the number of critical events and product interfaces generated, on the duration of pipeline stoppages, on the satisfaction of product demand, and on the operating costs. Based on the results and the bottlenecks identified, we propose modifications to the original setup.
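
The event-based time representation described above can be pictured with a priority-queue event loop: the clock jumps between state-changing events rather than advancing in fixed steps. The sketch below is a minimal illustration; event names, times, and payloads are invented, not taken from the study.

```python
import heapq

events = []  # heap of (time, sequence, event_name, payload)
seq = 0

def schedule(time, name, payload=None):
    """Queue an event; seq breaks ties so payloads are never compared."""
    global seq
    heapq.heappush(events, (time, seq, name, payload))
    seq += 1

schedule(0.0, "tanker_arrival", {"volume": 80_000})
schedule(6.0, "valve_open", {"tank": "HT-3"})
schedule(30.0, "train_departure", {"product": "diesel"})

clock = 0.0
while events:
    clock, _, name, payload = heapq.heappop(events)   # jump to next event
    # ...update inventories, interfaces, and costs here...
    print(f"t={clock:6.1f}h  {name}  {payload}")
    if name == "tanker_arrival":                      # events spawn events
        schedule(clock + 12.0, "valve_close", {"tank": "HT-3"})
```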
