900 results for other numerical approaches


Relevance:

80.00%

Publisher:

Abstract:

The research undertaken here was in response to a decision by a major food producer, in about 2009, to consider establishing processing tomato production in northern Australia. The decision was prompted by a lack of water availability in the Goulburn Valley region during the extensive drought that continued until 2011; the high price of water, and the uncertainty that went with it, were important in the decision to look at sites within Queensland. This presented an opportunity to develop a tomato production model for the varieties used in the processing industry and to use it as a case study alongside rice and cotton production. Following some unsuccessful early trials and difficulties associated with the Global Financial Crisis, large-scale studies by the food producer were abandoned. This report uses the data collected before that decision and contrasts the use of crop modelling with the simpler climatic analyses that can be undertaken to investigate the impact of climate change on production systems. Crop modelling can make a significant contribution to our understanding of the impacts of climate variability and climate change because it harnesses a detailed understanding of crop physiology in a way that statistical or other analytical approaches cannot. There is a high overhead, but given that trials are already being conducted for a wide range of crops for a variety of purposes (breeding, fertiliser trials, etc.), it would appear profitable to link researchers with modelling expertise to those undertaking field trials. There are few more cost-effective approaches than modelling for providing a pathway to understanding future climates and their impact on food production.

Relevance:

80.00%

Publisher:

Abstract:

Hyper-redundant robots are characterized by a large number of actuated joints, many more than the number required to perform a given task. Such robots have been proposed and used for many applications involving obstacle avoidance or, more generally, enhanced dexterity in performing tasks. Making effective use of the extra degrees of freedom, or resolution of redundancy, has been an extensive topic of research, and several methods have been proposed in the literature. In this paper, we compare three known methods and show that an algorithm based on a classical curve called the tractrix leads to a more 'natural' motion of the hyper-redundant robot, with displacements diminishing from the end-effector to the fixed base. In addition, since the actuators nearer the base 'see' a greater inertia due to the links farther away, smaller motion of the actuators nearer the base results in better motion of the end-effector than with the other two approaches. We present simulation and experimental results for a prototype eight-link planar hyper-redundant manipulator.
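
As an illustration of the diminishing-displacement behaviour described above, the following minimal sketch resolves redundancy for a planar chain with a simple follow-the-leader update (each joint is pulled along its link towards the joint ahead of it). This is not the tractrix algorithm of the paper, only a rough stand-in exhibiting the same qualitative property; the function name, the unit link lengths and the fact that the base end is left free here are all assumptions for the example.

```python
import numpy as np

def follow_leader(joints, target, link_len):
    """Redundancy resolution sketch for a planar chain of equal-length links.

    'joints' is an (n+1, 2) array of joint positions, joints[-1] being the
    end-effector.  The end-effector is moved to 'target' and each preceding
    joint is pulled along its link direction, so displacements shrink towards
    the base -- the qualitative behaviour reported for the tractrix-based
    resolution (the actual tractrix solution differs in detail).
    """
    joints = joints.copy()
    joints[-1] = target
    for i in range(len(joints) - 2, -1, -1):
        d = joints[i] - joints[i + 1]
        joints[i] = joints[i + 1] + link_len * d / np.linalg.norm(d)
    return joints

# Eight-link planar chain stretched along the x-axis, unit link length.
chain = np.stack([np.arange(9.0), np.zeros(9)], axis=1)
new_chain = follow_leader(chain, target=np.array([8.0, 0.5]), link_len=1.0)
print(np.linalg.norm(new_chain - chain, axis=1))  # displacements decay towards the base
```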

Relevance:

80.00%

Publisher:

Abstract:

Even research models of helicopter dynamics often lead to a large number of equations of motion with periodic coefficients, and Floquet theory is a widely used mathematical tool for their dynamic analysis. Presently, three approaches are used in generating the equations of motion: (1) general-purpose symbolic processors such as REDUCE and MACSYMA, (2) a special-purpose symbolic processor, DEHIM (Dynamic Equations for Helicopter Interpretive Models), and (3) completely numerical approaches. In this paper, comparative aspects of the first two, purely algebraic, approaches are studied by applying REDUCE and DEHIM to the same set of problems. These problems range from a linear model with one degree of freedom to a mildly non-linear multi-bladed rotor model with several degrees of freedom. Further, computational issues in applying Floquet theory are also studied; these concern (1) the equilibrium solution for the periodic forced response, together with the transition matrix for perturbations about that response, and (2) a small number of eigenvalues and eigenvectors of the unsymmetric transition matrix. The study showed the following: (1) compared to REDUCE, DEHIM is far more portable and economical, but it is also less user-friendly, particularly during the learning phase; (2) the problems of finding the periodic response and the eigenvalues are well conditioned.
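
For readers unfamiliar with the Floquet machinery mentioned above, the following hedged sketch shows how the transition (monodromy) matrix of a generic periodic linear system can be obtained by numerical integration over one period and its eigenvalues, the Floquet multipliers, extracted. The damped Mathieu-type system below is an assumed toy example, not one of the rotor models of the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy periodic system x' = A(t) x with period T (a damped Mathieu-type
# oscillator), standing in for the linearised perturbation equations.
T = 2.0 * np.pi
def A(t):
    return np.array([[0.0, 1.0],
                     [-(1.0 + 0.3 * np.cos(t)), -0.05]])

def rhs(t, x):
    return A(t) @ x

# Monodromy (Floquet transition) matrix: integrate each column of the
# identity matrix over one period.
n = 2
Phi = np.zeros((n, n))
for j in range(n):
    e = np.zeros(n); e[j] = 1.0
    sol = solve_ivp(rhs, (0.0, T), e, rtol=1e-10, atol=1e-12)
    Phi[:, j] = sol.y[:, -1]

multipliers = np.linalg.eigvals(Phi)     # Floquet multipliers
exponents = np.log(multipliers) / T      # Floquet exponents
print("stable" if np.all(np.abs(multipliers) < 1.0) else "unstable", multipliers)
```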

Relevance:

80.00%

Publisher:

Abstract:

The questions that one should answer in engineering computations, whether deterministic, probabilistic/randomized, or heuristic, are (i) how good the computed results/outputs are and (ii) what the cost is in terms of the amount of computation and the amount of storage used to obtain them. The absolutely error-free quantities, as well as the completely errorless computations carried out in a natural process, can never be captured by any means at our disposal. While the computations, including the input real quantities, in nature/natural processes are exact, the computations that we perform on a digital computer, or in embedded form, are never exact. The input data for such computations are also never exact, because any measuring instrument has an inherent error of a fixed order associated with it, and this error, as a matter of hypothesis and not as a matter of assumption, is not less than 0.005 per cent. Here by error we imply relative error bounds. The fact that the exact error is never known, under any circumstances and in any context, implies that the term error means nothing but error bounds. Further, in engineering computations it is the relative error, or equivalently the relative error bounds (and not the absolute error), that is supremely important in providing information about the quality of the results/outputs. Another important fact is that inconsistency and/or near-inconsistency in nature, i.e., in problems created from nature, is completely nonexistent, while in our modelling of natural problems we may introduce inconsistency or near-inconsistency due to human error, due to the inherent non-removable error associated with any measuring device, or due to assumptions introduced to make the problem solvable, or more easily solvable, in practice. Thus if we discover any inconsistency, or possibly any near-inconsistency, in a mathematical model, it is certainly due to one or more of these three factors. We do, however, go ahead and solve such inconsistent/near-inconsistent problems and obtain results that can be useful in real-world situations. The talk considers several deterministic, probabilistic, and heuristic algorithms in numerical optimisation, in other numerical and statistical computations, and in PAC (probably approximately correct) learning models. It highlights the quality of the results/outputs by specifying relative error bounds along with the associated confidence level, and the cost, viz., the amount of computation and of storage, through complexity. It points out the limitations of error-free computation (wherever possible, i.e., where the number of arithmetic operations is finite and known a priori) as well as of the use of interval arithmetic. Further, the interdependence among the error, the confidence, and the cost is discussed.
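
To make the remark on interval arithmetic concrete, here is a minimal sketch of naive interval arithmetic; the Interval class and the use of the 0.005 per cent bound below are illustrative assumptions. It shows the dependency problem: because the bounds do not know that the two operands are the same quantity, even x - x is not reported as exactly zero, and the overestimation compounds as the quantity is reused.

```python
class Interval:
    """Minimal closed-interval arithmetic, just enough to show the
    dependency problem that limits naive interval computations."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)
    def __mul__(self, other):
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))
    def __repr__(self):
        return f"[{self.lo:.10g}, {self.hi:.10g}]"

# A measured quantity with the 0.005 per cent relative error bound mentioned above.
x = Interval(1.0 - 5e-5, 1.0 + 5e-5)
print(x - x)          # not [0, 0]: the bound ignores that both operands are the same x
print(x * x - x * x)  # the overestimation grows with each reuse of x
```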

Relevance:

80.00%

Publisher:

Abstract:

The goal of optimization in vehicle design is often blurred by the myriad requirements belonging to attributes that may not be closely related. If solutions are sought by optimizing attribute-performance objectives separately, starting from a common baseline design configuration as in a traditional design environment, it becomes an arduous task to integrate the potentially conflicting solutions into one satisfactory design. It may thus be more desirable to carry out a combined multi-disciplinary design optimization (MDO) with vehicle weight as the objective function and cross-functional attribute performance targets as constraints. For the particular case of vehicle body structure design, the initial design is likely to be arrived at taking into account styling, packaging and market-driven requirements. The problem with performing a combined cross-functional optimization is the time associated with running CAE algorithms that can provide a single optimal solution for heterogeneous areas such as NVH and crash safety. In the present paper, a practical MDO methodology is suggested that can be applied to weight optimization of automotive body structures by specifying constraints on frequency and crash performance. Because of the reduced number of cases to be analyzed for crash safety in comparison with other MDO approaches, the present methodology can generate a single size-optimized solution without having to resort to empirical techniques such as response-surface-based prediction of crash performance and the associated successive response-surface updating for convergence. An example of weight optimization of the spaceframe-based BIW of an aluminum-intensive vehicle is given to illustrate the steps involved in the current optimization process.
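
As a schematic of the kind of size optimization described, the sketch below minimizes a mass objective subject to a first-mode frequency constraint and a crash-intrusion constraint using SciPy's SLSQP. The three gauge variables, the surrogate functions mass, first_mode_hz and crash_intrusion_mm, and all numerical targets are hypothetical placeholders, not the paper's CAE models or results.

```python
import numpy as np
from scipy.optimize import minimize

# Design variables: three member gauges (mm).  The responses below are
# hypothetical surrogates standing in for the real CAE models.
def mass(t):                      # kg, grows with total gauge
    return 40.0 + 12.0 * np.sum(t)

def first_mode_hz(t):             # thicker members raise the first mode
    return 18.0 + 3.0 * np.sqrt(np.sum(t**2))

def crash_intrusion_mm(t):        # thicker front members reduce intrusion
    return 220.0 / (1.0 + 0.8 * t[0] + 0.5 * t[1])

t0 = np.array([2.0, 2.0, 2.0])
cons = [{"type": "ineq", "fun": lambda t: first_mode_hz(t) - 28.0},       # NVH target
        {"type": "ineq", "fun": lambda t: 90.0 - crash_intrusion_mm(t)}]  # crash target
res = minimize(mass, t0, method="SLSQP", bounds=[(0.8, 4.0)] * 3, constraints=cons)
print(res.x, mass(res.x))  # size-optimized gauges and resulting mass
```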

Relevance:

80.00%

Publisher:

Abstract:

We show that the upper bound for the central magnetic field of a super-Chandrasekhar white dwarf calculated by Nityananda and Konar [Phys. Rev. D 89, 103017 (2014)] and, in the concerned comment by the same authors, against our work [U. Das and B. Mukhopadhyay, Phys. Rev. D 86, 042001 (2012)] is erroneous. This in turn strengthens the argument in favor of the stability of the recently proposed magnetized super-Chandrasekhar white dwarfs. We also point out several other numerical errors in their work. Overall we conclude that the arguments put forth by Nityananda and Konar are misleading.

Relevance:

80.00%

Publisher:

Abstract:

Turbidity measurement for the absolute coagulation rate constant of suspensions has been widely adopted because of its simplicity and easy implementation. A key step in deriving the rate constant from experimental data is the theoretical evaluation of the so-called optical factor involved in calculating the extinction cross section of the doublets formed in the aggregation. In a previous paper, we showed that, compared with other theoretical approaches, the T-matrix method provides a robust solution to this problem and is effective both in extending the applicability range of the turbidity methodology and in increasing measurement accuracy. This paper provides a more comprehensive discussion of the physical insight gained by using the T-matrix method in turbidity measurement, together with the associated technical details. In particular, the importance of using correct values of the refractive indices of the colloidal particles and the surrounding medium in the calculation is addressed, because these indices generally vary with the wavelength of the incident light. The comparison of calculated results with experiments shows that the T-matrix method can correctly calculate optical factors even for large particles, whereas other existing theories cannot. In addition, calculated values of the optical factor obtained by the T-matrix method are tabulated for a range of particle radii and incident-light wavelengths.
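
As a reminder of how the cross sections enter the measurement, the sketch below extracts the rate constant from the initial turbidity slope using standard early-time Smoluchowski kinetics: singlets are lost at rate k11*N0^2 and doublets are formed at half that rate, so dtau/dt at t = 0 equals k11*N0^2*(C2/2 - C1). The singlet and doublet extinction cross sections are the quantities a T-matrix (or other light-scattering) calculation would supply; all numbers below are hypothetical.

```python
# Hypothetical inputs: initial particle number density, measured initial
# turbidity slope, and singlet/doublet extinction cross sections.
N0 = 1.0e14            # particles per m^3
dtau_dt0 = 2.5e-6      # 1/(m*s), initial slope of turbidity vs time
C1 = 3.0e-14           # m^2, singlet extinction cross section
C2 = 6.8e-14           # m^2, doublet extinction cross section

# Early-time kinetics: dN1/dt = -k11*N0**2, dN2/dt = k11*N0**2/2, and
# turbidity tau = N1*C1 + N2*C2, hence dtau/dt|0 = k11*N0**2*(C2/2 - C1).
k11 = dtau_dt0 / (N0**2 * (C2 / 2.0 - C1))
print(f"absolute coagulation rate constant k11 = {k11:.3e} m^3/s")
```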

Relevance:

80.00%

Publisher:

Abstract:

This paper is a review of Agenda Setting theory. It analyzes the theoretical context of the theory's emergence, its antecedents, and its evolution within studies of the relationship between the media and public opinion. It also describes a series of cases in which the agenda-setting function has been studied, both abroad and in Argentina. Finally, it outlines a proposal for complementing this perspective with other theoretical approaches, with a view to achieving a more integral outlook, one that understands the producers of information as actors embedded in a community whose values they express and redefine.

Relevance:

80.00%

Publisher:

Abstract:

An infinite elastic solid containing a doubly periodic parallelogrammic array of cylindrical inclusions under longitudinal shear is studied. A rigorous and effective analytical method for the exact solution is developed by combining Eshelby's equivalent inclusion concept with new results for doubly quasi-periodic Riemann boundary value problems. Numerical results show the dependence of the stress concentrations in such heterogeneous materials on the parameters of the periodic microstructure. The overall longitudinal shear modulus of composites with periodically distributed fibers is also studied. Several problems of practical importance, such as those of doubly periodic holes or rigid inclusions, singly periodic inclusions and a single inclusion, are solved or resolved as special cases. The present method can provide benchmark results for other numerical and approximate methods.
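
For comparison with such benchmark results, a classical closed-form estimate of the overall longitudinal (antiplane) shear modulus is often used as a reference point. The sketch below evaluates the composite-cylinder (Hashin-type) formula, which is not the method of the paper, and checks the limiting cases of rigid inclusions and holes mentioned above; the volume fraction and moduli are arbitrary illustrative values.

```python
def longitudinal_shear_modulus(Gm, Gf, c):
    """Classical composite-cylinder (Hashin-type) estimate of the effective
    longitudinal (antiplane) shear modulus of a fibre composite with matrix
    modulus Gm, fibre modulus Gf and fibre volume fraction c."""
    return Gm * (Gf * (1 + c) + Gm * (1 - c)) / (Gf * (1 - c) + Gm * (1 + c))

Gm = 1.0
print(longitudinal_shear_modulus(Gm, Gf=1e9, c=0.4))   # nearly rigid fibres
print(longitudinal_shear_modulus(Gm, Gf=1e-9, c=0.4))  # holes (vanishing stiffness)
```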

Relevance:

80.00%

Publisher:

Abstract:

A numerical study of turbulent flow in a straight duct of square cross-section is presented. An order-of-magnitude analysis of the 3-D, time-averaged Navier-Stokes equations leads to a parabolic form of the equations. The governing equations, expressed in terms of a new vector-potential formulation, are expanded as a multi-deck structure, with each deck characterized by its dominant physical forces. The resulting equations are solved using a finite-element approach with a bicubic element representation on each cross-sectional plane. The numerical integration along the streamwise direction is carried out with finite-difference approximations until a fully developed state is reached. The computed results agree well with other numerical studies and compare very favorably with the available experimental data. One important outcome of the current investigation is the analytical interpretation that the driving force of the secondary flow in a square duct comes mainly from the second-order terms involving the difference in the gradients of the normal and transverse Reynolds stresses in the axial vorticity equation.
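
For reference, the mean streamwise (axial) vorticity equation for fully developed duct flow is commonly written as below; this is a hedged reconstruction in the usual notation, with x streamwise, (y, z) the cross-plane coordinates and Omega_x the mean axial vorticity. The last two terms are the Reynolds-stress terms identified above as the main driver of the secondary flow.

```latex
V\,\frac{\partial \Omega_x}{\partial y} + W\,\frac{\partial \Omega_x}{\partial z}
  = \nu\left(\frac{\partial^2 \Omega_x}{\partial y^2} + \frac{\partial^2 \Omega_x}{\partial z^2}\right)
  + \frac{\partial^2}{\partial y\,\partial z}\left(\overline{v'^2} - \overline{w'^2}\right)
  + \left(\frac{\partial^2}{\partial z^2} - \frac{\partial^2}{\partial y^2}\right)\overline{v'w'},
\qquad
\Omega_x = \frac{\partial W}{\partial y} - \frac{\partial V}{\partial z}.
```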

Relevance:

80.00%

Publisher:

Abstract:

Many sources of information that discuss current problems of food security point to the importance of farmed fish as an ideal food source that can be grown by poor farmers (Asian Development Bank 2004). Furthermore, the development of improved strains of fish suitable for low-input aquaculture, such as Tilapia, has demonstrated the feasibility of an approach that combines “cutting edge science” with accessible technology as a means of improving the nutrition and livelihoods of both the urban poor and poor farmers in developing countries (Mair et al. 2002). However, the use of improved strains of fish as a means of reducing hunger and improving livelihoods has proved difficult to sustain, especially as a public good, when the external (development) funding devoted to this area is minimal. In addition, the more complicated problem of delivering an aquaculture system, not just improved fish strains and the technology, can present difficulties and may go explicitly unrecognized (from Sissel Rogne, as cited by Silje Rem 2002). Thus, the involvement of private partners has featured prominently in the strategy for transferring technology related to improved Tilapia strains to the public. Partnering with the private sector in delivery schemes aimed at the poor should take into account both the public-goods aspect and the requirement that the traits selected for breeding “improved” strains meet the actual needs of resource-poor farmers. Other dissemination approaches involving the public sector may require a large investment in capacity building. However, the use of public-sector institutions as delivery agents helps maintain the “public good” nature of the products.

Relevance:

80.00%

Publisher:

Abstract:

One of the major concerns in an Intelligent Transportation System (ITS) scenario, such as that which may be found on a long-distance train service, is the provision of efficient communication services, satisfying users' expectations and fulfilling even highly demanding application requirements, such as safety-oriented services. In an ITS scenario, it is common to have a significant number of onboard devices that form a cluster of nodes (a mobile network) demanding connectivity to outside networks. This demand has to be satisfied without service disruption; consequently, the mobility of the mobile network has to be managed. Due to the nature of mobile networks, efficient and lightweight protocols are desired in the ITS context to ensure adequate service performance. However, security is also a key factor in this scenario. Since the management of mobility is essential for providing communications, the protocol that manages this mobility has to be protected. Furthermore, there are safety-oriented services in this scenario, so user application data should also be protected. Nevertheless, providing security is expensive in terms of efficiency. Based on these considerations, we have developed a solution for managing network mobility in ITS scenarios: the NeMHIP protocol. This approach provides secure management of network mobility in an efficient manner. In this article, we present the protocol and the strategy developed to keep its security and efficiency at satisfactory levels. We also present the analytical models developed to analyze quantitatively the efficiency of the protocol. More specifically, we have developed models for assessing it in terms of signaling cost, which demonstrate that NeMHIP generates up to 73.47% less signaling than other relevant approaches. The results obtained therefore demonstrate that NeMHIP is the most efficient and secure solution for providing communications in mobile network scenarios such as an ITS context.
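
To illustrate the style of analytical signaling-cost model mentioned above, the sketch below compares two protocols by the signaling they generate per handover. The message inventories, sizes, hop counts and handover rate are all hypothetical placeholders: this is not NeMHIP's actual model and it does not reproduce the 73.47% figure.

```python
# Generic per-handover signaling-cost model of the kind used to compare
# network-mobility protocols: cost = handover rate * sum(message size * hops).

def signaling_cost(handover_rate, messages):
    """messages: list of (size_in_bytes, hops_traversed) sent per handover."""
    return handover_rate * sum(size * hops for size, hops in messages)

handover_rate = 1 / 120.0                            # one handover every two minutes (assumed)
protocol_a = [(96, 6), (96, 6), (76, 6), (76, 6)]    # hypothetical binding update/ack exchanges
protocol_b = [(120, 4), (120, 4)]                    # a leaner hypothetical exchange

ca, cb = (signaling_cost(handover_rate, m) for m in (protocol_a, protocol_b))
print(f"relative saving of B over A: {100 * (ca - cb) / ca:.1f}%")
```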

Relevance:

80.00%

Publisher:

Abstract:

The learning of probability distributions from data is a ubiquitous problem in the fields of Statistics and Artificial Intelligence. During the last decades, several learning algorithms have been proposed for learning probability distributions based on decomposable models, owing to their advantageous theoretical properties. Some of these algorithms can be used to search for a maximum likelihood decomposable model with a given maximum clique size, k, which controls the complexity of the model. Unfortunately, the problem of learning a maximum likelihood decomposable model with a given maximum clique size is NP-hard for k > 2. In this work, we propose a family of algorithms that approximates this problem with a computational complexity of O(k · n^2 log n) in the worst case, where n is the number of random variables involved. The structures of the decomposable models that solve the maximum likelihood problem are called maximal k-order decomposable graphs. Our proposals, called fractal trees, construct a sequence of maximal i-order decomposable graphs, for i = 2, ..., k, in k − 1 steps. At each step, the algorithms follow a divide-and-conquer strategy based on the particular features of this type of structure. Additionally, we propose a prune-and-graft procedure which transforms a maximal k-order decomposable graph into another one with increased likelihood. We have implemented two particular fractal tree algorithms, called parallel fractal tree and sequential fractal tree. These algorithms can be considered a natural extension of Chow and Liu's algorithm from k = 2 to arbitrary values of k. Both algorithms have been compared against other efficient approaches in artificial and real domains, and they have shown competitive behavior in dealing with the maximum likelihood problem. Due to their low computational complexity, they are especially recommended for high-dimensional domains.
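
For context, the k = 2 base case that the fractal-tree algorithms extend is the classical Chow-Liu procedure: compute pairwise empirical mutual information and take a maximum-weight spanning tree. The sketch below is a self-contained illustration of that base case only, not of the fractal-tree algorithms themselves; the synthetic binary data and the use of networkx are assumptions for the example.

```python
import numpy as np
import networkx as nx

def mutual_information(x, y):
    """Empirical mutual information between two discrete columns."""
    joint = np.zeros((x.max() + 1, y.max() + 1))
    for a, b in zip(x, y):
        joint[a, b] += 1
    joint /= joint.sum()
    px, py = joint.sum(1, keepdims=True), joint.sum(0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / (px @ py)[nz])).sum())

def chow_liu_tree(data):
    """Maximum-likelihood tree-structured (k = 2 decomposable) model:
    maximum-weight spanning tree under pairwise mutual information."""
    n = data.shape[1]
    g = nx.Graph()
    for i in range(n):
        for j in range(i + 1, n):
            g.add_edge(i, j, weight=mutual_information(data[:, i], data[:, j]))
    return nx.maximum_spanning_tree(g)

# Synthetic binary data: X1 and X2 depend on X0, X3 depends on X1.
rng = np.random.default_rng(0)
x0 = rng.integers(0, 2, 2000)
x1 = (x0 ^ (rng.random(2000) < 0.2)).astype(int)
x2 = (x0 ^ (rng.random(2000) < 0.3)).astype(int)
x3 = (x1 ^ (rng.random(2000) < 0.3)).astype(int)
data = np.stack([x0, x1, x2, x3], axis=1)
print(sorted(chow_liu_tree(data).edges()))   # expected edges: (0,1), (0,2), (1,3)
```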

Relevance:

80.00%

Publisher:

Abstract:

This paper takes a new look at an old question: what is the human self? It offers a proposal for theorizing the self from an enactive perspective, as an autonomous system that is constituted through interpersonal relations. It addresses a prevalent issue in the philosophy of cognitive science: the body-social problem. Embodied and social approaches to cognitive identity are in mutual tension. On the one hand, embodied cognitive science risks a new form of methodological individualism, implying a dichotomy not between the outside world of objects and the brain-bound individual, but between body-bound individuals and the outside social world. On the other hand, approaches that emphasize the constitutive relevance of social interaction processes for cognitive identity run the risk of losing the individual in the interaction dynamics and of downplaying the role of embodiment. This paper adopts a middle way and outlines an enactive approach to individuation that is neither individualistic nor disembodied but integrates both approaches. Elaborating on Jonas' notion of needful freedom, it outlines an enactive proposal for understanding the self as co-generated in interactions and relations with others. I argue that the human self is a social existence that is organized in terms of a back and forth between social distinction and participation processes. On this view, the body, rather than being identical with the social self, becomes its mediator.

Relevance:

80.00%

Publisher:

Abstract:

This work aims to build robust and computationally efficient frameworks for the solution of the wax (paraffin) deposition problem from the standpoint of solid-liquid equilibrium. Several thermodynamic models are evaluated for the liquid phase: the Peng-Robinson equation of state and the Ideal Solution, Wilson, UNIQUAC and UNIFAC activity-coefficient models. The solid phase is described by the multi-solid model. The formation of a solid phase is first predicted by a thermodynamic stability test. The system of nonlinear equations characterizing thermodynamic equilibrium, together with the material balance equations, is then solved by three numerical approaches: the multivariate Newton method, Broyden's method and the Newton-Armijo method. Several numerical experiments were conducted to evaluate the computation times and the robustness of the numerical methods under various initial-guess scenarios, for the different models and different mixtures. The results indicate that it is possible to build efficient and robust computational frameworks, which can be coupled, for example, to pipeline flow simulators.
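
As an illustration of one of the solvers compared above, the sketch below implements a damped Newton iteration with Armijo backtracking on the residual norm and applies it to a small toy 2x2 system. The toy system, tolerances and backtracking parameters are assumptions for the example and do not correspond to the thermodynamic equilibrium equations of this work.

```python
import numpy as np

def newton_armijo(F, J, x0, tol=1e-10, max_iter=50, alpha=1e-4):
    """Newton's method with Armijo backtracking on the residual norm."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        f = F(x)
        if np.linalg.norm(f) < tol:
            break
        step = np.linalg.solve(J(x), -f)
        lam = 1.0
        # Backtrack until sufficient decrease of ||F|| is achieved.
        while np.linalg.norm(F(x + lam * step)) > (1 - alpha * lam) * np.linalg.norm(f):
            lam *= 0.5
            if lam < 1e-8:
                break
        x = x + lam * step
    return x

# Toy 2x2 system standing in for the equilibrium/material-balance equations.
F = lambda x: np.array([x[0]**2 + x[1]**2 - 4.0, np.exp(x[0]) + x[1] - 1.0])
J = lambda x: np.array([[2 * x[0], 2 * x[1]], [np.exp(x[0]), 1.0]])
sol = newton_armijo(F, J, x0=[1.0, 1.0])
print(sol, F(sol))  # root and (near-zero) residual
```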