968 results for Nonsmooth Calculus


Relevance:

10.00%

Publisher:

Abstract:

The work presented focuses on determining the construction costs of small- and medium-diameter High-Density Polyethylene (HDPE) pipelines for basic sanitation, based on the methodology described in the book Custos de Construção e Exploração – Volume 9 of the series Gestão de Sistemas de Saneamento Básico, by Lencastre et al. (1994). That methodology is grounded in construction-management procedures, and to that end unit costs were estimated for several groups of works. According to Lencastre et al. (1994), “these groups cover earthworks, piping, fittings and the corresponding operating devices, paving, and the construction site, the site part encompassing the ancillary works associated with the job.” The costs were obtained by analysing several budgets for sanitation works resulting from recently held public-works tenders. To turn this methodology into an effective tool, spreadsheets were built that yield realistic estimates of the execution costs of a given work at stages prior to design development, namely while preparing a system master plan or carrying out economic and financial feasibility studies, that is, even before any preliminary sizing of the system components exists. Another technique applied to assess the input data was “Robust Data Analysis”, Pestana (1992). This methodology allowed the data to be examined in greater detail before hypotheses were formulated for the risk analysis. Its main idea is a highly flexible examination of the data, often before even comparing them with a probabilistic model. For a large data set, this technique thus made it possible to analyse the dispersion of the values found for the various groups of works mentioned above. Once the data had been collected and processed, a Risk Analysis methodology was applied through Monte Carlo Simulation. This risk analysis was carried out with a software tool from Palisade, @Risk, available at the Department of Civil Engineering. This quantitative risk-analysis technique captures the uncertainty of the input data, represented through the probability distributions made available by the software. To put the methodology into practice, the spreadsheets built following the approach proposed in Lencastre et al. (1994) were used. Preparing and analysing these estimates can support decisions on the viability of the work or works to be carried out, particularly regarding their economic aspects, allowing a well-founded decision analysis on whether to proceed with the investments.
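As a rough illustration of the Monte Carlo step described above, the sketch below draws each work group's unit cost from a triangular distribution and aggregates the draws over many trials. The group names, cost ranges, and choice of distribution are hypothetical placeholders, not the values or the @Risk model used in the study.

```python
import random

# Hypothetical work groups with (min, most likely, max) unit costs in EUR/m,
# loosely named after the cost groups listed in the abstract.
WORK_GROUPS = {
    "earthworks": (18.0, 25.0, 40.0),
    "piping":     (12.0, 16.0, 24.0),
    "fittings":   (3.0, 5.0, 9.0),
    "paving":     (10.0, 14.0, 22.0),
    "site_setup": (2.0, 3.0, 6.0),
}

def simulate_cost_per_metre(n_trials=10_000):
    """Sum one triangular draw per work group, for each trial."""
    return sorted(
        sum(random.triangular(lo, hi, mode) for lo, mode, hi in WORK_GROUPS.values())
        for _ in range(n_trials)
    )

costs = simulate_cost_per_metre()
p5, p50, p95 = (costs[int(len(costs) * q)] for q in (0.05, 0.50, 0.95))
print(f"cost per metre: P5={p5:.1f}  P50={p50:.1f}  P95={p95:.1f} EUR")
```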

Relevance:

10.00%

Publisher:

Abstract:

Work presented within the scope of the Master's in Informatics Engineering, as a partial requirement for obtaining the degree of Master in Informatics Engineering

Relevance:

10.00%

Publisher:

Abstract:

Proceedings of the 2nd International Conference on Computational Cybernetics, Vienna University of Technology, August 30 - September 1, 2004

Relevance:

10.00%

Publisher:

Abstract:

Final Master's Project submitted to obtain the degree of Master in Electronics and Telecommunications Engineering

Relevance:

10.00%

Publisher:

Abstract:

Internship report submitted to obtain the degree of Master in Engineering, in the specialization area of Building Construction

Relevance:

10.00%

Publisher:

Abstract:

“Many-core” systems based on a Network-on-Chip (NoC) architecture offer various opportunities in terms of performance and computing capabilities, but at the same time they pose many challenges for the deployment of real-time systems, which must fulfill specific timing requirements at runtime. It is therefore essential to identify, at design time, the parameters that have an impact on the execution time of the tasks deployed on these systems, and to derive upper bounds on the other key parameters. The focus of this work is to determine an upper bound on the traversal time of a packet when it is transmitted over the NoC infrastructure. Towards this aim, we first identify and explore some limitations in the existing recursive-calculus-based approaches to computing the Worst-Case Traversal Time (WCTT) of a packet. Then, we extend the existing model by integrating the characteristics of the tasks that generate the packets. For this extended model, we propose an algorithm called “Branch and Prune” (BP). Our proposed method provides tighter, yet still safe, estimates than the existing recursive-calculus-based approaches. Finally, we introduce a more general approach, namely “Branch, Prune and Collapse” (BPC), which offers a configurable parameter providing a flexible trade-off between computational complexity and the tightness of the computed estimate. The recursive-calculus methods and BP are special cases of BPC, obtained when the trade-off parameter is set to 1 or ∞, respectively. Through simulations, we analyze this trade-off, reason about the implications of certain choices, and also provide some case studies to observe the impact of task parameters on the WCTT estimates.
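To make the recursive-calculus idea concrete, here is a minimal sketch (not the paper's BP/BPC algorithms): a flow's traversal-time bound is its uncontended latency plus the recursively bounded traversal times of every flow it shares a link with, each contender being charged at most once per branch. Flow names, routes, and latencies are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Flow:
    name: str
    basic_latency: float    # traversal time with no contention
    links: list             # route, as a list of link ids

def wctt(flow, flows, charged=frozenset()):
    """Pessimistic WCTT bound: own latency plus the bound of every direct
    contender (a flow sharing a link) not yet charged on this branch."""
    bound = flow.basic_latency
    for other in flows:
        if other is flow or other.name in charged:
            continue
        if set(other.links) & set(flow.links):
            bound += wctt(other, flows, charged | {flow.name, other.name})
    return bound

f1 = Flow("f1", 10.0, ["L0", "L1"])
f2 = Flow("f2", 12.0, ["L1", "L2"])
f3 = Flow("f3", 8.0, ["L2", "L3"])
print(wctt(f1, [f1, f2, f3]))   # 30.0: f1 pays for f2, which pays for f3
```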

Relevance:

10.00%

Publisher:

Abstract:

The application of mathematical methods and computer algorithms in the analysis of economic and financial data series aims to give empirical descriptions of the hidden relations between many complex or unknown variables and systems. This strategy overcomes the requirement of building models based on a set of ‘fundamental laws’, which is the usual paradigm for studying phenomena in physics and engineering. In spite of this shortcut, financial series prove hard to tackle, involving complex memory effects and an apparently chaotic behaviour. Several measures for describing these objects have been adopted by market agents but, due to their simplicity, they are not capable of coping with the diversity and complexity embedded in the data. Therefore, it is important to propose new measures that, on the one hand, are readily interpretable by non-specialist practitioners and, on the other hand, are capable of capturing a significant part of the dynamical effects.

Relevance:

10.00%

Publisher:

Abstract:

The game of football demands new computational approaches to measure individual and collective performance. Understanding the phenomena involved in the game may foster the identification of strengths and weaknesses, not only of each player, but also of the whole team. The development of assertive quantitative methodologies constitutes a key element in sports training. In football, the predictability and stability inherent in the motion of a given player may be seen as one of the most important concepts to fully characterise the variability of the whole team. This paper characterises the predictability and stability levels of players during an official football match, adopting a Fractional Calculus (FC) approach to define a player’s trajectory. By applying FC, one can benefit from newly considered modeling perspectives, such as the fractional coefficient, to estimate a player’s predictability and stability. This paper also formulates the concept of attraction domain, related to the tactical region of each player, inspired by stability theory principles. To compare the variability inherent in each player’s process variables (e.g., distance covered) and to assess his predictability and stability, entropy measures are considered. Experimental results suggest that the most predictable player is the goalkeeper while, conversely, the most unpredictable players are the midfielders. We also conclude that, despite his predictability, the goalkeeper is the most unstable player, while lateral defenders are the most stable during the match.
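A standard discrete formulation behind FC trajectory analyses of this kind is the Grünwald-Letnikov (GL) fractional derivative, which weighs a player's entire sampled path; the sketch below implements it. The paper's exact trajectory model and entropy estimators are not reproduced, and the sample signal is synthetic.

```python
import math

def gl_weights(alpha, n):
    """GL binomial weights w_j = (-1)^j * C(alpha, j), via the standard
    recurrence w_0 = 1, w_j = w_{j-1} * (1 - (alpha + 1) / j)."""
    w = [1.0]
    for j in range(1, n):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / j))
    return w

def gl_derivative(x, alpha, h):
    """Discrete fractional derivative of order alpha of the samples x."""
    w = gl_weights(alpha, len(x))
    return [sum(w[j] * x[k - j] for j in range(k + 1)) / h ** alpha
            for k in range(len(x))]

# Synthetic 1-D "trajectory": the order alpha interpolates between the
# position itself (alpha = 0) and its ordinary velocity (alpha = 1).
path = [math.sin(0.05 * k) for k in range(200)]
for alpha in (0.25, 0.5, 0.75):
    print(alpha, round(gl_derivative(path, alpha, h=0.05)[-1], 4))
```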

Relevance:

10.00%

Publisher:

Abstract:

Fractional dynamics is a growing topic in theoretical and experimental scientific research. A classical problem is the initialization required by fractional operators. While the problem is clear from the mathematical point of view, it constitutes a challenge in applied sciences. This paper addresses the problem of initialization and its effect upon dynamical system simulation when adopting numerical approximations. The results are compatible with system dynamics and clarify the formulation of adequate values for the initial conditions in numerical simulations.
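The sketch below illustrates the initialization issue on a toy case, assuming a GL discretization: the fractional derivative of a signal that was constant long before the time origin should approach zero, yet truncating the pre-history (the usual zero initialization) yields a visibly different value. The signal, order, and step size are illustrative, not the paper's examples.

```python
def gl_weights(alpha, n):
    """GL binomial weights via the recurrence w_j = w_{j-1}*(1-(alpha+1)/j)."""
    w = [1.0]
    for j in range(1, n):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / j))
    return w

def gl_derivative_at_end(samples, alpha, h):
    """GL fractional derivative evaluated at the last sample only."""
    w = gl_weights(alpha, len(samples))
    n = len(samples)
    return sum(w[j] * samples[n - 1 - j] for j in range(n)) / h ** alpha

h, alpha, n = 0.01, 0.5, 500
x = [1.0] * n                    # signal observed for t >= 0
pre = [1.0] * n                  # its (usually discarded) history before t = 0

print(gl_derivative_at_end(x, alpha, h))        # zero pre-history assumed
print(gl_derivative_at_end(pre + x, alpha, h))  # pre-history included: closer to 0
```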

Relevance:

10.00%

Publisher:

Abstract:

Recently, simple limiting functions establishing upper and lower bounds on the Mittag-Leffler function were found. This paper builds on those expressions to design an efficient algorithm for the approximate calculation of expressions that occur frequently in fractional-order control systems. The numerical experiments demonstrate the superior efficiency of the proposed method.
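The paper's specific limiting functions are not reproduced here. As a hedged illustration of the same idea (cheap evaluation by switching between regimes), the sketch below combines the defining power series of E_alpha(-x) for small arguments with its leading asymptotic term, 1/(Gamma(1-alpha)*x), for large ones; the crossover point is a heuristic, not the paper's scheme.

```python
import math

def ml_neg(x, alpha, tol=1e-12):
    """Approximate the Mittag-Leffler function E_alpha(-x) for x >= 0,
    0 < alpha < 1."""
    if x == 0.0:
        return 1.0
    if x <= 1.0:
        # Defining series, with terms built via lgamma to avoid overflow.
        total = 1.0
        for k in range(1, 200):
            term = (-1.0) ** k * math.exp(k * math.log(x) - math.lgamma(alpha * k + 1.0))
            total += term
            if abs(term) < tol:
                break
        return total
    return 1.0 / (math.gamma(1.0 - alpha) * x)   # leading asymptotic term

for x in (0.1, 0.5, 10.0, 100.0):
    print(x, ml_neg(x, alpha=0.5))
```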

Relevance:

10.00%

Publisher:

Abstract:

This paper formulates a novel expression for entropy inspired by the properties of Fractional Calculus. The characteristics of the generalized fractional entropy are tested both on standard probability distributions and on real-world data series. The results reveal that tuning the fractional order allows a high sensitivity to the signal evolution, which is useful in describing the dynamics of complex systems. The concepts are also extended to relative distances and tested with several sets of data, confirming the usefulness of the generalization.
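The paper's fractional entropy expression is not reproduced here. As a stand-in that illustrates the same qualitative point (an order parameter tuning the measure's sensitivity, with Shannon entropy recovered at order 1), the sketch below uses the classical Tsallis q-entropy.

```python
import math

def tsallis_entropy(p, q):
    """Tsallis q-entropy S_q = (1 - sum p_i^q) / (q - 1); tends to the
    Shannon entropy -sum p_i ln p_i as q -> 1."""
    if abs(q - 1.0) < 1e-9:
        return -sum(pi * math.log(pi) for pi in p if pi > 0)
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)

# Sweeping the order shows the tunable sensitivity the abstract refers to.
p = [0.7, 0.2, 0.1]
for q in (0.5, 1.0, 1.5, 2.0):
    print(q, round(tsallis_entropy(p, q), 4))
```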

Relevance:

10.00%

Publisher:

Abstract:

This paper studies several topics related to the concept of “fractional” that are not directly related to Fractional Calculus, but that can help the reader pursue new research directions. We introduce the concepts of non-integer positional number systems, fractional sums, fractional powers of a square matrix, tolerant computing and FracSets, negative probabilities, fractional-delay discrete-time linear systems, and the fractional Fourier transform.
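As a small illustration of one of the listed topics, fractional powers of a square matrix, the sketch below computes A^alpha by eigendecomposition. This is one standard construction and may differ from the paper's treatment; the matrix is an arbitrary example.

```python
import numpy as np

def fractional_matrix_power(A, alpha):
    """A**alpha via eigendecomposition; assumes A is diagonalizable and
    uses the principal branch of the complex power of each eigenvalue."""
    vals, vecs = np.linalg.eig(A)
    return vecs @ np.diag(vals.astype(complex) ** alpha) @ np.linalg.inv(vecs)

A = np.array([[2.0, 1.0], [0.0, 3.0]])
half = fractional_matrix_power(A, 0.5).real   # eigenvalues here are real, positive
print(np.allclose(half @ half, A))            # the half power squares back to A
```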

Relevance:

10.00%

Publisher:

Abstract:

This paper explores the calculation of fractional integrals by means of the time-delay operator. The study starts by reviewing the memory properties of fractional operators and their relationship with time delay. Based on the time response of the Mittag-Leffler function, an approximation of fractional integrals consisting of time-delayed samples is proposed. The tuning of the approximation is optimized by means of a genetic algorithm. The results demonstrate the feasibility of the new perspective and the limits of its application.
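Below is a sketch of the abstract's central idea under simplifying assumptions: a fractional integral is approximated by a short weighted sum of time-delayed samples, with the delay spacing fixed and the weights fitted here by ordinary least squares (a plain stand-in for the genetic-algorithm tuning used in the paper). The reference values come from a full-memory Grünwald-Letnikov integral; the signal and settings are illustrative.

```python
import numpy as np

h, alpha, n = 0.01, 0.5, 2000
taps, lag = 8, 40                      # 8 delayed samples, spaced 40*h apart

t = np.arange(n) * h
x = np.sin(t)                          # training signal

# Reference: full-memory GL fractional integral of order alpha.
w = np.ones(n)
for j in range(1, n):
    w[j] = w[j - 1] * (1.0 - (1.0 - alpha) / j)
ref = h ** alpha * np.array([np.dot(w[: k + 1], x[k::-1]) for k in range(n)])

# Regressor matrix: column i holds the signal delayed by i*lag samples.
start = taps * lag
D = np.column_stack([x[start - i * lag : n - i * lag] for i in range(taps)])
a, *_ = np.linalg.lstsq(D, ref[start:], rcond=None)

print(float(np.max(np.abs(D @ a - ref[start:]))))   # worst-case fit error
```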

Relevance:

10.00%

Publisher:

Abstract:

This paper studies the dynamical properties of systems with backlash and impact phenomena. This type of non-linearity can be tackled from the perspective of fractional calculus theory. Fractional- and integer-order models are compared, and their influence upon the emerging dynamics is analysed. It is demonstrated that fractional models can memorize dynamical effects due to multiple micro-collisions.
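For reference, a minimal integer-order backlash element (pure play of width 2*gap between a driving and a driven part) is sketched below; the fractional-order variants and micro-collision memory effects studied in the paper are not reproduced, and the input signal is illustrative.

```python
import math

def simulate_backlash(drive, gap):
    """Driven position moves only while the play of width `gap` is closed."""
    y, out = 0.0, []
    for u in drive:
        if u - y > gap:          # drive pushes from above
            y = u - gap
        elif y - u > gap:        # drive pushes from below
            y = u + gap
        out.append(y)
    return out

drive = [math.sin(0.01 * k) for k in range(1000)]
driven = simulate_backlash(drive, gap=0.2)
print(max(abs(u - y) for u, y in zip(drive, driven)))   # bounded by the gap
```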

Relevance:

10.00%

Publisher:

Abstract:

Hyperspectral imaging can be used for object detection and for discriminating between different objects based on their spectral characteristics. One of the main problems of hyperspectral data analysis is the presence of mixed pixels, due to the low spatial resolution of such images: several spectrally pure signatures (endmembers) are combined into the same mixed pixel. Linear spectral unmixing follows an unsupervised approach which aims at inferring pure spectral signatures and their material fractions at each pixel of the scene. The huge data volumes acquired by such sensors put stringent requirements on processing and unmixing methods. This paper proposes an efficient GPU implementation, using CUDA, of an unsupervised linear unmixing method, SISAL (Simplex Identification via Split Augmented Lagrangian). The method finds the smallest simplex by solving a sequence of nonsmooth convex subproblems, using variable splitting to obtain a constrained formulation and then applying an augmented Lagrangian technique. The parallel implementation presented in this work exploits the GPU architecture at a low level, using shared memory and coalesced memory accesses. The results presented herein indicate that the GPU implementation can significantly accelerate the method's execution over big datasets while maintaining the method's accuracy.
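SISAL's variable-splitting and augmented-Lagrangian machinery (and its CUDA port) are not reproduced here. The numpy sketch below only illustrates the linear mixing model the method inverts: pixels as convex combinations of endmember signatures, with abundances recovered by projected gradient under the nonnegativity and sum-to-one constraints. All sizes and data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
bands, endmembers, pixels = 50, 3, 200

E = rng.random((bands, endmembers))                          # endmember signatures
A_true = rng.dirichlet(np.ones(endmembers), size=pixels).T   # true fractions
Y = E @ A_true + 0.001 * rng.standard_normal((bands, pixels))

def project_simplex(A):
    """Project each column of A onto the probability simplex (Duchi et al.)."""
    n = A.shape[0]
    out = np.empty_like(A)
    for j in range(A.shape[1]):
        u = np.sort(A[:, j])[::-1]
        css = np.cumsum(u) - 1.0
        rho = np.nonzero(u * np.arange(1, n + 1) > css)[0][-1]
        out[:, j] = np.maximum(A[:, j] - css[rho] / (rho + 1.0), 0.0)
    return out

A = np.full((endmembers, pixels), 1.0 / endmembers)
step = 1.0 / np.linalg.norm(E.T @ E, 2)        # 1/Lipschitz constant of the gradient
for _ in range(500):
    A = project_simplex(A - step * (E.T @ (E @ A - Y)))

print(float(np.max(np.abs(A - A_true))))       # max abundance error; should be small
```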