956 results for LINEAR FUNCTIONS
Abstract:
Constrained intervals, intervals as a mapping from [0, 1] to polynomials of degree one (linear functions) with non-negative slopes, together with arithmetic on constrained intervals, generate a space that turns out to be a cancellative abelian monoid, albeit with a richer set of properties than the usual (standard) space of interval arithmetic. This means that we have not only the classical embedding developed by H. Rådström and S. Markov and the extension of E. Kaucher, but also the properties inherited from these polynomials. We study the geometry of the embedding of intervals into a quasilinear space and some of the properties of the mapping of constrained intervals into a space of polynomials. It is assumed that the reader is familiar with the basic notions of interval arithmetic and interval analysis. © 2013 Springer-Verlag Berlin Heidelberg.
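For concreteness, a minimal sketch of the representation described above (standard constrained-interval notation, not quoted from the paper): an interval [a̲, ā] is identified with a degree-one polynomial on [0, 1] with non-negative slope, and arithmetic acts pointwise on these representatives,

X(t) = \underline{a} + t\,(\overline{a} - \underline{a}), \qquad t \in [0,1], \quad \overline{a} - \underline{a} \ge 0,
(X + Y)(t) = X(t) + Y(t), \qquad (X - Y)(t) = X(t) - Y(t),

so that X - X vanishes identically, which is precisely the cancellation behaviour that standard interval subtraction lacks.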
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Pós-graduação em Matemática Aplicada e Computacional - FCT
Abstract:
Ng and Kotz (1995) introduced a distribution that provides greater flexibility for modeling extremes. We define and study a new class of distributions, called the Kummer beta generalized family, that extends the normal, Weibull, gamma and Gumbel distributions, among several other well-known distributions. Some special models are discussed. The ordinary moments of any distribution in the new family can be expressed as linear functions of probability weighted moments of the baseline distribution. We examine the asymptotic distributions of the extreme values. We derive the density function of the order statistics, mean absolute deviations and entropies. We use maximum likelihood estimation to fit the distributions in the new class and illustrate its potential with an application to a real data set.
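A hedged sketch of the construction (the abstract does not state it, so the generator form below, built from the Kummer beta distribution of Ng and Kotz, is an assumption): a baseline cdf G(x) is composed with the Kummer beta density,

F(x) = K \int_{0}^{G(x)} t^{a-1} (1-t)^{b-1} e^{-c t}\, dt,

where K is the normalizing constant of the Kummer beta distribution; setting c = 0 reduces this to the familiar beta-generated (beta-G) family.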
Abstract:
This paper presents a technique for performing analog design synthesis at circuit level, providing feedback to the designer through the exploration of the Pareto frontier. A modified simulated annealing algorithm, able to perform crossover with past anchor points when a local minimum is found, is used as the optimization algorithm in the initial synthesis procedure. After all specifications are met, the algorithm searches for the extreme points of the Pareto frontier in order to obtain a non-exhaustive exploration of the Pareto front. Finally, multi-objective particle swarm optimization is used to spread the results and to find a more accurate frontier. Piecewise linear functions are used as single-objective cost functions to produce a smooth and even convergence of all measurements toward the desired specifications during the composition of the aggregate objective function. To verify the presented technique, two circuits were designed: a Miller amplifier with 96 dB voltage gain, 15.48 MHz unity-gain frequency and a slew rate of 19.2 V/µs with a current supply of 385.15 µA, and a complementary folded cascode with 104.25 dB voltage gain, 18.15 MHz unity-gain frequency and a slew rate of 13.370 MV/µs. These circuits were synthesized using a 0.35 µm technology. The results show that the method provides a fast approach to good solutions using the modified SA and, further, a good exploration of the Pareto front through its connection to the particle swarm optimization algorithm.
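As a rough illustration of the piecewise linear single-objective costs mentioned above (a hypothetical Python sketch; function and parameter names are not from the paper), each measurement contributes nothing once its specification is met and a normalized, linearly growing penalty otherwise:

def piecewise_linear_cost(measured: float, spec: float, maximize: bool = True,
                          weight: float = 1.0) -> float:
    """Return 0 when the spec is satisfied, otherwise a linear penalty."""
    if maximize:
        violation = max(0.0, spec - measured)   # e.g. voltage gain below target
    else:
        violation = max(0.0, measured - spec)   # e.g. current consumption above target
    return weight * violation / abs(spec)       # normalize so objectives converge evenly


def aggregate_objective(measurements: dict, specs: dict) -> float:
    """Sum the per-specification piecewise linear costs."""
    return sum(piecewise_linear_cost(measurements[k], s, maximize=m)
               for k, (s, m) in specs.items())


# Illustrative targets loosely inspired by the Miller amplifier figures above.
specs = {"gain_db": (96.0, True), "ugf_mhz": (15.0, True), "idd_ua": (400.0, False)}
meas = {"gain_db": 90.0, "ugf_mhz": 16.2, "idd_ua": 385.15}
print(aggregate_objective(meas, specs))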
Abstract:
In this paper we propose methods for smooth hazard estimation of a time-to-event variable that is interval censored. These methods allow one to model the transformed hazard in terms of either smooth (smoothing splines) or linear functions of time and other relevant time-varying predictor variables. We illustrate the use of this method on a dataset of hemophiliacs where the outcome, time to seroconversion for HIV, is interval censored and left-truncated.
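A minimal sketch of the interval-censoring mechanics (the log-linear hazard below is a stand-in for the smoothing-spline models of the paper, and all names and values are illustrative): the likelihood contribution of a subject known only to seroconvert between visit times L and R is S(L) - S(R).

import numpy as np
from scipy.integrate import quad

def hazard(t, beta0=-2.0, beta1=0.1):
    # Placeholder model: log h(t) linear in time.
    return np.exp(beta0 + beta1 * t)

def survival(t, **kw):
    # S(t) = exp(-integral_0^t h(u) du)
    cum, _ = quad(lambda u: hazard(u, **kw), 0.0, t)
    return np.exp(-cum)

def interval_censored_loglik(L, R, **kw):
    # Probability that the event occurred somewhere in (L, R]
    return np.log(survival(L, **kw) - survival(R, **kw))

print(interval_censored_loglik(2.0, 5.0))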
Abstract:
The sensitivity of the gas flow field to changes in different initial conditions has been studied for the case of a highly simplified cometary nucleus model. The nucleus model simulated a homogeneously outgassing sphere with a more active ring around an axis of symmetry. The varied initial conditions were the number density of the homogeneous region, the surface temperature, and the composition of the flow (varying amounts of H2O and CO2) from the active ring. The sensitivity analysis was performed using the Polynomial Chaos Expansion (PCE) method. Direct Simulation Monte Carlo (DSMC) was used to simulate the flow, thereby allowing strong deviations from local thermal equilibrium. The PCE approach can produce a sensitivity analysis with only four runs per modified input parameter and allows one to study and quantify non-linear responses of measurable parameters to linear changes in the input over a wide range. Hence the PCE yields a functional relationship between the flow field properties at every point in the inner coma and the input conditions. It is shown, for example, that the velocity and the temperature of the background gas are not simply linear functions of the initial number density at the source. As expected, the main influence on each resulting flow field parameter is the corresponding initial parameter (i.e. the initial number density determines the background number density, the surface temperature determines the flow field temperature, etc.). However, the velocity of the flow field is also influenced by the surface temperature, while the number density is not sensitive to the surface temperature at all in our model set-up. Another example is a change in the composition of the flow over the active area: such changes can be seen in the velocity but, again, not in the number density. Although this study uses only a simple test case, we suggest that the approach, when applied to a real case in 3D, should assist in identifying the sensitivity of gas parameters measured in situ by, for example, the Rosetta spacecraft to the surface boundary conditions, and vice versa.
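As a toy, non-intrusive illustration of the idea (not the authors' PCE implementation; all numbers are invented), one can fit a low-order orthogonal polynomial to a handful of simulator runs over one scaled input and read off a smooth, generally non-linear response:

import numpy as np

# Hypothetical DSMC outputs (e.g. gas velocity in m/s) at five values of one
# input parameter scaled to [-1, 1]; the paper needs only ~4 runs per input.
xi = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])                 # scaled surface temperature
velocity = np.array([310.0, 324.0, 335.0, 343.0, 348.0])   # illustrative values

# Legendre polynomials are the usual basis for uniformly distributed inputs.
coeffs = np.polynomial.legendre.legfit(xi, velocity, deg=3)

# The fitted expansion gives the output as a function of the input everywhere
# in the range, i.e. the functional relationship the PCE provides per point.
print(np.polynomial.legendre.legval(0.25, coeffs))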
Abstract:
Temperature-dependent population growth of the diamondback moth (DBM), Plutella xylostella (L.), a prolific insect pest of crucifer vegetables, was studied under six constant temperatures in the laboratory. The objective of the study was to predict the impacts of temperature changes on DBM populations at high-resolution scales along altitudinal gradients and under climate change scenarios. Non-linear functions were fitted to the data for modeling the development, mortality, longevity and oviposition of the pest. The best-fitted functions for each life stage were compiled for estimating the life table parameters of the species by stochastic simulations. To quantify the impacts on the pest, three indices (establishment, generation and activity) were computed using the estimates of the life table parameters together with temperature data obtained at the local scale (current scenario, 2013) and downscaled climate change data (future scenario, 2055) from the AFRICLIM database. To measure and represent the impacts of temperature change along altitude on the pest, the indices were mapped along the altitudinal gradients of Kilimanjaro and the Taita Hills, in Tanzania and Kenya, respectively. The potential impact of the changes between the 2013 and 2055 climate scenarios was assessed. The data files included in this database were used in the above analysis to develop temperature-dependent phenology models of Plutella xylostella and to assess its current and future distribution along the eastern African Afromontanes.
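As an illustrative sketch of the fitting step (the Briere-1 model and the data values below are assumptions; the study compiled several candidate functions per life stage), a non-linear development-rate curve can be fitted to rates observed at the six constant temperatures:

import numpy as np
from scipy.optimize import curve_fit

def briere1(T, a, T0, Tmax):
    # Briere-1 development-rate model, zero outside the thermal limits.
    r = a * T * (T - T0) * np.sqrt(np.clip(Tmax - T, 0.0, None))
    return np.where((T > T0) & (T < Tmax), r, 0.0)

# Illustrative development rates (1/days) of one life stage at six constant
# temperatures (degrees C); not the actual DBM measurements.
T = np.array([10.0, 15.0, 20.0, 25.0, 28.0, 32.0])
rate = np.array([0.02, 0.05, 0.09, 0.13, 0.15, 0.10])

params, _ = curve_fit(briere1, T, rate, p0=[1e-4, 8.0, 35.0], maxfev=10000)
print(dict(zip(["a", "T0", "Tmax"], params)))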
Abstract:
Dislocation mobility, the relation between applied stress and dislocation velocity, is an important property for modeling the mechanical behavior of structural materials. These mobilities reflect the interaction between the dislocation core and the host lattice and, thus, atomistic resolution is required to capture their details. Because the mobility function is multiparametric, its computation is often highly demanding in terms of computational requirements. Optimizing how tractions are applied can be greatly advantageous in accelerating convergence and reducing the overall computational cost of the simulations. In this paper we perform molecular dynamics simulations of ½⟨111⟩ screw dislocation motion in tungsten using step and linear time functions for applying the external stress. We find that linear functions over time scales of the order of 10–20 ps reduce fluctuations and speed up convergence to the steady-state velocity by up to a factor of two.
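A minimal sketch of the two loading protocols compared above (hypothetical names and numbers; the actual MD driver is not shown in the abstract): a step applies the full stress at once, while a linear ramp reaches it over 10–20 ps and then holds it.

def applied_stress(t_ps: float, sigma_target: float, t_ramp_ps: float = 0.0) -> float:
    """External stress at time t (ps); t_ramp_ps = 0 reproduces the step function."""
    if t_ramp_ps <= 0.0 or t_ps >= t_ramp_ps:
        return sigma_target
    return sigma_target * (t_ps / t_ramp_ps)

# Example: 1.0 GPa target, linear ramp over 15 ps versus a step.
print(applied_stress(5.0, 1.0, t_ramp_ps=15.0))  # ~0.33 GPa during the ramp
print(applied_stress(5.0, 1.0))                  # 1.0 GPa, step loading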
Abstract:
The evolution of smartphones, all equipped with digital cameras, is driving a growing demand for ever more complex applications that need to rely on real-time computer vision algorithms. However, video signals are only increasing in size, whereas the performance of single-core processors has stagnated in recent years. Consequently, new computer vision algorithms will need to be parallel, so that they can run on multiple processors, and computationally scalable. One of the most promising classes of processors nowadays is found in graphics processing units (GPUs): devices offering a high degree of parallelism, excellent numerical performance and increasing versatility, which makes them attractive for scientific computing. In this thesis, we explore two computer vision applications whose high computational complexity precludes them from running in real time on traditional uniprocessors. We show that, by parallelizing their subtasks and implementing them on a GPU, both applications attain their goal of running at interactive frame rates. In addition, we propose a technique for the fast evaluation of arbitrarily complex functions, specially designed for GPU implementation. First, we explore the application of depth-image-based rendering techniques to the unusual configuration of two convergent, wide-baseline cameras, in contrast to the narrow-baseline, parallel cameras usually employed in 3D TV. Using a backward mapping approach with a depth inpainting scheme based on median filters, we show that these techniques are adequate for free-viewpoint video applications. We also show that referring depth information to a global reference system is ill-advised and should be avoided. Then, we propose a background subtraction system based on kernel density estimation techniques. These techniques are well suited to modelling complex scenes with multimodal backgrounds, but have seen little use due to their large computational and memory cost. The proposed system, implemented in real time on a GPU, includes novel proposals for dynamic kernel bandwidth estimation for the background model, selective update of the background model, update of the positions of the reference samples of the foreground model using a multi-region particle filter, and automatic selection of regions of interest to reduce the computational cost. The results, evaluated on several databases and compared with other state-of-the-art algorithms, demonstrate the high quality and versatility of the proposal. Finally, we propose a general method for approximating arbitrarily complex functions with continuous piecewise linear functions, specially formulated for GPU implementation by leveraging the texture filtering units, which are normally unused for numerical computation. The proposal includes a rigorous mathematical analysis of the approximation error as a function of the number of samples, as well as a method for obtaining a near-optimal partition of the domain of the function that minimizes the approximation error.
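A CPU-side sketch of the final proposal's core idea, with np.interp standing in for the GPU's linear texture filtering (illustrative only): sample the target function at N points and evaluate the continuous piecewise linear interpolant; the maximum error shrinks as the sampling becomes denser.

import numpy as np

# Target function; arbitrary and chosen only for illustration.
f = lambda x: np.exp(-x) * np.sin(4.0 * x)

def build_lut(f, a, b, n):
    """Uniformly sample f on [a, b]; the GPU version stores these in a texture."""
    xs = np.linspace(a, b, n)
    return xs, f(xs)

def eval_piecewise_linear(x, xs, ys):
    """Continuous piecewise linear approximation; np.interp mimics the
    hardware linear filtering performed by the texture units."""
    return np.interp(x, xs, ys)

x = np.linspace(0.0, 2.0, 10_001)
for n in (16, 64, 256):
    xs, ys = build_lut(f, 0.0, 2.0, n)
    err = np.max(np.abs(f(x) - eval_piecewise_linear(x, xs, ys)))
    print(n, err)   # maximum error decreases roughly as O(1/n^2) for smooth f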
Abstract:
In Information Filtering (IF), a user may be interested in several topics in parallel. However, IF systems have been built on representational models derived from Information Retrieval and Text Categorization, which assume independence between terms. The linearity of these models results in user profiles that can only represent one topic of interest. We present a methodology that takes term dependencies into account to construct a single profile representation for multiple topics, in the form of a hierarchical term network. We also introduce a series of non-linear functions for evaluating documents against the profile. Initial experiments produced positive results.
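Since the abstract does not specify the non-linear evaluation functions, the following is only a hypothetical Python sketch of the flavour of such a function: dependency links in the term network add a bonus when linked terms co-occur, and a sigmoid makes the score non-linear in the term weights.

import math

# Illustrative (not the paper's) profile: term weights plus dependency links.
profile_weights = {"python": 0.9, "gpu": 0.7, "kernel": 0.5}
dependencies = [("gpu", "kernel", 0.6)]   # (parent, child, link strength)

def score(document_terms: set) -> float:
    linear = sum(w for t, w in profile_weights.items() if t in document_terms)
    bonus = sum(s for p, c, s in dependencies
                if p in document_terms and c in document_terms)
    return 1.0 / (1.0 + math.exp(-(linear + bonus)))  # sigmoid: non-linear in the weights

print(score({"python", "gpu"}))            # no dependency bonus
print(score({"python", "gpu", "kernel"}))  # co-occurrence triggers the bonus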
Abstract:
* This work has been supported by the Office of Naval Research Contract Nr. N0014-91-J1343, the Army Research Office Contract Nr. DAAD 19-02-1-0028, the National Science Foundation grants DMS-0221642 and DMS-0200665, the Deutsche Forschungsgemeinschaft grant SFB 401, the IHP Network “Breaking Complexity” funded by the European Commission and the Alexander von Humboldt Foundation.
Abstract:
Numerical optimization is a technique in which a computer is used to explore design parameter combinations to find extremes in performance factors. In multi-objective optimization, several performance factors can be optimized simultaneously. The solution to a multi-objective optimization problem is not a single design, but a family of optimized designs referred to as the Pareto frontier. The Pareto frontier is a trade-off curve in the objective function space composed of solutions where performance in one objective function is traded for performance in others. A Multi-Objective Hybridized Optimizer (MOHO) was created for the purpose of solving multi-objective optimization problems by utilizing a set of constituent optimization algorithms. MOHO tracks the progress of the developing Pareto frontier approximation and automatically switches among its constituent evolutionary optimization algorithms to speed the formation of an accurate Pareto frontier approximation. Aerodynamic shape optimization is one of the oldest applications of numerical optimization. MOHO was used to perform shape optimization on a 0.5-inch ballistic penetrator traveling at Mach 2.5. Two objectives were optimized simultaneously: minimize aerodynamic drag and maximize penetrator volume. This problem was solved twice. The first time, Modified Newton Impact Theory (MNIT) was used to determine the pressure drag on the penetrator. In the second solution, a Parabolized Navier-Stokes (PNS) solver that includes viscosity was used to evaluate the drag on the penetrator. The studies show the difference in the optimized penetrator shapes when viscosity is absent and present in the optimization. In modern optimization problems, objective function evaluations of this kind may require many hours on a computer cluster. One solution is to create a response surface that models the behavior of the objective function. Once enough data about the behavior of the objective function has been collected, a response surface can be used to represent the actual objective function in the optimization process. The Hybrid Self-Organizing Response Surface Method (HYBSORSM) algorithm was developed and used to build response surfaces of objective functions. HYBSORSM was evaluated using a suite of 295 non-linear test functions involving from 2 to 100 variables, demonstrating its robustness and accuracy.
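A small sketch of the non-dominated filtering that underlies any Pareto frontier approximation (illustrative Python; the design values are invented and MOHO's constituent algorithms are not reproduced), with drag minimized and volume maximization handled by negation:

def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# (drag, -volume) pairs for a handful of hypothetical candidate designs.
designs = [(1.20, -0.80), (1.05, -0.60), (1.40, -0.95), (1.05, -0.75), (0.90, -0.40)]
print(pareto_front(designs))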
Abstract:
The late Neogene was a time of cryosphere development in the northern hemisphere. The present study was carried out to estimate the sea surface temperature (SST) change during this period based on quantitative planktonic foraminiferal data from 8 DSDP sites in the western Pacific. Target factor analysis has been applied to the conventional transfer function approach to overcome the no-analog conditions caused by evolutionary faunal changes. By applying this technique through a combination of time-slice and time-series studies, the SST history of the last 5.3 Ma has been reconstructed for the low-latitude western Pacific. Although the present data set is close to the statistical limits of factor analysis, the clear presence of sensible variations in the individual SST time series suggests the feasibility and reliability of this method in paleoceanographic studies. The estimated SST curves display the general trend of the temperature fluctuations and reveal three major cool periods in the late Neogene, i.e. the early Pliocene (4.7-3.5 Ma), the late Pliocene (3.1-2.7 Ma), and the latest Pliocene to early Pleistocene (2.2-1.0 Ma). Cool events are reflected in an increase of seasonality and of the meridional SST gradient in the subtropical area. The latest Pliocene to early Pleistocene cooling is the most important in the late Neogene climatic evolution. It differs from the previous cool events in its irreversible, step-like change in SST, which established the glacial climate characteristic of the late Pleistocene. The winter and summer SSTs decreased by 3.3-5.4°C and 1.0-2.1°C in the subtropics, by 0.9°C and 0.6°C in the equatorial region, and showed little or no cooling in the tropics. Moreover, this cooling event occurred as a gradual SST decrease during 2.2-1.0 Ma at the warmer subtropical sites, while at the cooler subtropical site it was an abrupt SST drop at 2.2 Ma. In contrast, the equatorial and tropical western Pacific experienced only minor SST change over the entire late Neogene. In general, the subtropics were much more sensitive to climatic forcing than the tropics, and the cooling events were most extensive in the cooler subtropics. The early Pliocene cool periods can be correlated with Antarctic ice volume fluctuations, and the latest Pliocene-early Pleistocene cooling reflects the climatic evolution during the cryosphere development of the northern hemisphere.
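A highly simplified, synthetic-data sketch of the conventional transfer function approach that the paper builds on (the target factor analysis refinement is not reproduced; all values below are invented): factor the modern faunal matrix, regress observed SST on the factor loadings, and apply the same equation down-core.

import numpy as np

rng = np.random.default_rng(0)
core_top_fauna = rng.random((30, 12))          # 30 modern samples x 12 species (%)
core_top_sst = 18.0 + 10.0 * rng.random(30)    # observed modern SSTs (degrees C)
downcore_fauna = rng.random((50, 12))          # 50 fossil samples to reconstruct

# Factor step: truncated SVD of the modern faunal matrix (k retained assemblages).
U, s, Vt = np.linalg.svd(core_top_fauna, full_matrices=False)
k = 4
loadings = core_top_fauna @ Vt[:k].T           # sample loadings on the factors

# Regress modern SST on the loadings (plus an intercept), then apply the same
# equation to the down-core loadings to estimate past SST.
A = np.column_stack([loadings, np.ones(len(loadings))])
coef, *_ = np.linalg.lstsq(A, core_top_sst, rcond=None)
down_loadings = downcore_fauna @ Vt[:k].T
sst_estimates = np.column_stack([down_loadings, np.ones(len(down_loadings))]) @ coef
print(sst_estimates[:5])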
Abstract:
This work describes a proposal for an educational activity aimed at Mathematics teachers, involving problem situations in the teaching of financial mathematics to be applied with high school (Ensino Médio) students. The activities aim to provide a real context in which the student is embedded. The work is divided into four parts: the introduction of a problem situation involving simple interest, the mathematical background, the resolution of the problem situation, and the proposed educational activity. Differing from what is usually found in textbooks, the proposal presented here studies the mathematical content in an articulated way, linking the concept of percentage to linear functions, and simple interest to affine functions and arithmetic progressions. A sequence of lessons involving problem situations through activities suitable for the students is then presented.
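As a one-line worked example of the link made in the proposal (standard simple-interest formulas, not quoted from the text): with principal C, simple interest rate i per period and time t,

J(t) = C\,i\,t \quad \text{(the interest is a linear function of } t\text{)}, \qquad
M(t) = C + C\,i\,t = C(1 + i\,t) \quad \text{(the accumulated amount is an affine function of } t\text{)},

and the successive amounts M(1), M(2), M(3), \dots form an arithmetic progression with common difference C\,i.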