1000 results for Economics - Mathematical models
Abstract:
This project falls within the area of formal methods for computing; the goal of formal methods is to ensure, through logical and mathematical tools, that computational systems satisfy certain properties. The field of programming language semantics is concerned precisely with building mathematical models that account for the different features of each language (mutable state, parameter-passing mechanisms, evaluation orders, etc.), making it possible to reason abstractly instead of dealing with the peculiarities of implementations or the vagueness of informal descriptions. Since formal correctness proofs are highly intricate, it is very convenient to carry out these theoretical developments with the help of proof assistants. This process of formalizing and verifying semantic aspects through an assistant is called semantics mechanization. This project, organized along three lines (semantics of type theory, implementation of a dependently typed language, and semantics of higher-order imperative languages), aims to advance the semantic study of programming languages, to mechanize those results, and to implement a dependently typed language intended to become, in the medium term, a proof assistant. In the type-theory semantics line the objectives are: (a) to extend the normalization-by-evaluation method to constructions not yet covered in the literature, (b) to prove the adequacy of the Haskell implementation of that normalization method, and (c) to build new categorical models of type theory. The goal of the second line is the design and implementation of a dependently typed language intended to become a proof assistant.
A novelty of this implementation is that the type-checking algorithm is sound and complete with respect to the formal system, thanks to results already obtained; moreover, the Haskell implementation of the normalization algorithm (essential for type checking) will also have its correctness proof. The third line focuses on the study of programming languages that combine imperative aspects (mutable state) with features of functional languages (procedures and functions). On the one hand, we will advance the mechanization of compiler-correctness proofs for Algol-like languages. The second aspect of this line is the definition of operational and denotational semantics for the Lua programming language and its subsequent characterization in terms of them. To achieve these objectives we have divided the work into activities with incremental goals, each of which is in itself a contribution to the state of the art of its line. The academic importance of this project lies in the theoretical advances proposed in the type-theory semantics line, in the contribution to mechanized compiler-correctness proofs, in the definition of a formal semantics for the Lua language, and in the development of a dependently typed language whose key algorithms are backed by correctness proofs. In addition, at the local level, this project will bring four new members into the "Semántica de la programación" group.
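The normalization-by-evaluation method of objective (a) evaluates terms into a semantic domain and then reads normal forms back from it. A minimal sketch for the untyped λ-calculus can illustrate the evaluate-then-quote structure (the project itself targets dependent type theory in Haskell; the Python encoding and constructor names below are illustrative assumptions):

```python
# Terms use de Bruijn indices: ('var', i), ('lam', body), ('app', f, a).
# Semantic values are closures ('clo', body, env) or neutral terms
# ('nvar', level) / ('napp', neutral, value).

def eval_(term, env):
    tag = term[0]
    if tag == 'var':
        return env[term[1]]
    if tag == 'lam':
        return ('clo', term[1], env)          # suspend the body with its env
    if tag == 'app':
        return apply_(eval_(term[1], env), eval_(term[2], env))

def apply_(f, a):
    if f[0] == 'clo':
        _, body, env = f
        return eval_(body, [a] + env)         # beta-reduce under the hood
    return ('napp', f, a)                     # stuck: build a neutral term

def quote(v, depth):
    # Read a semantic value back into a normal-form term.
    if v[0] == 'clo':
        fresh = ('nvar', depth)               # fresh variable at this level
        return ('lam', quote(apply_(v, fresh), depth + 1))
    if v[0] == 'nvar':
        return ('var', depth - 1 - v[1])      # level -> de Bruijn index
    if v[0] == 'napp':
        return ('app', quote(v[1], depth), quote(v[2], depth))

def normalize(term):
    return quote(eval_(term, []), 0)
```

For instance, normalizing (λx.x)(λy.y) yields λy.y, and λx.((λy.y) x) normalizes to λx.x.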
Abstract:
With this work I aim to understand the role of stories featuring mathematics in motivating students during the solution of mathematical tasks, starting from the following research questions: (a) What relationship do students establish with mathematical tasks built from mathematical models present in stories? (b) What kinds of mathematical knowledge emerge when students solve tasks built from a mathematical model present in a story? (c) How does students' mathematical communication develop when tasks built from models found in stories are used? The work was carried out with a class of 2nd and 3rd grade students who, over time, had developed excellent work in mathematics with their cooperating teacher. For this study I chose a qualitative research methodology, acting as a participant researcher. Data collection took place during the internship of the course unit Prática de Ensino Supervisionado. Over a total of eleven lessons, the data analysis was based on video recordings, students' individual productions, whole-class productions, and the observation of the interactions generated during the analysis and discussion of the strategies the students presented while carrying out the tasks. The mathematical tasks presented to the students arose from mathematical models present in the tale "Ainda não estão contentes?", included in the book Conto Contigo by António Torrado. While carrying out the tasks it was possible to develop concepts belonging to the mathematical topics of Numbers and Operations and Measurement.
The results of this work may be developed further in the future, building on other studies and using different mathematical models, other stories, and other educational contexts, so as to show that mathematics need not be seen as a difficult and distressing subject for students, because there is always the possibility, through stories, of engaging students in their learning, motivating them to carry out mathematical tasks and to develop mathematical knowledge.
Abstract:
The synthetic control (SC) method has recently been proposed as an alternative for estimating treatment effects in comparative case studies. The SC relies on the assumption that there is a weighted average of the control units that reconstructs the potential outcome of the treated unit in the absence of treatment. If these weights were known, then one could estimate the counterfactual for the treated unit using this weighted average. With these weights, the SC would provide an unbiased estimator of the treatment effect even if selection into treatment is correlated with the unobserved heterogeneity. In this paper, we revisit the SC method in a linear factor model where the SC weights are considered nuisance parameters that are estimated to construct the SC estimator. We show that, when the number of control units is fixed, the estimated SC weights will generally not converge to the weights that reconstruct the factor loadings of the treated unit, even when the number of pre-intervention periods goes to infinity. As a consequence, the SC estimator will be asymptotically biased if treatment assignment is correlated with the unobserved heterogeneity. The asymptotic bias only vanishes when the variance of the idiosyncratic error goes to zero. We suggest a slight modification of the SC method that guarantees that the SC estimator is asymptotically unbiased and has a lower asymptotic variance than the difference-in-differences (DID) estimator when the DID identification assumption is satisfied. If the DID assumption is not satisfied, then both estimators would be asymptotically biased, and it would not be possible to rank them in terms of their asymptotic bias.
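The SC weights described above are the solution of a least-squares problem over the simplex: find nonnegative weights summing to one such that the weighted average of the control units' pre-intervention outcomes matches the treated unit's. A minimal sketch via projected gradient descent (the solver choice and data below are illustrative, not the paper's estimation procedure):

```python
import numpy as np

def project_simplex(v):
    # Euclidean projection onto {w : w >= 0, sum(w) = 1}.
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u * idx > css - 1.0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def sc_weights(Y0_pre, y1_pre, iters=5000):
    # Minimize ||y1_pre - Y0_pre @ w||^2 over the simplex.
    # Y0_pre: (T0, J) pre-intervention outcomes of the J control units;
    # y1_pre: (T0,) pre-intervention outcomes of the treated unit.
    _, J = Y0_pre.shape
    w = np.full(J, 1.0 / J)
    step = 1.0 / (np.linalg.norm(Y0_pre, 2) ** 2)   # 1 / Lipschitz constant
    for _ in range(iters):
        grad = Y0_pre.T @ (Y0_pre @ w - y1_pre)
        w = project_simplex(w - step * grad)
    return w
```

Given post-intervention data, the estimated effect at time t is then y1_post[t] minus Y0_post[t] @ w.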
Abstract:
Regular vine copulas are multivariate dependence models constructed from pair-copulas (bivariate copulas). In this paper, we allow the dependence parameters of the pair-copulas in a D-vine decomposition to be potentially time-varying, following a nonlinear restricted ARMA(1,m) process, in order to obtain a very flexible dependence model for applications to multivariate financial return data. We investigate the dependence among the broad stock market indexes from Germany (DAX), France (CAC 40), Britain (FTSE 100), the United States (S&P 500) and Brazil (IBOVESPA) both in a crisis and in a non-crisis period. We find evidence of stronger dependence among the indexes in bear markets. Surprisingly, though, the dynamic D-vine copula indicates the occurrence of a sharp decrease in dependence between the indexes FTSE and CAC in the beginning of 2011, and also between CAC and DAX during mid-2011 and in the beginning of 2008, suggesting the absence of contagion in these cases. We also evaluate the dynamic D-vine copula with respect to Value-at-Risk (VaR) forecasting accuracy in crisis periods. The dynamic D-vine outperforms the static D-vine in terms of predictive accuracy for our real data sets.
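A restricted ARMA(1,m)-type evolution for a pair-copula dependence parameter can be sketched as follows. This is a Patton-style specification written for illustration only: the forcing term |u - v|, the logistic link keeping the parameter in (-1, 1), and all function names are assumptions, not the authors' exact equations:

```python
import numpy as np

def mod_logistic(x, lo=-0.99, hi=0.99):
    # Modified logistic link: keeps the dependence parameter inside (lo, hi).
    return lo + (hi - lo) / (1.0 + np.exp(-x))

def dependence_path(u, v, omega, beta, alpha, m=10):
    # u, v: probability-integral-transformed margins in (0, 1).
    # ARMA(1, m)-type recursion: autoregression on a latent state plus a
    # moving average over the last m values of the forcing term |u - v|.
    T = len(u)
    theta = np.empty(T)
    x = omega
    for t in range(T):
        window = slice(max(0, t - m + 1), t + 1)
        forcing = np.mean(np.abs(u[window] - v[window]))
        x = omega + beta * x + alpha * forcing
        theta[t] = mod_logistic(x)
    return theta
```

In a full D-vine fit, one such path would be estimated by maximum likelihood for each pair-copula in the decomposition.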
Abstract:
Sentiment analysis is a tool with great potential that can be applied in many contexts. This dissertation aims to assess the feasibility of applying the technique to a corpus captured from Brazil's most popular complaints website; by applying natural language processing and machine learning techniques, it is possible to identify patterns in consumer satisfaction or dissatisfaction.
Abstract:
Mathematics and society, a decisive relationship in the choice of research topics: the case of Information Science. Universidad Nacional de La Plata, Departamento de Bibliotecología. This paper presents a topic of striking currency in a changing context such as today's since, even though more than three decades have passed since the first works on the subject by L. Santaló, S. Papert and others, everything seems to remain as it was then. A society that opened its doors to information and communication technologies in a very short time remains immune to the efforts of mathematics teachers. We discuss experiences of applying mathematics in humanities faculties of Spanish and Argentine universities, within the framework of Information Metric Studies (EMI), and experiences of incorporating EMI into Library and Information Science curricula. We also present the problem of using mathematical models in information retrieval, both in database environments and on the Internet, as well as in evaluating the effectiveness of information-retrieval systems.
Abstract:
The principal effluent in the oil industry is produced water, which is commonly associated with the produced oil. Its production volume is substantial, and inappropriate discharge can affect the environment and society; careful management is therefore indispensable. The traditional treatment of produced water usually includes two techniques, flocculation and flotation. In flocculation processes, the traditional flocculant agents are poorly specified in technical information tables and still expensive. The flotation process, in turn, is the step in which the particles suspended in the effluent are separated. Dissolved air flotation (DAF) is a technique that has been consolidating itself economically and environmentally, showing great reliability compared with other processes, and it is widely used in many fields of water and wastewater treatment around the globe. In this regard, this study aimed to evaluate the potential of an alternative natural flocculant agent based on Moringa oleifera to reduce the total oil and grease (TOG) content in produced water from the oil industry by the flocculation/DAF method. The natural flocculant agent was evaluated for its efficacy, as well as for its efficiency compared with two commercial flocculant agents normally used by the petroleum industry. The experiments followed an experimental design, and the overall efficiencies of all flocculants were treated statistically using the STATISTICA software, version 10.0. Contour surfaces were obtained from the experimental design and interpreted in terms of the response variable, TOG (total oil and grease) removal efficiency. The design also yielded mathematical models for calculating the response variable under the studied conditions.
The commercial flocculants showed similar behavior, with an average overall oil-removal efficiency of 90%; the economic analysis is therefore the decisive factor in choosing between them. The natural alternative flocculant agent based on Moringa oleifera showed lower separation efficiency than the commercial ones (70% on average); on the other hand, it causes less environmental impact and is less expensive.
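The response-surface models produced by such experimental designs are typically full quadratics in the coded factors, fitted by ordinary least squares. A generic two-factor sketch (the actual factors, levels, and coefficients of the study are not reproduced here):

```python
import numpy as np

def fit_quadratic_rsm(X, y):
    # X: (n, 2) coded factor levels; y: (n,) responses
    # (e.g., TOG removal efficiency, %).
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef  # [b0, b1, b2, b12, b11, b22]

def predict_rsm(coef, x1, x2):
    # Evaluate the fitted quadratic response surface.
    b0, b1, b2, b12, b11, b22 = coef
    return b0 + b1 * x1 + b2 * x2 + b12 * x1 * x2 + b11 * x1 ** 2 + b22 * x2 ** 2
```

Contour plots of predict_rsm over the coded factor space give the contour surfaces mentioned above.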
Abstract:
Currently, Brazil is one of the largest fruit producers worldwide, with most of its production consumed fresh or as juice or pulp. It is important to highlight that the fruit production chain suffers heavy losses, due mainly to climate, as well as to storage, transportation, seasonality, market conditions, etc. It is known that in the fruit pulp and processing industry a yield of about 50% (by mass) is usually obtained, with the remainder discarded as waste. However, since most of this waste has a high nutrient content, it can be used to generate value-added products. In this case, drying plays an important role as an alternative process for upgrading these wastes generated by the fruit industry. Despite the advantages of this technique, issues such as its high power demand and its limited thermal efficiency must be addressed. Controlling the main variables of the drying process is therefore quite important in order to find operating conditions that produce a final product within the target specification at a lower power cost. Mathematical models can be applied to this process as a tool to optimize these conditions. The main aim of this work was to evaluate the drying behaviour of an industrial guava pulp waste in a batch convective tray dryer, both experimentally and through mathematical modeling. In the experimental study, the drying in a group of trays and the power consumption were measured as responses to the operating conditions (temperature, drying air flow rate and solid mass), allowing the most significant variables in the process to be identified. In addition, a phenomenological mathematical model was validated and used to follow the moisture profile as well as the temperature of the solid and gas phases in every tray.
Simulation results showed the most favorable procedure to obtain the minimum processing time as well as the lowest power demand.
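The simplest lumped model used in drying studies of this kind is the exponential (Lewis) thin-layer equation MR(t) = exp(-k t), whose rate constant can be estimated from moisture data by a log-linear least-squares fit. The phenomenological multi-tray model of the study is far richer; this sketch, with illustrative numbers, only shows the basic moisture-ratio fit:

```python
import numpy as np

def moisture_ratio(M, M_eq, M_0):
    # Dimensionless moisture ratio from moisture contents (dry basis):
    # current M, equilibrium M_eq, initial M_0.
    return (M - M_eq) / (M_0 - M_eq)

def fit_lewis_k(t, mr):
    # MR = exp(-k t)  =>  ln(MR) = -k t; least-squares slope through origin.
    ln_mr = np.log(mr)
    return -np.sum(t * ln_mr) / np.sum(t * t)
```

Comparing fitted k values across temperatures and air flow rates is one simple way to rank the significance of the operating variables.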
Abstract:
Skeletal muscle consists of muscle fiber types with different physiological and biochemical characteristics. Basically, muscle fibers can be classified into type I and type II, which differ, among other features, in contraction speed and sensitivity to fatigue. These fibers coexist in skeletal muscles, and their relative proportions are modulated according to muscle function and the stimuli to which the muscle is submitted. To identify the proportions of fiber types in muscle composition, many studies use biopsy as the standard procedure. Since surface electromyography (sEMG) makes it possible to extract information about the recruitment of different motor units, this study is based on the assumption that the EMG can be used to identify different proportions of fiber types in a muscle. The goal of this study was to identify the characteristics of EMG signals that can distinguish, most precisely, different proportions of fiber types; the combination of characteristics through appropriate mathematical models was also investigated. To achieve this objective, signals were simulated with different proportions of recruited motor units and with different signal-to-noise ratios. Thirteen time- and frequency-domain characteristics were extracted from the emulated signals. The results for each extracted feature were submitted to the k-means clustering algorithm to separate the different proportions of motor units recruited in the emulated signals. Mathematical techniques (confusion matrix and capability analysis) were used to select the characteristics able to identify different proportions of muscle fiber types. As a result, the mean frequency and the median frequency were selected as the features that distinguish, with the most precision, different proportions of muscle fiber types.
Subsequently, the most capable features were analyzed jointly through principal component analysis. Two principal components were found for the signals emulated without noise and two for the noisy signals, and the first principal component of each set was identified as able to distinguish different proportions of muscle fiber types. The selected characteristics (median frequency, mean frequency, and the first principal components) were used to analyze real EMG signals, comparing sedentary people with physically active people who practice strength training (weight training). The results obtained with the different groups of volunteers show that the physically active people obtained higher values of mean frequency, median frequency and principal components than the sedentary people. Moreover, these values decreased with increasing power level in both groups, although the decline was more accentuated in the physically active group. Based on these results, it is assumed that the volunteers in the physically active group have higher proportions of type II fibers than the sedentary people. Finally, we can conclude that the selected characteristics were able to distinguish different proportions of muscle fiber types, both in the emulated and in the real signals. These characteristics can be used in several kinds of studies, for example to evaluate the progress of people with myopathies and neuromyopathies under physiotherapy, or to monitor the development of athletes seeking to improve their muscle capacity for their sport. In both cases, extracting these characteristics from surface electromyography signals provides feedback to the physiotherapist or the physical coach, who can track the increase in the proportion of a given fiber type, as desired in each case.
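The two selected features, mean frequency and median frequency, are computed from the one-sided power spectrum of an EMG epoch. A minimal sketch (the windowing, epoching, and normalization choices of the original work are not specified here):

```python
import numpy as np

def spectral_features(signal, fs):
    # One-sided power spectrum via the real FFT.
    spec = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    p = spec / spec.sum()                       # normalize to a distribution
    mean_freq = np.sum(freqs * p)               # spectral centroid
    cum = np.cumsum(p)
    median_freq = freqs[np.searchsorted(cum, 0.5)]  # frequency splitting power in half
    return mean_freq, median_freq
```

A downward shift of both features across epochs is the classic spectral signature of fatigue; here they serve instead as inputs to the clustering step described above.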
Abstract:
This study aims to evaluate the uncertainty associated with measurements made by an aneroid sphygmomanometer, a neonatal electronic balance and an electrocautery unit. To this end, repeatability tests were performed on all devices, followed by normality tests using the Shapiro-Wilk test; identification of the factors that influence each measurement result; proposition of mathematical models to calculate the measurement uncertainty for all the equipment and the calibration uncertainty for the neonatal electronic balance; evaluation of the measurement uncertainty; and development of a computer program in the Java language to systematize the estimates of calibration and measurement uncertainty. A 2^3 factorial design was proposed and carried out for the aneroid sphygmomanometer, to investigate the effects of the temperature, patient and operator factors, and a 3^2 design for the electrocautery, investigating the effects of the temperature and electrical output power factors. The expanded uncertainty associated with blood pressure measurement significantly reduced the extent of the patient classification ranges. In turn, the expanded uncertainty associated with mass measurement with the neonatal balance indicated a variation of about 1% in the dosage of medication for neonates. Analysis of variance (ANOVA) and the Tukey test indicated significant, inversely proportional effects of the temperature factor on the cutting and coagulation power values delivered by the electrocautery, and no significant effect of the factors investigated for the aneroid sphygmomanometer.
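The uncertainty evaluation described above follows the GUM approach: the standard uncertainties of the influence quantities are combined in quadrature, weighted by their sensitivity coefficients, and the result is multiplied by a coverage factor to obtain the expanded uncertainty. A minimal sketch with illustrative numbers:

```python
import math

def combined_uncertainty(components):
    # components: list of (standard_uncertainty, sensitivity_coefficient)
    # pairs for the uncorrelated influence quantities.
    return math.sqrt(sum((c * u) ** 2 for u, c in components))

def expanded_uncertainty(u_c, k=2.0):
    # Coverage factor k = 2 gives roughly 95% coverage under normality.
    return k * u_c
```

For example, two unit-sensitivity components of 0.3 and 0.4 combine to u_c = 0.5, giving an expanded uncertainty U = 1.0 at k = 2.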
Abstract:
Play as a mode of learning is inherent not only to human beings but, in general, to the animal kingdom. For any mammal, play is the fundamental form of learning: through play one learns to fight, to defend oneself, and the basic rules of coexistence in the pack. In human beings, however, play and learning have become progressively disconnected, except in the early stages of growth, in which children still learn the most basic behaviors through games. As we advance through school, play is gradually abandoned, with leisure activities set against those strictly related to work and to more effortful learning. Thus, by the time we reach university, play has been completely abandoned as a form of learning. It is not easy to define what play is or what its characteristics are. It has a strong cultural component: activities that some cultures consider eminently playful will not be so in other cultural contexts. Nevertheless, once the importance of play in the development of personality is admitted, we can establish some of the basic functions that play performs in human beings with regard to the refinement and acquisition of cognitive, social and behavioral skills. Play facilitates the integration of experiences into behavior; it helps inhibit socially unacceptable behaviors and reinforce those with greater acceptance within the cultural frame of reference. It considerably improves social interaction and the acquisition of the basic skills needed for that interaction to take place satisfactorily. In the case of competitive games, it teaches how to handle unfavorable situations and how to endure and overcome frustration.
Traditionally, games have been used at the initial levels of education, yet they are also a powerful tool at the university level, especially for promoting active learning and the acquisition of a variety of professional competences. This project proposes the development of a tool for creating board-game simulators for educational purposes.
Abstract:
We investigate by means of Monte Carlo simulation and finite-size scaling analysis the critical properties of the three-dimensional O(5) nonlinear σ model and of the antiferromagnetic RP^(2) model, both of them regularized on a lattice. High-accuracy estimates are obtained for the critical exponents, universal dimensionless quantities and critical couplings. It is concluded that both models belong to the same universality class, provided that rather non-standard identifications are made for the momentum-space propagator of the RP^(2) model. We have also investigated the phase diagram of the RP^(2) model extended by a second-neighbor interaction. A rich phase diagram is found, where most of the phase transitions are of first order.
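One of the universal dimensionless quantities used in such finite-size scaling analyses is the Binder cumulant U4 = 1 - ⟨m⁴⟩/(3⟨m²⟩²), whose curves for different lattice sizes cross near the critical coupling. A minimal sketch of its estimator (the data below are illustrative, not the paper's):

```python
import numpy as np

def binder_cumulant(m):
    # m: Monte Carlo time series of the order parameter.
    m = np.asarray(m, dtype=float)
    m2 = np.mean(m ** 2)
    m4 = np.mean(m ** 4)
    return 1.0 - m4 / (3.0 * m2 ** 2)
```

In the disordered phase m is approximately Gaussian, so U4 tends to 0; in a perfectly ordered phase ⟨m⁴⟩ = ⟨m²⟩² and U4 tends to 2/3.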
Abstract:
The phase diagram of the simplest approximation to double-exchange systems, the bosonic double-exchange model with antiferromagnetic (AFM) superexchange coupling, is fully worked out by means of Monte Carlo simulations, large-N expansions, and variational mean-field calculations. We find a rich phase diagram, with no first-order phase transitions. The most surprising finding is the existence of a segmentlike ordered phase at low temperature for intermediate AFM coupling which cannot be detected in neutron-scattering experiments. This is signaled by a maximum (a cusp) in the specific heat. Below the phase transition, only short-range ordering would be found in neutron scattering. Researchers looking for a quantum critical point in manganites should be wary of this possibility. Finite-size scaling estimates of critical exponents are presented, although large scaling corrections are present in the reachable lattice sizes.
Abstract:
The phase diagram of the double perovskites of the type Sr_(2-x)La_(x)FeMoO_(6) is analyzed, with and without disorder due to antisites. In addition to a homogeneous half-metallic ferrimagnetic phase in the absence of doping and disorder, we find antiferromagnetic phases at large dopings, and other ferrimagnetic phases with lower saturation magnetization in the presence of disorder.
Abstract:
We study the fluctuation-dissipation relations for a three dimensional Ising spin glass in a magnetic field both in the high temperature phase as well as in the low temperature one. In the region of times simulated we have found that our results support a picture of the low temperature phase with broken replica symmetry, but a droplet behavior cannot be completely excluded.