868 results for Física-Modelos matemáticos
Abstract:
Mathematics and society, a decisive relationship in the choice of research topics: the case of Information Science. Universidad Nacional de La Plata, Departamento de Bibliotecología. This paper presents a topic of striking currency in a changing context such as the present one since, although more than three decades have passed since the first works on the subject by L. Santaló, S. Papert, and others, everything seems to remain as it was then. A society that opened its doors to information and communication technologies in a very short time remains impervious to the efforts of mathematics teachers. The paper discusses experiences in applying mathematics in humanities faculties of Spanish and Argentine universities, within the framework of Information Metrics Studies (EMI, Estudios Métricos de la Información), as well as experiences in incorporating the EMI into Library and Information Science curricula. It also addresses the use of mathematical models in information retrieval, both in database environments and on the Internet, and in evaluating the effectiveness of information retrieval systems.
Abstract:
The principal effluent of the oil industry is produced water, which is commonly associated with the produced oil. It is generated in large volumes and can harm the environment and society if discharged inappropriately, so careful management is indispensable. The traditional treatment of produced water usually combines two techniques, flocculation and flotation. In flocculation, the traditional flocculant agents are poorly specified in technical information tables and remain expensive. Flotation, in turn, is the step in which the particles suspended in the effluent are separated. Dissolved air flotation (DAF) is a technique that has been consolidating itself economically and environmentally, showing great reliability compared with other processes, and is widely used in various fields of water and wastewater treatment around the globe. In this regard, this study aimed to evaluate the potential of an alternative natural flocculant agent based on Moringa oleifera to reduce the total oil and grease (TOG) content of produced water from the oil industry by the flocculation/DAF method. The natural flocculant agent was evaluated for its efficacy, as well as its efficiency compared with two commercial flocculant agents normally used by the petroleum industry. The experiments followed an experimental design, and the overall efficiencies of all flocculants were treated statistically using the STATISTICA software, version 10.0. Contour surfaces were obtained from the experimental design and interpreted in terms of the response variable, TOG removal efficiency. The design also yielded mathematical models for calculating the response variable under the studied conditions. The commercial flocculants showed similar behavior, with an average overall efficiency of 90% for oil removal; the economic analysis is therefore the decisive factor in choosing between them. The natural alternative flocculant agent based on Moringa oleifera showed lower separation efficiency than the commercial ones (70% on average); on the other hand, it causes less environmental impact and is less expensive.
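As an illustration of the kind of model such an experimental design yields, the sketch below fits a quadratic response-surface model for TOG removal efficiency by least squares. The factor names (flocculant dose, saturation pressure) and all data values are hypothetical; the thesis itself used the STATISTICA 10.0 package.

```python
import numpy as np

# Hypothetical coded factor levels (-1, 0, +1) and TOG removal efficiencies (%)
dose     = np.array([-1, -1,  1,  1, -1,  1,  0, 0, 0])
pressure = np.array([-1,  1, -1,  1,  0,  0, -1, 1, 0])
removal  = np.array([62, 71, 68, 88, 64, 78, 66, 80, 75])

# Design matrix for the quadratic model
# y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
X = np.column_stack([np.ones_like(dose), dose, pressure,
                     dose * pressure, dose**2, pressure**2])
coeffs, *_ = np.linalg.lstsq(X, removal, rcond=None)
print(coeffs)  # the contour surfaces are level sets of this fitted polynomial
```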
Resumo:
Actually, Brazil is one of the larger fruit producer worldwide, with most of its production being consumed in nature way or either as juice or pulp. It is important to highlig ht in the fruit productive chain there are a lot lose due mainly to climate reasons, as well as storage, transportation, season, market, etc. It is known that in the pulp and fruit processing industy a yield of 50% (in mass) is usually obtained, with the other part discarded as waste. However, since most this waste has a high nutrient content it can be used to generate added - value products. In this case, drying plays an important role as an alternative process in order to improve these wastes generated by the fruit industry. However, despite the advantage of using this technique in order to improve such wastes, issues as a higher power demand as well as the thermal efficiency limitation should be addressed. Therefore, the control of the main variables in t his drying process is quite important in order to obtain operational conditions to produce a final product with the target specification as well as with a lower power cost. M athematical models can be applied to this process as a tool in order to optimize t he best conditions. The main aim of this work was to evaluate the drying behaviour of a guava industrial pulp waste using a batch system with a convective - tray dryer both experimentally and using mathematical modeling. In the experimental study , the dryin g carried out using a group of trays as well as the power consume were assayed as response to the effects of operational conditions (temperature, drying air flow rate and solid mass). Obtained results allowed observing the most significant variables in the process. On the other hand, the phenomenological mathematical model was validated and allowed to follow the moisture profile as well as the temperature in the solid and gas phases in every tray. Simulation results showed the most favorable procedure to o btain the minimum processing time as well as the lower power demand.
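The thesis's full phenomenological model tracks moisture and temperature in both the solid and gas phases of every tray; as a minimal illustration of the underlying drying kinetics only, a thin-layer (Lewis-type) model can be written as:

```latex
\frac{dX}{dt} = -k\,(X - X_e), \qquad X(t) = X_e + (X_0 - X_e)\, e^{-kt}
```

where X is the moisture content (dry basis), X_0 the initial and X_e the equilibrium moisture content, and k a drying constant that depends on the air temperature and flow rate.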
Abstract:
Water injection into oil reservoirs is a technique widely used for oil recovery. However, the injected water contains suspended particles that can be trapped, causing formation damage and injectivity decline. In such cases, it is necessary to stimulate the damaged formation in order to restore the injectivity of the injection wells. Injectivity decline has a major negative impact on the economics of oil production, which is why it is important to foresee the injectivity behavior in a good waterflooding management project. Mathematical models for injectivity losses allow studying the effect of the injected water quality as well as of the well and formation characteristics. Therefore, a mathematical model of injectivity losses for perforated injection wells was developed. The scientific novelty of this work lies in modeling and predicting injectivity decline in perforated injection wells, considering deep bed filtration and the formation of an external cake in spheroidal perforations. The classic deep bed filtration model was rewritten in spheroidal coordinates. The solution for the concentration of suspended particles was obtained analytically, and the concentration of retained particles, which cause formation damage, was computed numerically. The solution for the impedance, defined as the inverse of the injectivity index normalized by its initial value, was obtained assuming a constant injection rate and a modified Darcy's law. Finally, classic linear-flow injectivity tests were performed on Berea sandstone samples and on perforated samples. The model parameters, the filtration and formation damage coefficients, obtained from these data were used to verify the proposed model. The simulations showed a good fit to the experimental data, and it was observed that the ratio between particle size and pore size has a large influence on the behavior of injectivity decline.
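For reference, the classic deep bed filtration model mentioned above, shown here in linear coordinates rather than the spheroidal coordinates adopted in the thesis, reads:

```latex
\phi \frac{\partial c}{\partial t} + \frac{\partial \sigma}{\partial t} + U \frac{\partial c}{\partial x} = 0,
\qquad
\frac{\partial \sigma}{\partial t} = \lambda\, U\, c
```

where c and σ are the suspended and retained particle concentrations, φ the porosity, U the flow velocity, and λ the filtration coefficient. At a constant injection rate the impedance reduces to J(t) = ΔP(t)/ΔP(0), the normalized pressure drop.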
Abstract:
Skeletal muscle consists of muscle fiber types that have different physiological and biochemical characteristics. Basically, muscle fibers can be classified into type I and type II, which differ, among other features, in contraction speed and sensitivity to fatigue. These fibers coexist in the skeletal muscles, and their relative proportions are modulated according to the muscle's function and the stimuli to which it is submitted. To identify the proportions of the fiber types composing a muscle, many studies use biopsy as the standard procedure. Since surface electromyography (EMG) allows information to be extracted about the recruitment of different motor units, this study is based on the assumption that it is possible to use the EMG signal to identify different proportions of fiber types in a muscle. The goal of this study was to identify the characteristics of EMG signals that can distinguish, most precisely, different proportions of fiber types. The combination of characteristics through appropriate mathematical models was also investigated. To achieve this objective, signals were simulated with different proportions of recruited motor units and different signal-to-noise ratios. Thirteen time- and frequency-domain characteristics were extracted from the emulated signals. The results for each extracted feature were submitted to the k-means clustering algorithm to separate the different proportions of motor units recruited in the emulated signals. Mathematical techniques (confusion matrix and capability analysis) were implemented to select the characteristics able to identify different proportions of muscle fiber types. As a result, the mean frequency and the median frequency were selected as able to distinguish, with greater precision, the proportions of the different muscle fiber types. Subsequently, the characteristics considered most suitable were analyzed jointly through principal component analysis. Two principal components were found for the signals emulated without noise (CP1 and CP2) and two for the noisy signals (CP1' and CP2'). The first principal components (CP1 and CP1') were identified as able to distinguish different proportions of muscle fiber types. The selected characteristics (median frequency, mean frequency, CP1, and CP1') were then used to analyze real EMG signals, comparing sedentary people with physically active people who practice strength training (weight training). The results obtained with the different groups of volunteers show that the physically active people had higher values of mean frequency, median frequency, and principal components than the sedentary people. Moreover, these values decreased with increasing force level for both groups; however, the decline was more pronounced for the physically active group. Based on these results, it is assumed that the volunteers in the physically active group have higher proportions of type II fibers than the sedentary people. Finally, we can conclude that the selected characteristics were able to distinguish different proportions of muscle fiber types, both in the emulated and in the real signals. These characteristics can be used in several contexts, for example, to evaluate the progress of people with myopathies and neuromyopathies undergoing physiotherapy, or to analyze the development of athletes seeking to improve their muscle capacity according to their sport. In both cases, the extraction of these characteristics from surface EMG signals provides feedback to the physiotherapist or the physical trainer, who can monitor the increase in the proportion of a given fiber type, as desired in each case.
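A minimal sketch, not the thesis's exact pipeline, of the two features selected above: mean frequency (MNF) and median frequency (MDF) of an EMG signal, computed from its power spectrum. The synthetic test signal is purely illustrative.

```python
import numpy as np

def mean_median_frequency(signal, fs):
    """Return (MNF, MDF) in Hz for a 1-D EMG signal sampled at fs Hz."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2        # power spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mnf = np.sum(freqs * spectrum) / np.sum(spectrum)  # spectral centroid
    cumulative = np.cumsum(spectrum)                   # MDF splits power in half
    mdf = freqs[np.searchsorted(cumulative, cumulative[-1] / 2.0)]
    return mnf, mdf

# Example with a synthetic signal: 1 s of noise sampled at 2 kHz
rng = np.random.default_rng(0)
mnf, mdf = mean_median_frequency(rng.standard_normal(2000), fs=2000)
print(f"MNF = {mnf:.1f} Hz, MDF = {mdf:.1f} Hz")
```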
Abstract:
This study aims to evaluate the uncertainty associated with measurements made by an aneroid sphygmomanometer, a neonatal electronic balance, and an electrocautery. To this end, repeatability tests were performed on all devices, followed by normality tests using the Shapiro-Wilk method; identification of the influencing factors that affect the result of each measurement; proposition of mathematical models to calculate the measurement uncertainty associated with the quantities evaluated for all equipment, and the calibration uncertainty for the neonatal electronic balance; evaluation of the measurement uncertainty; and development of a computer program in the Java language to systematize the estimation of calibration and measurement uncertainties. A 2³ factorial design was proposed and carried out for the aneroid sphygmomanometer in order to investigate the effects of the temperature, patient, and operator factors, and a 3² design for the electrocautery, in which the effects of the temperature and output electrical power factors were investigated. The expanded uncertainty associated with the blood pressure measurement significantly reduced the span of the patient classification ranges. In turn, the expanded uncertainty associated with the mass measurement on the neonatal balance indicated a variation of about 1% in the dosage of medication for neonates. Analysis of variance (ANOVA) and the Tukey test indicated significant, inversely proportional effects of the temperature factor on the cutting and coagulation power values delivered by the electrocautery, and no significant effect of the factors investigated for the aneroid sphygmomanometer.
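A minimal GUM-style sketch (an assumed illustration, not the thesis's Java program) of how an expanded uncertainty is combined from independent input components; the component values for the neonatal balance are hypothetical.

```python
import math

def expanded_uncertainty(components, k=2.0):
    """Combine independent standard uncertainties in quadrature and
    expand with coverage factor k (k = 2 ~ 95 % for a normal distribution)."""
    u_c = math.sqrt(sum(u ** 2 for u in components))
    return k * u_c

# Hypothetical components for a neonatal balance reading (grams):
# repeatability, calibration certificate, resolution (rectangular -> /sqrt(3))
components = [0.12, 0.08, 0.1 / math.sqrt(3)]
print(f"U = {expanded_uncertainty(components):.2f} g (k = 2)")
```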
Abstract:
The food industry is required to deliver products that are safe, nutritious, palatable, and quick and convenient to use. Combining all of these qualities in a single food is an arduous task. Two examples suffice. An intense preservation treatment, with good sanitary prospects, usually entails a loss of nutritional value and unattractive sensory characteristics. Handling foods to transform them into ready-to-eat products implies assuming certain microbiological risks, greater than those assumed for unhandled products. How should we respond to the increase in risks and hazards hanging over these "new foods"? One alternative that has gained adherents is predictive microbiology. It is a useful tool, available to any entity interested in foods, that predicts microbial behavior under given conditions by means of mathematical models. Most of the available models predict point values (each value of the independent variable corresponds to a single value of the dependent one); they have proven their effectiveness for decades on the basis of oversized treatments to safeguard the microbiological quality of foods, and they predict a mean, without considering variability. Consider a decimal reduction value, D, of 1 minute. If the product contains 10³ cfu/g, a 1 kg package that has undergone a 6D treatment will contain 1 viable cell. So far, the prediction of a classic model. Now think of an industrial production run of thousands of 1 kg packages per hour. Who can believe that every one of them will contain exactly 1 surviving microorganism? Is it not more plausible that some will contain no viable cells, many will contain 1, others 2 or 3, and perhaps a few 5 or 6? Models that do not consider microbial variability predict the growth rate precisely, but they have failed in predicting the lag phase...
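The package-to-package variability described above is precisely what a Poisson model captures. With 10³ cfu/g in a 1 kg package, the initial load is N₀ = 10⁶ cfu; a 6D treatment leaves a mean of λ = 10⁶ × 10⁻⁶ = 1 survivor per package, and the number of survivors is distributed as:

```latex
P(k) = \frac{\lambda^{k} e^{-\lambda}}{k!}
\quad \Rightarrow \quad
P(0) \approx 0.37,\; P(1) \approx 0.37,\; P(2) \approx 0.18,\; P(k \ge 3) \approx 0.08
```

so roughly a third of the packages contain no survivors at all, a third contain exactly one, and the rest contain two or more, exactly the spread the classic point-value model cannot express.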
Abstract:
Forecasting is the basis for making strategic, tactical, and operational business decisions. In financial economics, several techniques have been used to predict the behavior of assets over the past decades. Many methods exist to assist in the task of time series forecasting; however, conventional modeling techniques, such as statistical models and those based on theoretical mathematical models, have produced unsatisfactory predictions, increasing the number of studies on more advanced prediction methods. Among these, Artificial Neural Networks (ANNs) are a relatively new and promising method for business forecasting that has attracted much interest in the financial community and has been used successfully in a wide variety of financial modeling applications, in many cases proving superior to statistical ARIMA-GARCH models. In this context, this study aimed to examine whether ANNs are a more appropriate method for predicting the behavior of capital market indices than traditional methods of time series analysis. For this purpose, a quantitative study based on financial and economic indices was carried out, and two feedforward ANN models with supervised learning were developed, whose structures consisted of 20 inputs, 90 neurons in a single hidden layer, and one output (the Ibovespa). These models used backpropagation, a hyperbolic tangent sigmoid activation function in the hidden layer, and a linear output function. Since the aim was to analyze the adequacy of the Artificial Neural Network method for forecasting the Ibovespa, this analysis was performed by comparing its results against a GARCH(1,1) time series model developed for the same data. Once both methods (ANN and GARCH) were applied, the results were analyzed by comparing the forecasts with the historical data and by studying the forecast errors through the MSE, RMSE, MAE, standard deviation, Theil's U statistic, and forecast encompassing tests. It was found that the models developed by means of ANNs had lower MSE, RMSE, and MAE than the GARCH(1,1) model, and Theil's U test indicated that all three models have smaller errors than a naïve forecast. Although the return-based ANN had lower precision-indicator values than the price-based ANN, the forecast encompassing test rejected the hypothesis that one model is better than the other, indicating that the ANN models have a similar level of accuracy. It was concluded that, for the data series studied, the ANN models provide a more appropriate Ibovespa forecast than the traditional time series models, represented by the GARCH model.
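A minimal sketch of the architecture described above (20 inputs, one hidden layer of 90 tanh-sigmoid neurons, one linear output, trained by backpropagation), here using scikit-learn rather than whatever tool the thesis used; the sliding-window setup over synthetic returns is an assumption for illustration only.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
returns = rng.standard_normal(500)  # stand-in for an Ibovespa return series

# Sliding windows: 20 past observations predict the next one
X = np.array([returns[i:i + 20] for i in range(len(returns) - 20)])
y = returns[20:]

# tanh hidden layer, identity (linear) output, gradient-descent backprop
model = MLPRegressor(hidden_layer_sizes=(90,), activation="tanh",
                     solver="sgd", max_iter=2000)
model.fit(X[:-50], y[:-50])                 # hold out the last 50 windows
mse = np.mean((model.predict(X[-50:]) - y[-50:]) ** 2)
print(f"Held-out MSE: {mse:.4f}")
```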
Abstract:
Given the discrepancy over the optimum levels of employment for Colombia, this research estimates both the national and the urban Non-Accelerating Inflation Rate of Unemployment (NAIRU) for the Colombian labor market. In doing so, particular care is taken to estimate the constant NAIRU from raw and minimally altered data and to provide the reader with a complete summary of the theory on which the model is founded. The introduction of supply shocks is considered in order to attain improved estimations and a more reliable assessment of the NAIRU than those previously attempted. The backbone of the analysis is the relationship established by the Phillips curve from 2001 to 2015.
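A common way to operationalize this relationship, not necessarily the exact specification used in the paper, is the expectations-augmented Phillips curve with supply shocks, from which a constant NAIRU follows as a ratio of regression coefficients:

```latex
\Delta \pi_t = \alpha - \beta\, u_t + \gamma' z_t + \varepsilon_t,
\qquad
\hat{u}^{*} = \frac{\hat{\alpha}}{\hat{\beta}}
```

where π_t is inflation, u_t the unemployment rate, and z_t the vector of supply shocks; at u_t = u* inflation neither accelerates nor decelerates.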
Abstract:
Although numerous studies identify their main characteristics and mode of operation, the study of virtual organizations suffers from a lack of mathematical models that reflect their behavior quantitatively. Accordingly, this paper seeks to highlight the similarities between the functioning of virtual organizations and that of SOM neural networks (Self-Organizing Maps). The objective is to lay the groundwork for proposing this type of statistical technique as a tool for formulating models of virtual organizations. A series of plausibility arguments is put forward, leaving the rigorous verification of this proposal to future research.
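For reference, the SOM technique the authors propose is defined by Kohonen's two-step rule: each input x(t) is assigned to its best-matching unit c, and that unit and its neighbors are pulled toward the input:

```latex
c = \arg\min_i \lVert x(t) - w_i(t) \rVert,
\qquad
w_i(t+1) = w_i(t) + \alpha(t)\, h_{c,i}(t)\, \bigl[ x(t) - w_i(t) \bigr]
```

where α(t) is a decreasing learning rate and h_{c,i}(t) a neighborhood function centered on the winning unit c.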
Abstract:
Population dynamics has been modeled with differential equations since Malthus began his studies more than two hundred years ago. Conventional models always treat relationships between species as static, denoting only their dependence during a fixed period of time, even though it is known that relationships between species can change over time. Here we propose a model for population dynamics that incorporates the evolution over time of the interactions between species. This model includes a wide range of interactions, from predator-prey to mutualistic relationships, whether obligate or facultative. The mechanism we describe allows the transition from one class of interspecies relationship to another, according to external parameters set by the context. These transitions could prevent the extinction of one of the species if it ends up depending too much on the environment or on its relationships with the other species.
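One minimal way to formalize this idea, consistent with the abstract though not necessarily the authors' exact equations, is a generalized Lotka-Volterra system with time-varying interaction coefficients:

```latex
\frac{dN_i}{dt} = N_i \Bigl( r_i + \sum_{j \neq i} a_{ij}(t)\, N_j \Bigr), \quad i = 1, \dots, n
```

where the sign pattern of the pair (a_{ij}, a_{ji}) determines the relationship: opposite signs give predator-prey, both negative competition, both positive mutualism. Letting a_{ij}(t) depend on external parameters allows the system to move between these regimes over time.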
Abstract:
The importance of forecasting in decision making has been widely recognized, and evidence has been found that one of the most effective methods is the adjustment of forecasts obtained from mathematical models using informed judgment. Nevertheless, a wide range of factors can affect the quality and credibility of forecasts; this paper examines those factors related to organizational politics and proposes several strategies for their mitigation.
Abstract:
Creep in reinforced concrete is a phenomenon of great importance. Despite being identified as the main cause of several pathologies, its effects are still accounted for in a simplified way by structural designers. In addition to studying the phenomenon in reinforced concrete structures and how it is currently handled in structural analysis, this paper compares creep strains in simply supported reinforced concrete beams, obtained analytically and experimentally, with finite element method (FEM) simulation results. The strains and deflections obtained analytically were calculated following the recommendations of the Brazilian code NBR 6118 (2014) and the simplified method of CEB-FIP 90, and the experimental results were taken from tests available in the literature. The finite element simulations are performed with the ANSYS Workbench software, using its 3D SOLID186 elements and the symmetry of the structure. Convergence analyses using 2D PLANE183 elements are carried out as well. It is concluded that the FEM analyses are quantitatively and qualitatively efficient for estimating this nonlinearity and that the method used to obtain the creep coefficient values is sufficiently accurate.
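For context, the CEB-FIP 90 creep formulation referred to above expresses the creep strain of concrete under a constant stress σ_c applied at age t₀ through a creep coefficient φ(t, t₀):

```latex
\varepsilon_{cc}(t, t_0) = \varphi(t, t_0)\, \frac{\sigma_c(t_0)}{E_{ci}}
```

where E_ci is the tangent modulus of elasticity at 28 days; the coefficient itself depends on member geometry, relative humidity, and concrete strength.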