863 results for Biology - Mathematical models
Abstract:
Brazil is currently one of the largest fruit producers worldwide, with most of its production consumed fresh or as juice and pulp. It is important to highlight that the fruit production chain suffers considerable losses, due mainly to climatic factors as well as to storage, transportation, seasonality, and market conditions. In the fruit and pulp processing industry, a yield of about 50% (by mass) is usually obtained, with the remainder discarded as waste. However, since most of this waste has a high nutrient content, it can be used to generate value-added products. In this context, drying plays an important role as an alternative process for upgrading the wastes generated by the fruit industry. Despite the advantages of this technique, issues such as higher power demand and limited thermal efficiency must be addressed. Therefore, controlling the main variables of the drying process is essential to reach operational conditions that yield a final product within the target specification at a lower energy cost. Mathematical models can be applied to this process as a tool to identify the best operating conditions. The main aim of this work was to evaluate the drying behaviour of an industrial guava pulp waste in a batch convective tray dryer, both experimentally and through mathematical modeling. In the experimental study, the drying carried out on a set of trays and the power consumption were assayed as responses to the operational conditions (temperature, drying air flow rate, and solid mass). The results allowed the most significant variables in the process to be identified. In addition, a phenomenological mathematical model was validated and made it possible to follow the moisture and temperature profiles in the solid and gas phases in every tray. Simulation results indicated the most favorable procedure to obtain the minimum processing time and the lowest power demand.
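As an illustration only, and not the specific multi-tray phenomenological model developed in this work, the sketch below integrates a generic thin-layer drying kinetics equation, dX/dt = -k (X - Xe); the rate constant k, the equilibrium moisture Xe, and the initial moisture X0 are hypothetical values chosen purely for demonstration.

```python
# Minimal sketch of thin-layer drying kinetics (illustrative only).
# dX/dt = -k * (X - Xe): X is moisture content (dry basis), Xe its
# equilibrium value, k a temperature-dependent rate constant.
import numpy as np
from scipy.integrate import solve_ivp

k = 0.015   # 1/min, hypothetical rate constant for a given air temperature
Xe = 0.08   # kg water / kg dry solid, hypothetical equilibrium moisture
X0 = 3.5    # kg water / kg dry solid, hypothetical initial moisture

def drying_rate(t, X):
    return -k * (X - Xe)

sol = solve_ivp(drying_rate, (0.0, 480.0), [X0], dense_output=True)
times = np.linspace(0.0, 480.0, 9)
for ti, Xi in zip(times, sol.sol(times)[0]):
    print(f"t = {ti:5.0f} min  X = {Xi:5.3f} kg/kg")
```

In a multi-tray model like the one described in the abstract, one such moisture balance would be coupled, tray by tray, with energy balances for the solid and the drying air.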
Abstract:
Water injection in oil reservoirs is a technique widely used for oil recovery. However, the injected water contains suspended particles that can be trapped, causing formation damage and injectivity decline. In such cases, it is necessary to stimulate the damaged formation in order to restore the injectivity of the injection wells. Injectivity decline has a major negative impact on the economics of oil production, which is why it is important to foresee the injectivity behavior for good waterflooding management. Mathematical models of injectivity loss allow the effects of injected water quality, as well as of well and formation characteristics, to be studied. Therefore, a mathematical model of injectivity loss for perforated injection wells was developed. The scientific novelty of this work lies in the modeling and prediction of injectivity decline in perforated injection wells, considering deep bed filtration and the formation of an external cake in spheroidal perforations. The classic deep bed filtration model was rewritten in spheroidal coordinates. The solution for the concentration of suspended particles was obtained analytically, while the concentration of retained particles, which cause formation damage, was solved numerically. The solution for the impedance, defined as the reciprocal of the injectivity normalized by its initial value, was obtained assuming a constant injection rate and the modified Darcy law. Finally, classic linear-flow injectivity tests were performed on Berea sandstone samples and on perforated samples. The model parameters, the filtration and formation damage coefficients, obtained from these data were used to verify the proposed model. The simulations showed a good fit to the experimental data, and it was observed that the ratio between particle and pore size has a large influence on the behavior of injectivity decline.
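For background only, the classic linear-coordinate deep-bed filtration system that the thesis rewrites in spheroidal coordinates is commonly stated as follows; this is a generic sketch of the standard formulation, not the specific equations of the work.

```latex
% Classic 1-D deep-bed filtration system and impedance definition
% (linear coordinates; background for the spheroidal reformulation).
\begin{align}
  \phi \frac{\partial c}{\partial t} + \frac{\partial \sigma}{\partial t}
      + U \frac{\partial c}{\partial x} &= 0, \\
  \frac{\partial \sigma}{\partial t} &= \lambda \, U \, c, \\
  J(t) &= \frac{\Delta p(t)/q(t)}{\Delta p(0)/q(0)} .
\end{align}
```

Here c and σ are the suspended and retained particle concentrations, φ the porosity, U the Darcy velocity, λ the filtration coefficient, and J the impedance (the reciprocal of the normalized injectivity) under constant-rate injection.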
Abstract:
Skeletal muscle consists of muscle fiber types with different physiological and biochemical characteristics. Basically, muscle fibers can be classified into type I and type II, which differ, among other features, in contraction speed and sensitivity to fatigue. These fibers coexist in the skeletal muscles, and their relative proportions are modulated according to the muscle's function and the stimuli to which it is subjected. To identify the proportions of the different fiber types in a muscle, many studies use biopsy as the standard procedure. As surface electromyography (sEMG) allows information about the recruitment of different motor units to be extracted, this study is based on the assumption that sEMG can be used to identify different proportions of fiber types in a muscle. The goal of this study was to identify the characteristics of the sEMG signal that can distinguish, most precisely, different proportions of fiber types. The combination of characteristics through appropriate mathematical models was also investigated. To achieve this objective, signals were simulated with different proportions of recruited motor units and with different signal-to-noise ratios. Thirteen time- and frequency-domain characteristics were extracted from the emulated signals. The results for each extracted feature were submitted to the k-means clustering algorithm to separate the different proportions of motor units recruited in the emulated signals. Mathematical techniques (confusion matrix and capability analysis) were applied to select the characteristics able to identify different proportions of muscle fiber types. As a result, the mean frequency and the median frequency were selected as able to distinguish, with greater precision, the proportions of the different muscle fiber types. Subsequently, the most capable features were analyzed jointly through principal component analysis. Two principal components were found for the signals emulated without noise (CP1 and CP2) and two for the noisy signals (CP1' and CP2'). The first principal components (CP1 and CP1') were identified as able to distinguish different proportions of muscle fiber types. The selected characteristics (median frequency, mean frequency, CP1 and CP1') were then used to analyze real sEMG signals, comparing sedentary people with physically active people who practice strength (weight) training. The results obtained with the different groups of volunteers show that the physically active people presented higher values of mean frequency, median frequency, and principal components than the sedentary people. Moreover, these values decreased with increasing power level for both groups, although the decline was more pronounced for the physically active group. Based on these results, it is assumed that the volunteers in the physically active group have higher proportions of type II fibers than the sedentary people. Finally, we can conclude that the selected characteristics were able to distinguish different proportions of muscle fiber types, both for the emulated and the real signals. These characteristics can be used in several contexts, for example, to evaluate the progress of people with myopathies and neuromyopathies undergoing physiotherapy, and to follow the development of athletes seeking to improve their muscle capacity according to their sport.
In both cases, the extraction of these characteristics from the surface electromyography signals provides feedback to the physiotherapist and the physical trainer, who can track the increase in the proportion of a given fiber type, as desired in each case.
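A minimal sketch of the two selected features and a k-means call is given below; it is not the thesis's pipeline. The sampling rate, window length, Welch segment size, and the synthetic Gaussian windows are all hypothetical, chosen only to show how mean and median frequency are computed from a power spectrum.

```python
# Minimal sketch: mean/median frequency of an sEMG window and k-means
# clustering of the resulting features (illustrative, not the thesis pipeline).
import numpy as np
from scipy.signal import welch
from sklearn.cluster import KMeans

fs = 2000.0                                  # Hz, hypothetical sampling rate
rng = np.random.default_rng(0)
windows = rng.standard_normal((30, 2048))    # 30 synthetic signal windows

def spectral_features(x, fs):
    f, pxx = welch(x, fs=fs, nperseg=512)
    mnf = np.sum(f * pxx) / np.sum(pxx)            # mean frequency
    cum = np.cumsum(pxx)
    mdf = f[np.searchsorted(cum, cum[-1] / 2.0)]   # median frequency
    return mnf, mdf

features = np.array([spectral_features(w, fs) for w in windows])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(features[:3])
print(labels)
```

On real data, each window would come from a recorded sEMG epoch, and the cluster labels would be compared against the known (simulated) proportions of recruited motor units.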
Abstract:
This study aims to evaluate the uncertainty associated with measurements made with an aneroid sphygmomanometer, a neonatal electronic balance, and an electrocautery unit. To this end, repeatability tests were performed on all devices, followed by normality tests using the Shapiro-Wilk test; identification of the influencing factors that affect the measurement result of each device; proposition of mathematical models to calculate the measurement uncertainty associated with the measurements evaluated for all the equipment and with the calibration of the neonatal electronic balance; evaluation of the measurement uncertainty; and development of a computer program in Java to systematize the estimation of calibration and measurement uncertainties. A 2³ factorial design was proposed and carried out for the aneroid sphygmomanometer in order to investigate the effects of the factors temperature, patient, and operator, and a 3² design for the electrocautery, in which the effects of the factors temperature and electrical output power were investigated. The expanded uncertainty associated with the measurement of blood pressure significantly reduced the width of the patient classification ranges. In turn, the expanded uncertainty associated with mass measurement on the neonatal balance indicated a variation of about 1% in the dosage of medication for neonates. Analysis of variance (ANOVA) and the Tukey test indicated significant and inversely proportional effects of the temperature factor on the cutting and coagulation power values indicated by the electrocautery, and no significant effect of the factors investigated for the aneroid sphygmomanometer.
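As background only, the standard GUM expressions for combined and expanded uncertainty (for uncorrelated input quantities) are sketched below; the device-specific models proposed in the work build on, but are not reduced to, these formulas.

```latex
% GUM combined and expanded uncertainty for uncorrelated inputs
% (standard background expressions, not the device-specific models of the work).
\begin{align}
  u_c^2(y) &= \sum_{i=1}^{n} \left( \frac{\partial f}{\partial x_i} \right)^{2} u^2(x_i), \\
  U &= k \, u_c(y),
\end{align}
```

where y = f(x_1, ..., x_n) is the measurand, u(x_i) are the standard uncertainties of the input quantities, and k is the coverage factor (k ≈ 2 for roughly 95% coverage).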
Abstract:
Play as a mode of learning is inherent not only to human beings but, in general, to the animal kingdom. For any mammal, play is the fundamental form of learning. Through play, animals learn to fight, to defend themselves, and the basic rules of coexistence within the pack. In humans, however, play and learning have progressively drifted apart, except in the early stages of growth, in which children still learn the most basic behaviours through games. As we advance through school, play is gradually abandoned, with recreational activities set in opposition to those strictly related to work and to more demanding learning. Thus, by the time students reach university, play has been completely abandoned as a form of learning. It is not easy to define what play is or what its characteristics are. It has a strong cultural component: activities that some cultures may consider eminently playful will not be so in other cultural contexts. Nevertheless, once the importance of play in the development of personality is acknowledged, we can establish some of the basic functions that play performs in human beings with regard to the acquisition and refinement of cognitive, social, and behavioural skills. Play facilitates the integration of experiences into behaviour, helps inhibit socially unacceptable behaviours, and reinforces those with greater acceptance within the cultural frame of reference. It considerably improves social interaction and the acquisition of the basic skills needed for that interaction to take place satisfactorily. In the case of competitive games, it teaches how to handle unfavourable situations and how to endure and overcome frustration. Traditionally, games have been used at the initial levels of education; however, they are also a powerful tool at the university level, especially for promoting active learning and the acquisition of a variety of professional competences. This project proposes the development of a tool for creating board game simulators for educational purposes.
Abstract:
We investigate by means of Monte Carlo simulation and finite-size scaling analysis the critical properties of the three-dimensional O(5) non-linear σ model and of the antiferromagnetic RP^(2) model, both of them regularized on a lattice. High-accuracy estimates are obtained for the critical exponents, universal dimensionless quantities, and critical couplings. It is concluded that both models belong to the same universality class, provided that rather non-standard identifications are made for the momentum-space propagator of the RP^(2) model. We have also investigated the phase diagram of the RP^(2) model extended by a second-neighbor interaction. A rich phase diagram is found, where most of the phase transitions are of first order.
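For context only, the generic finite-size scaling ansatz underlying this kind of analysis can be written as below; this is standard background, not the specific observables of the study.

```latex
% Generic finite-size scaling ansatz for a dimensionless observable g
% (background only; not the specific quantities of the study above).
g(L, \beta) = \hat{g}\!\left( L^{1/\nu} \, t \right) + O\!\left( L^{-\omega} \right),
\qquad t = \frac{\beta - \beta_c}{\beta_c},
```

where ν is the correlation-length exponent and ω the leading correction-to-scaling exponent; curves of g for different lattice sizes L cross near β_c, which is how critical couplings and universal dimensionless quantities are typically estimated.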
Abstract:
The phase diagram of the simplest approximation to double-exchange systems, the bosonic double-exchange model with antiferromagnetic (AFM) superexchange coupling, is fully worked out by means of Monte Carlo simulations, large-N expansions, and variational mean-field calculations. We find a rich phase diagram, with no first-order phase transitions. The most surprising finding is the existence of a segmentlike ordered phase at low temperature for intermediate AFM coupling which cannot be detected in neutron-scattering experiments. This is signaled by a maximum (a cusp) in the specific heat. Below the phase transition, only short-range ordering would be found in neutron scattering. Researchers looking for a quantum critical point in manganites should be wary of this possibility. Finite-size scaling estimates of critical exponents are presented, although large scaling corrections are present in the reachable lattice sizes.
Abstract:
The phase diagram of the double perovskites of the type Sr_(2-x)La_(x)FeMoO_(6) is analyzed, with and without disorder due to antisites. In addition to a homogeneous half-metallic ferrimagnetic phase in the absence of doping and disorder, we find antiferromagnetic phases at large dopings, and other ferrimagnetic phases with lower saturation magnetization in the presence of disorder.
Abstract:
We study the fluctuation-dissipation relations for a three dimensional Ising spin glass in a magnetic field both in the high temperature phase as well as in the low temperature one. In the region of times simulated we have found that our results support a picture of the low temperature phase with broken replica symmetry, but a droplet behavior cannot be completely excluded.
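As background only, the standard relations compared in this kind of analysis are sketched below (for Ising spins, with the integrated response to a field switched on at the waiting time t_w); this is a hedged sketch of the usual definitions, not a result of the work itself.

```latex
% Equilibrium FDT and its off-equilibrium generalization for Ising spins
% (standard background relations; not a result of the work above).
C(t, t_w) = \frac{1}{N} \sum_{i=1}^{N} \langle s_i(t)\, s_i(t_w) \rangle, \qquad
T \chi(t, t_w) =
\begin{cases}
  1 - C(t, t_w), & \text{equilibrium FDT},\\[4pt]
  \displaystyle \int_{C(t, t_w)}^{1} X(C')\, \mathrm{d}C', & \text{off-equilibrium}.
\end{cases}
```

In equilibrium X(C) = 1; roughly speaking, a nontrivial X(C) in the aging regime is the signature expected for broken replica symmetry, whereas the droplet picture predicts a vanishing response contribution there.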
Abstract:
It is shown that a bosonic formulation of the double-exchange model, one of the classical models for magnetism, generates dynamically a gauge-invariant phase in a finite region of the phase diagram. We use analytical methods, Monte Carlo simulations and finite-size scaling analysis. We study the transition line between that region and the paramagnetic phase. The numerical results show that this transition line belongs to the universality class of the antiferromagnetic RP^(2) model. The fact that one can define a universality class for the antiferromagnetic RP^(2) model, different from the one of the O(N) models, is puzzling and somehow contradicts naive expectations about universality.
Abstract:
Considering the disorder caused in manganites by the substitution Mn→Fe or Ga, we carry out a systematic study of doped manganites begun in previous papers. To this end, a disordered model is formulated and solved using the variational mean-field technique. The subtle interplay between double exchange, superexchange, and disorder causes similar effects on the dependence of T_(C) on the percentage of Mn substitution in the cases considered. Yet, in La_(2/3)Ca_(1/3)Mn_(1-y)Ga_(y)O_(3) our results suggest a quantum critical point (QCP) for y ≈ 0.1–0.2, associated with the localization of the electronic states of the conduction band. In the case of La_(1-x)Ca_(x)Mn_(1-y)Fe_(y)O_(3) (with x = 1/3, 3/8) no such QCP is expected.
Abstract:
We study the phase diagram of the double exchange model, with antiferromagnetic interactions, in a cubic lattice both at zero and finite temperature. There is a rich variety of magnetic phases, combined with regions where phase separation takes place. We identify phases, intrinsic to the cubic lattice, which are stable for realistic values of the interactions and dopings. Some of these phases break chiral symmetry, leading to unusual features.
Abstract:
The food industry is required to deliver products that are safe, nutritious, palatable, and quick and convenient to use. Combining all of these qualities in a single food is an arduous task. Two examples will suffice. An intense preservation treatment, with good prospects from a health standpoint, usually entails a loss of nutritional value and unattractive sensory characteristics. Handling foods to turn them into ready-to-eat products implies assuming certain microbiological risks, greater than those assumed for unhandled products. How should we respond to the increase in risks and hazards looming over these "new foods"? One alternative that has gained adherents is predictive microbiology. It is a useful tool, available to any party interested in food, that predicts microbial behaviour under given conditions by means of mathematical models. Most of the available models predict single values (each value of the independent variable corresponds to a single value of the dependent variable); they have proven effective for decades on the basis of over-dimensioned treatments designed to safeguard the microbiological quality of foods, and they predict a mean without considering variability. Consider a decimal reduction value, D, of 1 minute. If the product contains 10³ cfu/g, a 1 kg package that has undergone a 6D treatment will contain 1 viable cell. So far, the prediction of a classical model. Now think of an industrial production run, thousands of 1 kg packages per hour. Who can believe that every one of them will contain exactly 1 surviving microorganism? Is it not more plausible that some will contain no viable cells, many will contain 1, others 2 or 3, and perhaps a few 5 or 6? Models that do not consider microbial variability predict the growth rate accurately but have failed to predict the lag phase...
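The arithmetic behind the 6D example, and the package-to-package variability it motivates, can be sketched as follows; this is a minimal illustration, assuming that survivors per package follow a Poisson distribution with the deterministic prediction as its mean, and the package count is hypothetical.

```python
# Deterministic 6D prediction vs. package-to-package variability
# (illustrative sketch; the Poisson assumption and package count are hypothetical).
import numpy as np

D = 1.0                                # min, decimal reduction time
t = 6.0 * D                            # 6D treatment
n0 = 1e3 * 1e3                         # 10^3 cfu/g in a 1 kg (10^3 g) package
mean_survivors = n0 * 10 ** (-t / D)   # classical prediction: 1 cfu per package
print(f"deterministic prediction: {mean_survivors:.1f} cfu per package")

rng = np.random.default_rng(0)
packages = rng.poisson(mean_survivors, size=10_000)   # e.g. 10,000 packages/h
for k, n in enumerate(np.bincount(packages)):
    print(f"{n:5d} packages with {k} survivors")
```

The deterministic model returns the mean (1 survivor per package), while the simulated distribution shows many packages with zero survivors and a tail with several, which is precisely the variability the abstract argues classical models ignore.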
Abstract:
Forecasting is the basis for making strategic, tactical, and operational business decisions. In financial economics, several techniques have been used over the past decades to predict the behavior of assets. There are thus several methods to assist in the task of time series forecasting; however, conventional modeling techniques, such as statistical models and those based on theoretical mathematical models, have produced unsatisfactory predictions, increasing the number of studies on more advanced forecasting methods. Among these, Artificial Neural Networks (ANN) are a relatively new and promising method for business forecasting, a technique that has attracted much interest in the financial environment and has been used successfully in a wide variety of financial modeling applications, in many cases proving superior to ARIMA-GARCH statistical models. In this context, this study aimed to examine whether ANNs are a more appropriate method for predicting the behavior of capital market indices than the traditional methods of time series analysis. For this purpose a quantitative study based on financial and economic indices was carried out, and two supervised-learning feedforward ANN models were developed, whose structures consisted of 20 inputs, 90 neurons in a single hidden layer, and one output (the Ibovespa). These models were trained with backpropagation, using a tangent-sigmoid activation function and a linear output function. To analyze the suitability of artificial neural networks for forecasting the Ibovespa, their results were compared with those of a GARCH(1,1) time series model. Once both methods (ANN and GARCH) had been applied, the forecasts were compared with the historical data and the forecast errors were studied by means of the MSE, RMSE, MAE, standard deviation, Theil's U, and forecast encompassing tests. It was found that the models developed by means of ANNs had lower MSE, RMSE, and MAE than the GARCH(1,1) model, and the Theil's U test indicated that the three models have smaller errors than a naïve forecast. Although the ANN based on returns had lower precision indicator values than the ANN based on prices, the forecast encompassing test rejected the hypothesis that one model is better than the other, indicating that the ANN models have a similar level of accuracy. It was concluded that, for the data series studied, the ANN models provide a more appropriate Ibovespa forecast than the traditional time series models, represented by the GARCH model.
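A minimal sketch of a network with the topology described above (20 inputs, one hidden layer of 90 tanh units, linear output) is given below using scikit-learn's MLPRegressor as a stand-in; it is not the thesis's original implementation, and the lagged-value construction and the random-walk series are placeholders for the Ibovespa data.

```python
# Feedforward network mirroring the topology described above
# (20 inputs, 90 tanh hidden units, linear output); illustrative only.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
series = rng.standard_normal(1000).cumsum()   # placeholder for the index series

def lagged_matrix(y, n_lags=20):
    """Build supervised pairs: 20 lagged values -> next value."""
    X = np.array([y[i:i + n_lags] for i in range(len(y) - n_lags)])
    target = y[n_lags:]
    return X, target

X, target = lagged_matrix(series)
split = int(0.8 * len(X))
model = MLPRegressor(hidden_layer_sizes=(90,), activation="tanh",
                     solver="adam", max_iter=2000, random_state=0)
model.fit(X[:split], target[:split])
pred = model.predict(X[split:])
mse = np.mean((pred - target[split:]) ** 2)
print(f"out-of-sample MSE: {mse:.4f}")
```

In a comparison like the one described, the same out-of-sample errors (MSE, RMSE, MAE) would be computed for a GARCH(1,1) benchmark fitted to the same series.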