905 results for "Mathematical prediction".
Abstract:
The accurate identification of the nitrogen content in plants is extremely important, since it involves economic aspects and environmental impacts. Several experimental tests have been carried out to obtain characteristics and parameters associated with plant health and growth. Nitrogen content identification in plants involves many non-linear parameters and complex mathematical models. This paper describes a novel approach for identifying nitrogen content through the SPAD index using artificial neural networks (ANN). The network acts as an identifier of the relationships among crop varieties, fertilizer treatments, leaf types and nitrogen content in the plants (target). Nitrogen content can thus be generalized and estimated from an input parameter set. This approach can form the basis for the development of an accurate real-time system to predict nitrogen content in plants.
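As a rough illustration of the approach described above (not the authors' implementation), the sketch below trains a small feed-forward network on synthetic, one-hot-encoded crop inputs. The encodings, network size, learning rate and target values are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def one_hot(idx, n):
    """Encode one categorical level as a one-hot vector of length n."""
    v = np.zeros(n)
    v[idx] = 1.0
    return v

# Hypothetical categorical inputs: 3 crop varieties, 4 fertilizer
# treatments, 2 leaf types -> 9 one-hot features per sample.
X = np.array([np.concatenate([one_hot(v, 3), one_hot(t, 4), one_hot(l, 2)])
              for v in range(3) for t in range(4) for l in range(2)])
# Synthetic SPAD-index targets: fertilizer treatment drives nitrogen content.
y = np.array([20.0 + 5.0 * t + 2.0 * v - 1.0 * l
              for v in range(3) for t in range(4) for l in range(2)])

# One hidden tanh layer, trained by plain gradient descent on squared error.
W1 = rng.normal(0.0, 0.5, (X.shape[1], 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1));          b2 = np.zeros(1)
lr = 0.05
for _ in range(8000):
    h = np.tanh(X @ W1 + b1)             # hidden activations
    err = (h @ W2 + b2).ravel() - y      # prediction error
    gh = (err[:, None] * W2.T) * (1.0 - h ** 2)
    W2 -= lr * h.T @ err[:, None] / len(y); b2 -= lr * err.mean()
    W1 -= lr * X.T @ gh / len(y);           b1 -= lr * gh.mean(axis=0)

pred = (np.tanh(X @ W1 + b1) @ W2 + b2).ravel()
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))   # fit on the toy data
```

After training, the network generalizes a SPAD estimate from any (variety, treatment, leaf-type) combination, which is the input-to-target mapping the abstract describes.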
Abstract:
The cross-section for the scattering of a photon by the Sun's gravitational field, treated as an external field, is computed in the framework of R + R² gravity. Using this result, we find that for a photon just grazing the Sun's surface the deflection is 1.75 arcseconds, exactly the same value given by Einstein's theory. An explanation for this pseudo-paradox is provided.
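For context, the 1.75 figure is the classic grazing-incidence deflection in arcseconds, δ = 4GM/(c²R); the abstract's point is that R + R² gravity reproduces it. A quick numerical check with standard solar values:

```python
import math

G, c = 6.674e-11, 2.998e8        # gravitational constant, speed of light (SI)
M_sun, R_sun = 1.989e30, 6.957e8  # solar mass (kg) and radius (m)

delta_rad = 4.0 * G * M_sun / (c ** 2 * R_sun)       # deflection in radians
delta_arcsec = math.degrees(delta_rad) * 3600.0      # -> about 1.75 arcsec
```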
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Forecasting, for obvious reasons, often becomes the most important goal to be achieved. For spatially extended systems (e.g. the atmospheric system), where local nonlinearities lead to the most unpredictable chaotic evolution, it is highly desirable to have a simple diagnostic tool to identify regions of predictable behaviour. In this paper, we discuss the use of the bred vector (BV) dimension, a recently introduced statistic, to identify the regimes where a finite-time forecast is feasible. Using tools from dynamical systems theory and Bayesian modelling, we show finite-time predictability in two-dimensional coupled map lattices in the regions of low BV dimension. © Indian Academy of Sciences.
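The BV dimension statistic referenced above (introduced by Patil et al.) can be sketched compactly: stack k local bred vectors, take their singular values sᵢ, and compute ψ = (Σsᵢ)² / Σsᵢ². The toy vectors below are illustrative.

```python
import numpy as np

def bv_dimension(bred_vectors):
    """BV dimension of a (k, n) stack of k local bred vectors of length n."""
    s = np.linalg.svd(np.asarray(bred_vectors, float), compute_uv=False)
    return float(s.sum() ** 2 / (s ** 2).sum())

# If all bred vectors align (rank one), the dimension collapses toward 1:
# a low BV dimension, flagging a locally predictable regime.
aligned = np.outer([1.0, 2.0, 3.0], [1.0, 0.5, 0.2, 0.1])
psi_low = bv_dimension(aligned)         # ~1.0

# Mutually orthogonal bred vectors give the maximal dimension k.
psi_high = bv_dimension(np.eye(3, 5))   # 3.0
```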
Abstract:
Several systems are currently being tested in order to obtain a feasible and safe method for the automation and control of the grinding process. This work aims to predict the surface roughness of SAE 1020 steel parts ground in a surface grinding machine. Acoustic emission and electrical power signals were acquired by a commercial data acquisition system, the former from a fixed sensor placed near the workpiece and the latter from the electric induction motor that drives the grinding wheel. Both signals were digitally processed through known statistics, which, together with the depth of cut, composed three data sets fed to the artificial neural networks. The neural network, through its mathematical-logical system, interpreted the signals and successfully predicted the workpiece roughness. The results from the neural networks were compared to the roughness values measured on the workpieces, showing high efficiency and applicability for monitoring and controlling the grinding process. A comparison among the three data sets was also carried out.
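The abstract does not specify which statistics were extracted from the digitized signals, so the features below (RMS, variance, peak) are illustrative assumptions of the kind of inputs such a network might receive.

```python
import numpy as np

def signal_features(x):
    """Simple statistics of one digitized acoustic-emission frame."""
    x = np.asarray(x, float)
    return {
        "rms": float(np.sqrt(np.mean(x ** 2))),   # overall signal energy
        "variance": float(x.var()),
        "peak": float(np.abs(x).max()),
    }

# Synthetic AE-like frame: a pure tone sampled over ten periods.
frame = np.sin(np.linspace(0.0, 20.0 * np.pi, 1000))
feats = signal_features(frame)
# The RMS of a unit-amplitude sinusoid is ~1/sqrt(2) ~ 0.707.
```

Together with the depth of cut, a feature vector like this would form one row of the data sets the abstract feeds to the neural networks.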
Abstract:
This work used colloidal theory to describe the forces and energy interactions of colloidal complexes in water and those formed during the filtration run in direct filtration. Many particle energy interaction profiles between colloidal surfaces are presented herein for three geometries (spherical, plate and cylindrical) and four surface interaction arrangements: two cylinders, two spheres, two plates, and a sphere and a plate. Two situations were analyzed, before and after electrostatic destabilization by the action of aluminum sulfate as coagulant, in water samples prepared with kaolin. Mathematical modeling used the extended DLVO theory (named after Derjaguin, Landau, Verwey and Overbeek), or XDLVO, which includes the traditional treatment of the electric double layer (EDL), surface attraction or London-van der Waals (LvdW) forces, steric forces and hydrophobic forces, additionally considering other forces in the colloidal system, such as molecular (Born) repulsion and Lewis acid-base (AB) forces.
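Two textbook ingredients of the (X)DLVO energy balance for the sphere-sphere arrangement can be sketched as below; the Hamaker constant, surface potential and Debye parameter are illustrative values, not the study's fitted parameters, and the close-approach forms assume separation much smaller than the radii.

```python
import math

def vdw_energy(h, a1, a2, hamaker):
    """Non-retarded London-van der Waals attraction, close-approach form (J)."""
    return -hamaker * a1 * a2 / (6.0 * h * (a1 + a2))

def edl_energy(h, a1, a2, psi, kappa, eps=7.08e-10):
    """Linearized constant-potential EDL repulsion between two spheres (J).

    eps: permittivity of water; psi: surface potential (V); kappa: inverse
    Debye length (1/m); h: surface separation (m).
    """
    reff = a1 * a2 / (a1 + a2)
    return 2.0 * math.pi * eps * reff * psi ** 2 * math.log1p(math.exp(-kappa * h))

# Profile point at 5 nm separation for two 500 nm kaolin-like particles:
# the EDL term dominates here, giving the repulsive barrier that coagulation
# with aluminum sulfate is meant to suppress.
h = 5e-9
total = (vdw_energy(h, 5e-7, 5e-7, 1e-20)
         + edl_energy(h, 5e-7, 5e-7, 0.025, 1e8))
```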
Abstract:
[EN] Background: Cervical cancer is treated mainly by surgery and radiotherapy. Toxicity due to radiation is a limiting factor for treatment success. Determination of lymphocyte radiosensitivity by radio-induced apoptosis arises as a possible method for predictive test development. The aim of this study was to analyze radio-induced apoptosis of peripheral blood lymphocytes. Methods: Ninety-four consecutive patients suffering from cervical carcinoma, diagnosed and treated in our institution, and four healthy controls were included in the study. Toxicity was evaluated using the Lent-Soma scale. Peripheral blood lymphocytes were isolated, irradiated at 0, 1, 2 and 8 Gy, and incubated for 24, 48 and 72 hours. Apoptosis was measured by flow cytometry using annexin V/propidium iodide to determine early and late apoptosis. Lymphocytes were marked with CD45 APC-conjugated monoclonal antibody. Results: Radiation-induced apoptosis (RIA) increased with radiation dose and incubation time. The data strongly fitted a semilogarithmic model: RIA = β·ln(Gy) + α. This mathematical model is defined by two constants: α, the intercept of the curve on the Y axis, which determines the percentage of spontaneous cell death; and β, the slope of the curve, which determines the percentage of cell death induced at a given radiation dose (β = ΔRIA/Δln(Gy)). Higher β values (increased rate of RIA at given radiation doses) were observed in patients with low sexual toxicity (Exp(B) = 0.83, C.I. 95% (0.73-0.95), p = 0.007; Exp(B) = 0.88, C.I. 95% (0.82-0.94), p = 0.001; Exp(B) = 0.93, C.I. 95% (0.88-0.99), p = 0.026 for 24, 48 and 72 hours respectively). This relation was also found for rectal (Exp(B) = 0.89, C.I. 95% (0.81-0.98), p = 0.026; Exp(B) = 0.95, C.I. 95% (0.91-0.98), p = 0.013 for 48 and 72 hours respectively) and urinary (Exp(B) = 0.83, C.I. 95% (0.71-0.97), p = 0.021 for 24 hours) toxicity.
Conclusion: Radiation-induced apoptosis at different time points and radiation doses fitted a semilogarithmic model defined by a mathematical equation that gives an individual value of radiosensitivity and could predict late toxicity due to radiotherapy. Further prospective studies with a larger number of patients are needed to validate these results.
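Fitting the semilogarithmic model above, RIA = β·ln(Gy) + α, reduces to ordinary linear regression on ln(dose). The apoptosis percentages below are synthetic, purely to show the mechanics of recovering β and α (the 0 Gy point is excluded, since ln 0 is undefined; it supplies the spontaneous-death baseline instead).

```python
import numpy as np

dose = np.array([1.0, 2.0, 8.0])     # Gy
ria = np.array([12.0, 15.5, 22.0])   # % radiation-induced apoptosis (synthetic)

# Linear least squares on ln(dose): slope is beta, intercept is alpha.
beta, alpha = np.polyfit(np.log(dose), ria, 1)
# beta: extra apoptosis per unit ln(Gy); alpha: value at 1 Gy (ln 1 = 0).
```

In the study's framework, a patient's fitted β is the individual radiosensitivity value compared against late toxicity.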
Abstract:
OBJECTIVES: Treatment as prevention depends on retaining HIV-infected patients in care. We investigated the effect on HIV transmission of bringing patients lost to follow-up (LTFU) back into care. DESIGN: Mathematical model. METHODS: Stochastic mathematical model of cohorts of 1000 HIV-infected patients on antiretroviral therapy (ART), based on data from two clinics in Lilongwe, Malawi. We calculated the cohort viral load (CVL; sum of individual mean viral loads each year) and used a mathematical relationship between viral load and transmission probability to estimate the number of new HIV infections. We simulated four scenarios: 'no LTFU' (all patients stay in care); 'no tracing' (patients LTFU are not traced); 'immediate tracing' (after a missed clinic appointment); and 'delayed tracing' (after six months). RESULTS: About 440 of 1000 patients were LTFU over five years. CVL (million copies/ml per 1000 patients) was 3.7 (95% prediction interval [PrI] 2.9-4.9) for no LTFU, 8.6 (95% PrI 7.3-10.0) for no tracing, 7.7 (95% PrI 6.2-9.1) for immediate tracing, and 8.0 (95% PrI 6.7-9.5) for delayed tracing. Comparing no LTFU with no tracing, the number of new infections increased from 33 (95% PrI 29-38) to 54 (95% PrI 47-60) per 1000 patients. Immediate tracing prevented 3.6 (95% PrI -3.3-12.8) and delayed tracing 2.5 (95% PrI -5.8-11.1) new infections per 1000. Immediate tracing was more efficient than delayed tracing: 116 and 142 tracing efforts, respectively, were needed to prevent one new infection. CONCLUSION: Tracing of patients LTFU enhances the preventive effect of ART, but the number of transmissions prevented is small.
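The cohort-viral-load bookkeeping described above can be sketched as follows. The logistic viral-load-to-risk mapping and all numbers here are assumptions for illustration, not the authors' fitted relationship or the Malawi data.

```python
import math

def transmission_prob(log10_vl, p_max=0.1):
    """Hypothetical increasing map from log10 viral load to yearly risk."""
    return p_max / (1.0 + math.exp(-(log10_vl - 4.0)))

def expected_infections(log10_vls):
    """Expected new infections in one year across a cohort."""
    return sum(transmission_prob(v) for v in log10_vls)

# Toy cohort of 1000 patients: 800 virally suppressed, 200 failing/LTFU
# (individual mean viral loads on the log10 copies/ml scale).
cohort = [1.7] * 800 + [4.5] * 200
cvl = sum(10.0 ** v for v in cohort)      # cohort viral load (copies/ml summed)
new_inf = expected_infections(cohort)
```

The model's mechanism is visible here: the minority with unsuppressed viral load dominates both the CVL and the expected number of new infections, which is why tracing patients LTFU matters.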
Abstract:
OBJECTIVES Mortality in patients starting antiretroviral therapy (ART) is higher in Malawi and Zambia than in South Africa. We examined whether different monitoring of ART (viral load [VL] in South Africa and CD4 count in Malawi and Zambia) could explain this mortality difference. DESIGN Mathematical modelling study based on data from ART programmes. METHODS We used a stochastic simulation model to study the effect of VL monitoring on mortality over 5 years. In baseline scenario A, all parameters were identical between strategies except for more timely and complete detection of treatment failure with VL monitoring. Additional scenarios introduced delays in switching to second-line ART (scenario B) or higher virologic failure rates (due to worse adherence) when monitoring was based on CD4 counts only (scenario C). Results are presented as relative risks (RR) with 95% prediction intervals and as the percentage of the observed mortality difference explained. RESULTS RRs comparing VL with CD4 cell count monitoring were 0.94 (0.74-1.03) in scenario A, 0.94 (0.77-1.02) with delayed switching (scenario B) and 0.80 (0.44-1.07) when assuming a three times higher rate of failure (scenario C). The observed mortality at 3 years was 10.9% in Malawi and Zambia and 8.6% in South Africa (absolute difference 2.3%). The percentage of the mortality difference explained by VL monitoring ranged from 4% (scenario A) to 32% (scenarios B and C combined, assuming a three times higher failure rate). Eleven percent was explained by non-HIV-related mortality. CONCLUSIONS VL monitoring reduces mortality moderately when assuming improved adherence and decreased failure rates.
Abstract:
Nowadays, Computational Fluid Dynamics (CFD) solvers are widely used within industry to model fluid flow phenomena. Several fluid flow model equations have been employed over the last decades to simulate and predict the forces acting, for example, on different aircraft configurations. Computational time and accuracy depend strongly on the fluid flow model equation and on the spatial dimension of the problem considered. While simple models based on perfect flows, such as panel methods or potential flow models, can be very fast to solve, they usually suffer from poor accuracy when simulating real (transonic, viscous) flows. On the other hand, more complex models such as the full Navier-Stokes equations provide high-fidelity predictions, but at a much higher computational cost. Thus, a good compromise between accuracy and computational time has to be found for engineering applications. A discretisation technique widely used within industry is the so-called Finite Volume approach on unstructured meshes. This technique spatially discretises the flow motion equations onto a set of elements which form a mesh, a discrete representation of the continuous domain. Using this approach, for a given flow model equation, the accuracy and computational time mainly depend on the distribution of the nodes forming the mesh. Therefore, a good compromise between accuracy and computational time might be obtained by carefully defining the mesh. However, defining an optimal mesh for complex flows and geometries requires a very high level of expertise in fluid mechanics and numerical analysis, and in most cases simply guessing which regions of the computational domain most affect the accuracy is impossible. Thus, it is desirable to have an automated remeshing tool, which is more flexible with unstructured meshes than with their structured counterpart. However, the adaptive methods currently in use still leave an open question: how to drive the adaptation efficiently?
Pioneering sensors based on flow features generally suffer from a lack of reliability, so in the last decade more effort has been put into developing numerical-error-based sensors, such as adjoint-based adaptation sensors. While very efficient at adapting meshes for a given functional output, the latter method is very expensive, as it requires solving a dual set of equations and computing the sensor on an embedded mesh. Therefore, it would be desirable to develop a more affordable numerical error estimation method. The current work aims at estimating the truncation error, which arises when discretising a partial differential equation: it consists of the higher-order terms neglected in the construction of the numerical scheme. The truncation error provides very useful information, as it is strongly related to the flow model equation and its discretisation. On the one hand, it is a very reliable measure of the quality of the mesh, and therefore very useful for driving a mesh adaptation procedure. On the other hand, it is strongly linked to the flow model equation, so that a careful estimation actually gives information on how well a given equation is solved, which may be useful in the context of τ-extrapolation or zonal modelling. This work is organized as follows: Chap. 1 contains a short review of mesh adaptation techniques as well as numerical error prediction. In the first section, Sec. 1.1, the basic refinement strategies are reviewed and the main contributions to structured and unstructured mesh adaptation are presented. Sec. 1.2 introduces the definitions of the errors encountered when solving Computational Fluid Dynamics problems and reviews the most common approaches to predict them. Chap. 2 is devoted to the mathematical formulation of truncation error estimation in the context of the finite volume methodology, as well as a complete verification procedure.
Several features are studied, such as the influence of grid non-uniformities, non-linearity, boundary conditions and non-converged numerical solutions. This verification part has been submitted to and accepted for publication in the Journal of Computational Physics. Chap. 3 presents a mesh adaptation algorithm based on truncation error estimates and compares the results to a feature-based and an adjoint-based sensor (in collaboration with Jorge Ponsín, INTA). Two- and three-dimensional cases relevant for validation in the aeronautical industry are considered. This part has been submitted to and accepted by the AIAA Journal. An extension to the Reynolds-Averaged Navier-Stokes equations is also included, where τ-estimation-based mesh adaptation and τ-extrapolation are applied to viscous wing profiles. The latter has been submitted to the Proceedings of the Institution of Mechanical Engineers, Part G: Journal of Aerospace Engineering. Keywords: mesh adaptation, numerical error prediction, finite volume
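The truncation error this thesis estimates can be illustrated on the simplest possible model problem: insert a smooth solution into the discrete operator and subtract the continuous right-hand side. For u'' = f with u = sin(x) and a central second difference, the leftover is τ ≈ (h²/12)·u'''', so halving h should cut τ by about four. This 1-D finite-difference sketch is only an analogue of the finite-volume machinery in the thesis.

```python
import numpy as np

def truncation_error(n):
    """Max truncation error of the central second difference on n cells."""
    x = np.linspace(0.0, np.pi, n + 1)
    h = x[1] - x[0]
    u = np.sin(x)
    lap = (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h ** 2   # discrete u''
    f = -np.sin(x[1:-1])                              # exact u''
    return float(np.max(np.abs(lap - f))), h

t1, h1 = truncation_error(32)
t2, h2 = truncation_error(64)
ratio = t1 / t2   # expect ~4 for a second-order scheme
```

A map of this leftover over the mesh is exactly the kind of reliable, equation-linked sensor the thesis uses to drive adaptation.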
Abstract:
There has been a recent burst of activity in the atmosphere/ocean sciences community in utilizing stable linear Langevin stochastic models for the unresolved degrees of freedom in stochastic climate prediction. Here several idealized models for stochastic climate modeling are introduced and analyzed through unambiguous mathematical theory. This analysis demonstrates the potential need for more sophisticated models beyond stable linear Langevin equations. The new phenomena include the emergence of unstable linear Langevin stochastic models for the climate mean and the need to incorporate both suitable nonlinear effects and multiplicative noise in stochastic models under appropriate circumstances. The strategy for stochastic climate modeling that emerges from this analysis is illustrated on an idealized example involving truncated barotropic flow on a beta-plane with topography and a mean flow. In this example, the effect of the original 57 degrees of freedom is well represented by a theoretically predicted stochastic model with only 3 degrees of freedom.
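The baseline model class the abstract starts from, a stable linear Langevin equation dx = -γx dt + σ dW, can be sketched with a plain Euler-Maruyama integrator; the parameter values are illustrative. Its stationary variance should approach the Ornstein-Uhlenbeck value σ²/(2γ).

```python
import numpy as np

rng = np.random.default_rng(1)
gamma, sigma, dt, nsteps = 1.0, 0.5, 0.01, 200_000

noise = sigma * np.sqrt(dt) * rng.standard_normal(nsteps)
x = 0.0
samples = np.empty(nsteps)
for i in range(nsteps):
    x += -gamma * x * dt + noise[i]   # Euler-Maruyama step
    samples[i] = x

var = float(samples[1000:].var())     # discard the initial transient
# Theory: stationary variance = sigma**2 / (2 * gamma) = 0.125
```

The abstract's point is that this stable linear form is sometimes inadequate: the analysis shows regimes needing an unstable linear drift, nonlinear terms, or multiplicative (state-dependent) noise instead.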
Abstract:
Reports 2 and 3 by E. Isaacson, J. J. Stoker, and A. Troesch.
Abstract:
The growth behaviour of the vibrational wear phenomenon known as rail corrugation is investigated analytically and numerically using mathematical models. A simplified feedback model for wear-type rail corrugation that includes a wheel-pass time delay is developed, with the aim of analytically distilling the most critical interaction occurring between the wheel/rail structural dynamics, rolling contact mechanics and rail wear. To this end, a stability analysis of the complete system is performed to determine the growth of wear-type rail corrugations over multiple wheelset passages. This analysis indicates that although the dynamical behaviour of the system is stable for each individual wheel passage, over multiple wheelset passages the growth of wear-type corrugations results from instability due to feedback interaction among the three primary components of the model. The corrugations are shown analytically to grow for all realistic railway parameters. From this analysis, an analytical expression for the exponential growth rate of corrugations in terms of known parameters is developed. This convenient expression is used to perform a sensitivity analysis to identify the critical parameters that most affect corrugation growth. The analytical predictions are shown to compare well with results from a benchmarked time-domain finite element model. (C) 2004 Elsevier B.V. All rights reserved.
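The per-pass feedback mechanism described above can be caricatured in a few lines: each wheelset passage maps the corrugation amplitude through a complex gain G, so the profile grows like |G|ⁿ over n passes even though each single pass is dynamically stable. The gain value below is an illustrative assumption, not a fitted wheel/rail parameter.

```python
import numpy as np

def amplitude_after(n_passes, a0, gain):
    """Corrugation amplitude after n_passes wheelset passages, per-pass gain G."""
    return a0 * abs(gain) ** n_passes

gain = 1.002 + 0.001j             # illustrative per-pass gain with |G| > 1
growth_rate = np.log(abs(gain))   # exponential growth rate per pass
a = amplitude_after(5000, 1e-6, gain)   # a 1-micron seed after 5000 passes
```

This is the sense in which the abstract's instability lives across passes rather than within one: a per-pass gain barely above unity still produces exponential growth, while |G| < 1 would make the same seed decay.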
Abstract:
Objective: Inpatient length of stay (LOS) is an important measure of hospital activity, health care resource consumption, and patient acuity. This research aims at developing an incremental expectation-maximization (EM) based learning approach for a mixture-of-experts (ME) system for on-line prediction of LOS. The batch-mode learning process used in most existing artificial neural networks to predict LOS is unrealistic, as the data become available over time and their patterns change dynamically. In contrast, an on-line process is capable of providing an output whenever a new datum becomes available. This on-the-spot information is therefore more useful and practical for making decisions, especially when dealing with a tremendous amount of data. Methods and material: The proposed approach is illustrated using a real example of gastroenteritis LOS data. The data set was extracted from a retrospective cohort study of all infants born in 1995-1997 and their subsequent admissions for gastroenteritis. The total number of admissions in this data set was n = 692. Linked hospitalization records of the cohort were retrieved retrospectively to derive the outcome measure, patient demographics, and associated co-morbidity information. A comparative study of the incremental learning and batch-mode learning algorithms is considered. The performances of the learning algorithms are compared based on the mean absolute difference (MAD) between the predictions and the actual LOS, and on the proportion of predictions with MAD < 1 day (Prop(MAD < 1)). The significance of the comparison is assessed through a regression analysis. Results: The incremental learning algorithm provides better on-line prediction of LOS when the system has gained sufficient training from more examples (MAD = 1.77 days and Prop(MAD < 1) = 54.3%), compared to the batch-mode learning.
The regression analysis indicates a significant decrease of MAD (p-value = 0.063) and a significant (p-value = 0.044) increase of Prop(MAD