923 results for mathematical equation correction approach
Theoretical approaches to forensic entomology: I. Mathematical model of postfeeding larval dispersal
Abstract:
An overall theoretical approach to modeling phenomena of interest for forensic entomology is advanced. Efforts are concentrated on identifying biological attributes at the individual, population, and community levels of the arthropod fauna associated with decomposing human corpses, and then incorporating these attributes into mathematical models. In particular, this paper describes a diffusion model of the dispersal of postfeeding larvae of blowflies, which are the most common insects associated with corpses.
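For orientation, a minimal one-dimensional diffusion equation of the kind such dispersal models are typically built on is sketched below; the specific form, boundary conditions, and any drift or pupation terms used by the authors are not given in this abstract and are not assumed here.

    \[ \frac{\partial u(x,t)}{\partial t} = D \, \frac{\partial^2 u(x,t)}{\partial x^2}, \]

where u(x,t) denotes the density of postfeeding larvae at position x and time t, and D is an effective diffusion coefficient.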
Abstract:
The application of the Restricted Dynamics Approach in nuclear theory, based on an approximate solution of the many-particle Schrödinger equation that accounts for all conservation laws in a many-nucleon system, is discussed. The Strictly Restricted Dynamics Model is used for the evaluation of binding energies, level schemes, E2 and M1 transition probabilities, as well as the electric quadrupole and magnetic dipole moments of light α-cluster-type nuclei in the region 4 ≤ A ≤ 40. The parameters of the effective nucleon-nucleon interaction potential are evaluated from the ground-state binding energies of the doubly magic nuclei 4He, 16O and 40Ca.
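For reference, the many-nucleon Schrödinger problem that such approaches approximate can be written schematically as below; the restricted-dynamics construction itself and the specific effective potential fitted by the authors are not reproduced here.

    \[ \hat{H}\,\Psi = E\,\Psi, \qquad \hat{H} = \sum_{i=1}^{A} \frac{\hat{p}_i^{\,2}}{2 m_N} + \sum_{i<j} V(\mathbf{r}_i - \mathbf{r}_j), \]

with A the mass number, m_N the nucleon mass, and V the effective nucleon-nucleon interaction.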
Abstract:
We consider a fully model-based approach for the analysis of distance sampling data. Distance sampling has been widely used to estimate the abundance (or density) of animals or plants in a spatially explicit study area. There is, however, no readily available method for making statistical inferences about the relationships between abundance and environmental covariates. Spatial Poisson process likelihoods can be used to simultaneously estimate detection and intensity parameters by modeling distance sampling data as a thinned spatial point process. A model-based spatial approach to distance sampling data has three main benefits: it allows complex and opportunistic transect designs to be employed, it allows estimation of abundance in small subregions, and it provides a framework to assess the effects of habitat or experimental manipulation on density. We demonstrate the model-based methodology with a small simulation study and an analysis of the Dubbo weed data set. A simple ad hoc method for handling overdispersion is also proposed. The simulation study showed that the model-based approach compared favorably to conventional distance sampling methods for abundance estimation, and the overdispersion correction performed adequately when the number of transects was high. Analysis of the Dubbo data set indicated a transect effect on abundance via Akaike's information criterion model selection. Further goodness-of-fit analysis, however, indicated some potential confounding of intensity with the detection function.
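As a rough illustration of the thinned-point-process view of distance sampling described above, the Python sketch below simulates a homogeneous spatial Poisson pattern and thins it with a half-normal detection function; the half-normal form, the single-transect geometry, and all parameter values are illustrative assumptions, not the model fitted in the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    # Homogeneous Poisson process on a 1 x 1 study area (intensity = expected points per unit area).
    intensity = 200.0
    n = rng.poisson(intensity)
    points = rng.uniform(0.0, 1.0, size=(n, 2))

    # Perpendicular distance to a single transect running along x = 0.5.
    distance = np.abs(points[:, 0] - 0.5)

    # Half-normal detection function g(d) = exp(-d^2 / (2 sigma^2)), an illustrative choice.
    sigma = 0.05
    p_detect = np.exp(-distance**2 / (2.0 * sigma**2))

    # Thinning: each point is observed independently with probability g(d).
    observed = points[rng.uniform(size=n) < p_detect]
    print(f"{n} points simulated, {observed.shape[0]} detected")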
Abstract:
We define a Virasoro algebra action on imaginary Verma modules for an affine Lie algebra and construct an analogue of the Knizhnik-Zamolodchikov equation in operator form. Both results are based on a realization of imaginary Verma modules in terms of sums of partial differential operators.
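For reference, the Virasoro algebra acting here is the Lie algebra with generators L_n (n in Z) and a central element c subject to the standard relations below; the imaginary Verma module realization and the operator form of the Knizhnik-Zamolodchikov equation are not reproduced here.

    \[ [L_m, L_n] = (m-n)\,L_{m+n} + \frac{c}{12}\, m\,(m^2-1)\,\delta_{m+n,0}, \qquad [L_m, c] = 0. \]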
Abstract:
Purpose - The purpose of this paper is to develop an efficient numerical algorithm for the self-consistent solution of the Schrödinger and Poisson equations in one-dimensional systems. The goal is to compute the charge-control and capacitance-voltage characteristics of quantum wire transistors. Design/methodology/approach - The paper presents a numerical formulation employing a non-uniform finite-difference discretization scheme, in which the wavefunctions and electronic energy levels are obtained by solving the Schrödinger equation through the split-operator method, while a relaxation method in the FTCS ("Forward Time Centered Space") scheme is used to solve the two-dimensional Poisson equation. Findings - The numerical model is validated by taking previously published results as a benchmark and is then applied to yield the charge-control characteristics and the capacitance-voltage relationship for a split-gate quantum wire device. Originality/value - The paper helps to fulfill the need for C-V models of quantum wire devices. To do so, the authors implemented a straightforward calculation method for the two-dimensional electronic carrier density n(x,y). The formulation reduces the computational procedure to a much simpler problem, similar to the one-dimensional quantization case, significantly diminishing running time.
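As an illustration of the split-operator idea mentioned in the methodology, the Python sketch below advances a one-dimensional wavefunction by FFT-based Strang splitting with hbar = m = 1 and an arbitrary harmonic potential; the non-uniform grid, the Poisson/FTCS coupling, and the actual device geometry of the paper are not reproduced.

    import numpy as np

    # 1D grid (hbar = m = 1); the harmonic potential is an arbitrary illustrative choice.
    n, L = 512, 40.0
    x = np.linspace(-L / 2, L / 2, n, endpoint=False)
    dx = x[1] - x[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    V = 0.5 * x**2
    dt = 0.01

    def split_operator_step(psi):
        """One step of the symmetric splitting exp(-iV dt/2) exp(-iT dt) exp(-iV dt/2)."""
        psi = np.exp(-0.5j * V * dt) * psi
        psi = np.fft.ifft(np.exp(-0.5j * k**2 * dt) * np.fft.fft(psi))
        psi = np.exp(-0.5j * V * dt) * psi
        return psi

    # Example: evolve a Gaussian wave packet and check that the norm is conserved.
    psi = np.exp(-x**2) + 0j
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)
    for _ in range(100):
        psi = split_operator_step(psi)
    print("norm after 100 steps:", np.sum(np.abs(psi)**2) * dx)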
Abstract:
This work used colloidal theory to describe the forces and energy interactions of colloidal complexes in water and of those formed during the filtration run in direct filtration. Interaction energy profiles between colloidal surfaces are presented for three geometries (sphere, plate and cylinder) and four surface interaction arrangements: two cylinders, two spheres, two plates, and a sphere and a plate. Two situations were analyzed, before and after electrostatic destabilization by aluminum sulfate used as coagulant, in water samples prepared with kaolin. The mathematical modeling employed the extended DLVO theory (named after Derjaguin, Landau, Verwey and Overbeek), or XDLVO, which includes the traditional electric double layer (EDL) approach, the London-van der Waals (LvdW) attractive surface forces, steric forces and hydrophobic forces, and additionally considers other forces in the colloidal system, such as molecular (Born) repulsion and Lewis acid-base (AB) forces.
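Schematically, the XDLVO total interaction energy is the sum of the contributions listed above; as one concrete textbook term, the unretarded London-van der Waals energy between two spheres of radii R1 and R2 at surface separation D, in the Derjaguin approximation, is given below. The remaining terms (EDL, steric, hydrophobic, Born, acid-base) depend on further system-specific parameters and are not reproduced here.

    \[ V_{\mathrm{XDLVO}}(D) = V_{\mathrm{EDL}}(D) + V_{\mathrm{LvdW}}(D) + V_{\mathrm{steric}}(D) + V_{\mathrm{hyd}}(D) + V_{\mathrm{Born}}(D) + V_{\mathrm{AB}}(D), \]

    \[ V_{\mathrm{LvdW}}(D) \approx -\frac{A_H}{6D}\,\frac{R_1 R_2}{R_1 + R_2}, \]

where A_H is the Hamaker constant of the particle-water-particle system.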
Abstract:
The present work introduces a novel fractal dimension method for shape analysis. The proposed technique extracts descriptors from a shape by applying a multi-scale approach to the computation of the fractal dimension. The fractal dimension is estimated by applying the curvature scale-space technique to the original shape. By applying a multi-scale transform to this computation, we obtain a set of descriptors capable of describing the shape under investigation with high precision. We validate the computed descriptors in a classification process. The results demonstrate that the novel technique provides highly reliable descriptors, confirming the efficiency of the proposed method. (C) 2012 American Institute of Physics. [http://dx.doi.org/10.1063/1.4757226]
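To make the estimation of a fractal dimension concrete, the Python sketch below uses plain box counting on a binary shape image; this is a common estimator given only for illustration and is not the curvature scale-space, multi-scale descriptor proposed in the paper.

    import numpy as np

    def box_counting_dimension(mask, box_sizes=(2, 4, 8, 16, 32)):
        """Estimate the fractal dimension of a binary image by box counting:
        fit log N(s) against log(1/s), where N(s) is the number of s x s boxes
        containing at least one foreground pixel."""
        counts = []
        for s in box_sizes:
            h, w = mask.shape
            trimmed = mask[: h - h % s, : w - w % s]  # trim so the image tiles exactly
            boxes = trimmed.reshape(trimmed.shape[0] // s, s, trimmed.shape[1] // s, s)
            counts.append(np.count_nonzero(boxes.any(axis=(1, 3))))
        slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
        return slope

    # Example: a filled disk should give a dimension close to 2.
    yy, xx = np.mgrid[0:256, 0:256]
    disk = (xx - 128) ** 2 + (yy - 128) ** 2 < 100 ** 2
    print(box_counting_dimension(disk))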
Abstract:
This paper studies the average-cost control problem for discrete-time Markov Decision Processes (MDPs for short) with general state space, Feller transition probabilities, and possibly non-compact control constraint sets A(x). Two hypotheses are considered: either the cost function c is strictly unbounded, or the multifunctions A_r(x) = {a ∈ A(x) : c(x, a) ≤ r} are upper-semicontinuous and compact-valued for each real r. For these two cases we provide new results for the existence of a solution to the average-cost optimality equality and inequality using the vanishing discount approach. We also study the convergence of the policy iteration approach under these conditions. It should be pointed out that we do not make any assumptions regarding the convergence and the continuity of the limit function generated by the sequence of relative differences of the alpha-discounted value functions and the Poisson equations, as is often encountered in the literature. (C) 2012 Elsevier Inc. All rights reserved.
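For reference, the average-cost optimality equation discussed above can be written schematically as below, with rho the optimal average cost, h a relative value function, and Q(· | x, a) the transition kernel; whether it holds as an equality or only as an inequality under the stated hypotheses is precisely what the paper investigates.

    \[ \rho + h(x) = \min_{a \in A(x)} \left\{ c(x,a) + \int_X h(y)\, Q(dy \mid x, a) \right\}, \qquad x \in X. \]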
Abstract:
In past decades, efforts at quantifying the complexity of systems with a general tool have usually relied on Shannon's classical information framework, addressing the disorder of the system through the Boltzmann-Gibbs-Shannon entropy or one of its extensions. In recent years, however, there have been attempts to tackle the quantification of algorithmic complexity in quantum systems based on Kolmogorov algorithmic complexity, obtaining results that disagree with the classical approach. Therefore, a complexity measure is proposed here using the quantum information formalism, taking advantage of the generality of the classically based complexities and expressing the complexity of these systems in a framework other than the algorithmic one. To do so, the Shiner-Davison-Landsberg (SDL) complexity framework is considered jointly with the linear entropy of the density operators representing the analyzed systems, along with the tangle as the entanglement measure. The proposed measure is then applied to a family of maximally entangled mixed states.
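As a minimal sketch of how an SDL-type complexity can be evaluated from a density operator via the linear entropy, consider the Python code below; the normalization chosen for the linear entropy and the identification of the SDL "disorder" with it are illustrative assumptions, and the tangle-based entanglement part of the proposed measure is not included.

    import numpy as np

    def linear_entropy(rho):
        """Normalized linear entropy S_L = d/(d-1) * (1 - Tr(rho^2)), in [0, 1]."""
        d = rho.shape[0]
        return (d / (d - 1)) * (1.0 - np.trace(rho @ rho).real)

    def sdl_complexity(rho):
        """SDL-type complexity C = Delta * (1 - Delta), with the disorder Delta
        taken here as the normalized linear entropy (an illustrative choice)."""
        delta = linear_entropy(rho)
        return delta * (1.0 - delta)

    # Example: mixtures of a Bell state with the maximally mixed two-qubit state.
    bell = np.zeros((4, 4))
    bell[0, 0] = bell[0, 3] = bell[3, 0] = bell[3, 3] = 0.5
    for p in (0.0, 0.5, 1.0):
        rho = p * bell + (1 - p) * np.eye(4) / 4
        print(p, round(sdl_complexity(rho), 4))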
Abstract:
The scope of this study was to estimate calibrated values for dietary data obtained by the Food Frequency Questionnaire for Adolescents (FFQA) and to illustrate the effect of this approach on food consumption data. The adolescents were assessed on two occasions, with an average interval of twelve months: 393 adolescents participated in 2004, and 289 of them were reassessed in 2005. Dietary data obtained by the FFQA were calibrated using regression coefficients estimated from the average of two 24-hour recalls (24HR) in a subsample. The calibrated values were similar to the 24HR reference measurements in the subsample. In 2004 and 2005, a significant difference was observed between the average consumption levels of the FFQA before and after calibration for all nutrients. With the use of calibrated data, the proportion of schoolchildren with fiber intake below the recommended level increased. Therefore, calibrated data can be used to obtain adjusted associations due to the reclassification of subjects within the predetermined categories.
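As a rough illustration of regression calibration of this kind, the Python sketch below regresses the mean of two 24-hour recalls on the FFQ value in a calibration subsample and uses the fitted line to correct FFQ reports; the linear form and all numbers are made-up illustrative assumptions, not data from the study.

    import numpy as np

    # Illustrative calibration subsample: FFQ-reported intake and the mean of two 24HR recalls
    # (values invented for the example, e.g. fiber in g/day).
    ffq = np.array([18.0, 25.0, 30.0, 22.0, 35.0, 28.0, 15.0, 40.0])
    recall_mean = np.array([14.0, 19.0, 24.0, 18.0, 27.0, 21.0, 12.0, 30.0])

    # Calibration regression of the reference measurement (24HR mean) on the FFQ value.
    slope, intercept = np.polyfit(ffq, recall_mean, 1)

    def calibrate(ffq_value):
        """Replace an FFQ report by its regression-calibrated value."""
        return intercept + slope * ffq_value

    print(calibrate(np.array([20.0, 33.0])))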
Abstract:
We have performed multicanonical simulations to study the critical behavior of the two-dimensional Ising model with dipole interactions. This study concerns the thermodynamic phase transitions in the range of the interaction parameter delta where the phase characterized by striped configurations of width h = 1 is observed. Controversial results obtained from local update algorithms have been reported for this region, including the claimed existence of a second-order phase transition line that becomes first order above a tricritical point located somewhere between delta = 0.85 and 1. Our analysis relies on the complex partition function zeros obtained with high statistics from multicanonical simulations. Finite-size scaling relations for the leading partition function zeros yield critical exponents that are clearly consistent with a single second-order phase transition line, thus excluding such a tricritical point in that region of the phase diagram. This conclusion is further supported by analysis of the specific heat and the susceptibility of the orientational order parameter.
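For reference, the finite-size scaling relation typically used for the leading (closest to the real axis) partition function zero z_1 in such analyses is shown below; the exact observable, lattice sizes, and correction terms used by the authors are not reproduced here.

    \[ \operatorname{Im}\, z_1(L) \sim L^{-1/\nu}, \]

so that the correlation-length exponent nu is read off from the slope of log Im z_1(L) versus log L.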
Abstract:
Transplantation brings hope for many patients. A multidisciplinary approach in this field aims at creating biologically functional tissues to be used as implants and prostheses. The freeze-drying process allows the fundamental properties of these materials to be preserved, making future manipulation and storage easier. Optimizing a freeze-drying cycle is of great importance, since it aims at reducing the costs of this time- and energy-consuming process while increasing product quality. Mathematical modeling is a tool that helps in understanding the behavior of the process variables and consequently supports optimization studies. Freeze-drying microscopy is a technique usually applied to determine critical temperatures of liquid formulations; in this work it was used to determine the sublimation rates during the freeze-drying of a biological tissue. The sublimation rates were measured from the speed of the moving interface between the dried and the frozen layers at 21.33, 42.66 and 63.99 Pa. The studied variables were used in a theoretical model to simulate various temperature profiles of the freeze-drying process. Good agreement between the experimental and the simulated results was found.
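One common way of relating the measured interface speed to a sublimation mass flux in freeze-drying models is indicated below purely for orientation; the densities involved and the exact theoretical model used in this work are not specified in the abstract, so this relation should be read as an assumption rather than as the authors' equation.

    \[ \dot{m} \approx \left( \rho_{\mathrm{frozen}} - \rho_{\mathrm{dried}} \right) \frac{dX}{dt}, \]

where X(t) is the position of the moving interface between the dried and the frozen layers and \dot{m} is the sublimated mass per unit area per unit time.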
Abstract:
This work describes a methodology to simulate free-surface incompressible multiphase flows. This novel methodology allows the simulation of multiphase flows with an arbitrary number of phases, each of them having different densities and viscosities. Surface and interfacial tension effects are also included. The numerical technique is based on the GENSMAC front-tracking method. The velocity field is computed using a finite-difference discretization of a modification of the Navier-Stokes equations. These equations, together with the continuity equation, are solved for two-dimensional multiphase flows with different densities and viscosities in the different phases. The governing equations are solved on a regular Eulerian grid, and a Lagrangian mesh is employed to track free surfaces and interfaces. The method is validated by comparing numerical results with analytic results for a number of simple problems; it was also employed to simulate complex problems for which no analytic solutions are available. The method presented in this paper has been shown to be robust and computationally efficient. Copyright (c) 2012 John Wiley & Sons, Ltd.
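Schematically, the variable-property incompressible system referred to above has the form below, with surface and interfacial tension entering as a force term concentrated at the tracked interfaces; the specific GENSMAC-based modification of the momentum equation used by the authors is not reproduced here.

    \[ \rho \left( \frac{\partial \mathbf{u}}{\partial t} + \mathbf{u}\cdot\nabla\mathbf{u} \right) = -\nabla p + \nabla \cdot \left[ \mu \left( \nabla\mathbf{u} + \nabla\mathbf{u}^{T} \right) \right] + \rho\,\mathbf{g} + \mathbf{f}_{\mathrm{st}}, \qquad \nabla\cdot\mathbf{u} = 0, \]

where rho and mu take different constant values in each phase and f_st denotes the surface/interfacial tension force.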
Abstract:
We experimentally revisit a low-cost multiparameter monitoring technique for optical performance monitoring based on low-frequency polarization modulation. A simplified calibration procedure, which significantly reduces the mathematical complexity and the processing effort, is proposed. Validation is achieved by carrying out relative optical power, wavelength, and differential group delay measurements. (C) 2012 Wiley Periodicals, Inc. Microwave Opt Technol Lett 54:1820-1824, 2012; View this article online at wileyonlinelibrary.com. DOI 10.1002/mop.26956