848 results for Mathematical formulation
Abstract:
The main objective of this PhD was to further develop Bayesian spatio-temporal models (specifically the Conditional Autoregressive (CAR) class of models) for the analysis of sparse disease outcomes such as birth defects. The motivation for the thesis arose from problems encountered when analyzing a large birth defect registry in New South Wales. The specific components and related research objectives of the thesis were developed from gaps in the literature on current formulations of the CAR model, and from health service planning requirements. Data from a large probabilistically-linked database from 1990 to 2004, consisting of fields from two separate registries, the Birth Defect Registry (BDR) and the Midwives Data Collection (MDC), were used in the analyses in this thesis. The main objective was split into smaller goals. The first goal was to determine how the specification of the neighbourhood weight matrix would affect the smoothing properties of the CAR model, and this is the focus of Chapter 6. Secondly, I hoped to evaluate the usefulness of incorporating a zero-inflated Poisson (ZIP) component as well as a shared-component model for modeling a sparse outcome, and this is carried out in Chapter 7. The third goal was to identify optimal sampling and sample size schemes designed to select individual-level data for a hybrid ecological spatial model, and this is done in Chapter 8. Finally, I wanted to put together the earlier improvements to the CAR model and, along with demographic projections, provide forecasts for birth defects at the Statistical Local Area (SLA) level; Chapter 9 describes how this is done. For the first objective, I examined a series of neighbourhood weight matrices, and showed how smoothing the relative risk estimates according to similarity in an important covariate (i.e. maternal age) helped improve the model's ability to recover the underlying risk, as compared to the traditional adjacency (specifically the Queen) method of applying weights. Next, to address the sparseness and excess zeros commonly encountered in the analysis of rare outcomes such as birth defects, I compared several models, including an extension of the usual Poisson model to encompass excess zeros in the data. This was achieved via a mixture model, which also encompassed the shared-component model to improve the estimation of sparse counts by borrowing strength, through a shared component (e.g. latent risk factors), from the referent outcome (caesarean section was used in this example). Using the Deviance Information Criterion (DIC), I showed how the proposed model performed better than the usual models, but only when both outcomes shared a strong spatial correlation. The next objective involved identifying the optimal sampling and sample size strategy for incorporating individual-level data with areal covariates in a hybrid study design. I performed extensive simulation studies, evaluating thirteen different sampling schemes along with variations in sample size. This was done in the context of an ecological regression model that incorporated spatial correlation in the outcomes, as well as accommodating both individual and areal measures of covariates. Using the Average Mean Squared Error (AMSE), I showed how a simple random sample of 20% of the SLAs, followed by selecting all cases in the SLAs chosen, along with an equal number of controls, provided the lowest AMSE. The final objective involved combining the improved spatio-temporal CAR model with population (i.e. women) forecasts to provide 30-year annual estimates of birth defects at the SLA level in New South Wales, Australia. The projections were illustrated using sixteen different SLAs, representing the various areal measures of socio-economic status and remoteness. A sensitivity analysis of the assumptions used in the projection was also undertaken. By the end of the thesis, I will show how challenges in the spatial analysis of rare diseases such as birth defects can be addressed, by specifically formulating the neighbourhood weight matrix to smooth according to a key covariate (i.e. maternal age), incorporating a ZIP component to model excess zeros in outcomes, and borrowing strength from a referent outcome (i.e. caesarean counts). An efficient strategy to sample individual-level data, along with sample size considerations for rare diseases, will also be presented. Finally, projections of birth defect categories at the SLA level will be made.
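To make the first idea concrete, here is a minimal sketch (not the thesis's code) of a neighbourhood weight matrix that down-weights Queen-adjacent areas whose mean maternal age differs, together with the corresponding proper-CAR precision matrix. The Gaussian kernel, the bandwidth tau, the parameters rho and tau2, and the toy data are all illustrative assumptions.

```python
# Sketch only: covariate-similarity weights for a CAR prior (assumed kernel form).
import numpy as np

def covariate_weights(adjacency, maternal_age, tau=2.0):
    """Weight each Queen-adjacent pair by similarity in mean maternal age."""
    n = adjacency.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adjacency[i, j]:
                # Gaussian kernel on the covariate difference (an assumed choice)
                W[i, j] = np.exp(-((maternal_age[i] - maternal_age[j]) ** 2) / (2.0 * tau ** 2))
    return W

def car_precision(W, rho=0.9, tau2=1.0):
    """Precision matrix of a proper CAR prior: Q = (D - rho * W) / tau2."""
    D = np.diag(W.sum(axis=1))
    return (D - rho * W) / tau2

# Toy usage with four areas and made-up mean maternal ages.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]])
age = np.array([27.0, 31.5, 30.8, 24.2])
Q = car_precision(covariate_weights(adj, age))
```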
Abstract:
This paper combines experimental data with simple mathematical models to investigate the influence of spray formulation type and leaf character (wettability) on the shatter, bounce and adhesion of droplets impacting cotton, rice and wheat leaves. Impaction criteria that allow for different angles of the leaf surface and the droplet impact trajectory are presented; their predictions are based on whether combinations of droplet size and velocity lie above or below bounce and shatter boundaries. In the experimental component, real leaves are used, with all their inherent natural variability. Further, commercial agricultural spray nozzles are employed, resulting in a range of droplet characteristics. Given this natural variability, there is broad agreement between the data and predictions. As predicted, droplet shatter was found to increase as droplet size and velocity increased and as the surface became harder to wet. Bouncing of droplets occurred most frequently on hard-to-wet surfaces with high-surface-tension mixtures. On the other hand, a number of small droplets with low impact velocity were observed to bounce when predicted to lie well within the adhering regime. We believe this discrepancy between the predictions and experimental data could be due to air-layer effects that are not taken into account in the current bounce equations. Other discrepancies between experiment and theory are thought to be due to the current assumption of a dry impact surface, whereas, in practice, the leaf surfaces became increasingly covered with fluid throughout the spray test runs.
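As a rough illustration of how impaction criteria of this kind can be applied, a droplet can be classified by its Weber number against assumed limits. The thresholds and the single-number criterion below are placeholder assumptions, not the paper's fitted boundaries, which also depend on wettability, formulation and impact angle.

```python
# Illustrative only: classify a droplet impact by Weber number against assumed limits.
def weber_number(diameter_m, velocity_ms, density=1000.0, surface_tension=0.072):
    """We = rho * v^2 * d / sigma for a water-like droplet."""
    return density * velocity_ms ** 2 * diameter_m / surface_tension

def classify_impact(diameter_m, velocity_ms, bounce_limit=5.0, shatter_limit=80.0):
    we = weber_number(diameter_m, velocity_ms)
    if we > shatter_limit:
        return "shatter"   # impact energy exceeds what the droplet can dissipate
    if we > bounce_limit:
        return "bounce"    # intermediate energy, most likely on a hard-to-wet surface
    return "adhere"        # low size/velocity: droplet sticks

print(classify_impact(200e-6, 2.0))   # a 200 micron droplet impacting at 2 m/s
```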
Abstract:
The formulation of higher-order structural models and their discretization using the finite element method is difficult owing to their complexity, especially in the presence of non-linearities. In this work a new algorithm for automating the formulation and assembly of hyperelastic higher-order structural finite elements is developed. A hierarchic series of kinematic models is proposed for modeling structures with special geometries, and the algorithm is formulated to automate the study of this class of higher-order structural models. The algorithm developed in this work sidesteps the need for an explicit derivation of the governing equations for the individual kinematic modes. Using a novel procedure involving a nodal degree-of-freedom based automatic assembly algorithm, automatic differentiation and higher-dimensional quadrature, the relevant finite element matrices are computed directly from the variational statement of elasticity and the higher-order kinematic model. Another significant feature of the proposed algorithm is that natural boundary conditions are handled implicitly for arbitrary higher-order kinematic models. The validity of the algorithm is illustrated with examples involving linear elasticity and hyperelasticity. (C) 2013 Elsevier Inc. All rights reserved.
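A compressed sketch of the central idea follows: compute an element's internal force vector and tangent stiffness directly from a stored-energy functional by automatic differentiation, rather than deriving them by hand. This is not the paper's algorithm; the 1D two-node bar element, the toy energy density and the one-point quadrature are assumptions for illustration.

```python
# Sketch: element residual and tangent stiffness via automatic differentiation (JAX).
import jax
import jax.numpy as jnp

def element_energy(u, x0=0.0, x1=1.0, area=1.0, mu=1.0):
    """Strain energy of a two-node bar with a simple hyperelastic energy density."""
    length = x1 - x0
    stretch = 1.0 + (u[1] - u[0]) / length            # lambda = 1 + du/dx
    psi = 0.5 * mu * (stretch ** 2 - 1.0 - 2.0 * jnp.log(stretch))
    return psi * area * length                         # one-point quadrature

internal_force = jax.grad(element_energy)              # residual f = dPsi/du
tangent_stiffness = jax.hessian(element_energy)        # stiffness K = d2Psi/du2

u = jnp.array([0.0, 0.1])
print(internal_force(u))
print(tangent_stiffness(u))
```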
Abstract:
Classical mechanics is deceptively simple. It is surprisingly easy to get the right answer with fallacious reasoning or without real understanding. To address this problem we use computational techniques to communicate a deeper understanding of Classical Mechanics. Computational algorithms are used to express the methods used in the analysis of dynamical phenomena. Expressing the methods in a computer language forces them to be unambiguous and computationally effective. The task of formulating a method as a computer-executable program and debugging that program is a powerful exercise in the learning process. Also, once formalized procedurally, a mathematical idea becomes a tool that can be used directly to compute results.
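As a small illustration of turning a mechanics procedure into an executable program (our own sketch, not drawn from the text), the Euler-Lagrange equation can be produced mechanically from a Lagrangian with a computer algebra system; the harmonic oscillator is used purely as an example.

```python
# Sketch: derive the equation of motion from a Lagrangian with SymPy.
import sympy as sp

t = sp.symbols('t')
m, k = sp.symbols('m k', positive=True)
q = sp.Function('q')

# Lagrangian L = T - V for a mass on a spring
L = sp.Rational(1, 2) * m * sp.diff(q(t), t) ** 2 - sp.Rational(1, 2) * k * q(t) ** 2

# Euler-Lagrange equation: d/dt(dL/dq') - dL/dq = 0
eom = sp.diff(sp.diff(L, sp.diff(q(t), t)), t) - sp.diff(L, q(t))
print(sp.simplify(eom))   # m*q''(t) + k*q(t)
```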
Abstract:
The purpose of this study was to mathematically characterize the effects of defined experimental parameters (probe speed and the ratio of the probe diameter to the diameter of the sample container) on the textural/mechanical properties of model gel systems. In addition, this study examined the applicability of dimensional analysis for the rheological interpretation of textural data in terms of shear stress and rate of shear. Aqueous gels (pH 7) were prepared containing 15% w/w poly(methylvinylether-co-maleic anhydride) and poly(vinylpyrrolidone) (PVP) (0, 3, 6, or 9% w/w). Texture profile analysis (TPA) was performed using a Stable Micro Systems texture analyzer (model TA-XT 2; Surrey, UK) in which an analytical probe was twice compressed into each formulation to a defined depth (15 mm) and at defined rates (1, 3, 5, 8, and 10 mm s⁻¹), allowing a delay period (15 s) between the end of the first and the beginning of the second compression. Flow rheograms were obtained using a Carri-Med CSL2-100 rheometer (TA Instruments, Surrey, UK) with parallel plate geometry under controlled shearing stresses at 20.0 ± 0.1°C. All formulations exhibited pseudoplastic flow with no thixotropy. Increasing concentrations of PVP significantly increased formulation hardness, compressibility, adhesiveness, and consistency. Increased hardness, compressibility, and consistency were ascribed to enhanced polymeric entanglements, thereby increasing the resistance to deformation. Increasing probe speed increased formulation hardness in a linear manner, because of the effects of probe speed on probe displacement and surface area. The relationship between formulation hardness and probe displacement was linear and was dependent on probe speed. Furthermore, the proportionality constant (gel strength) increased as a function of PVP concentration. The relationship between formulation hardness and diameter ratio was biphasic and was statistically defined by two linear relationships relating to diameter ratios from 0 to 0.4 and from 0.4 to 0.563. The dramatically increased hardness associated with diameter ratios in excess of 0.4 was attributed to boundary effects, that is, the effect of the container wall on product flow. Using dimensional analysis, the hardness and probe displacement in TPA were mathematically transformed into corresponding rheological parameters, namely shearing stress and rate of shear, thereby allowing the application of the power law (τ = kγ̇^n) to textural data. Importantly, the consistencies (k) of the formulations, calculated using transformed textural data, were statistically similar to those obtained using flow rheometry. In conclusion, this study has, firstly, characterized the relationships between textural data and two key instrumental parameters in TPA and, secondly, described a method by which rheological information may be derived using this technique. This will enable a greater application of TPA for the rheological characterization of pharmaceutical gels and, in addition, will enable efficient interpretation of textural data obtained under different experimental parameters.
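For readers who want to reproduce the power-law step numerically, a brief sketch follows; the shear data below are invented, and a log-log linear regression is one standard way to recover the consistency k and flow index n in τ = kγ̇^n.

```python
# Sketch: fit the Ostwald-de Waele power law to (shear rate, shear stress) data.
import numpy as np

shear_rate = np.array([1.0, 5.0, 10.0, 50.0, 100.0])       # 1/s (illustrative)
shear_stress = np.array([12.0, 35.0, 55.0, 160.0, 250.0])  # Pa  (illustrative)

# log(tau) = log(k) + n * log(gamma_dot) -> straight-line fit in log-log space
n, log_k = np.polyfit(np.log(shear_rate), np.log(shear_stress), 1)
k = np.exp(log_k)
print(f"consistency k = {k:.1f} Pa s^n, flow index n = {n:.2f}")
```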
Abstract:
We consider the problem of scattering of time-harmonic acoustic waves by an unbounded sound-soft rough surface. Recently, a Brakhage-Werner type integral equation formulation of this problem has been proposed, based on an ansatz as a combined single- and double-layer potential, but replacing the usual fundamental solution of the Helmholtz equation with an appropriate half-space Green's function. Moreover, it has been shown in the three-dimensional case that this integral equation is uniquely solvable in the space L^2(Gamma) when the scattering surface Gamma does not differ too much from a plane. In this paper, we show that this integral equation is uniquely solvable with no restriction on the surface elevation or slope. Moreover, we construct explicit bounds on the inverse of the associated boundary integral operator, as a function of the wave number, the parameter coupling the single- and double-layer potentials, and the maximum surface slope. These bounds show that the norm of the inverse operator is bounded uniformly in the wave number, kappa, for kappa > 0, if the coupling parameter eta is chosen proportional to the wave number. In the case when Gamma is a plane, we show that the choice eta = kappa/2 is nearly optimal in terms of minimizing the condition number.
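For context, the Brakhage-Werner ansatz referred to above seeks the scattered field as a combined double- and single-layer potential. In our own notation (which may differ from the paper's sign and scaling conventions), with G_h the half-space Green's function, eta the coupling parameter and phi the unknown density, it reads:

```latex
u^{s}(x) \;=\; \int_{\Gamma}\left(\frac{\partial G_h(x,y)}{\partial n(y)} \;-\; i\,\eta\, G_h(x,y)\right)\varphi(y)\,\mathrm{d}s(y).
```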
Abstract:
The long-time evolution of disturbances to slowly varying solutions of partial differential equations is subject to the adiabatic invariance of the wave action. Generally, this approximate conservation law is obtained under the assumption that the partial differential equations are derived from a variational principle or have a canonical Hamiltonian structure. Here, wave action conservation is examined for equations that possess a non-canonical (Poisson) Hamiltonian structure. The linear evolution of disturbances in the form of slowly varying wavetrains is studied using a WKB expansion. The properties of the original Hamiltonian system strongly constrain the linear equations that are derived, and this is shown to lead to the adiabatic invariance of a wave action. The connection between this (approximate) invariance and the (exact) conservation laws of pseudo-energy and pseudomomentum that exist when the basic solution is exactly time and space independent is discussed. An evolution equation for the slowly varying phase of the wavetrain is also derived and related to Berry's phase.
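In the standard WKB setting (notation ours, not the paper's), the adiabatic invariance mentioned above takes the familiar form of a local conservation law for the wave action, with E the wavetrain energy density, \hat{\omega} the intrinsic frequency and \mathbf{c}_g the group velocity:

```latex
\frac{\partial A}{\partial t} + \nabla\cdot\left(\mathbf{c}_g\,A\right) = 0,
\qquad A = \frac{E}{\hat{\omega}}.
```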
Abstract:
Cholesterol is one of the key constituents for maintaining the cellular membrane and thus the integrity of the cell itself. In contrast, high levels of cholesterol in the blood are known to be a major risk factor in the development of cardiovascular disease. We formulate a deterministic nonlinear ordinary differential equation model of the sterol regulatory element binding protein 2 (SREBP-2) cholesterol genetic regulatory pathway in a hepatocyte. The mathematical model includes a description of gene transcription activated by SREBP-2, with the resulting mRNA subsequently translated to form 3-hydroxy-3-methylglutaryl coenzyme A reductase (HMGCR), a key enzyme in cholesterol synthesis. Cholesterol synthesis subsequently leads to the regulation of SREBP-2 via a negative feedback formulation. Parameterised with data from the literature, the model is used to understand how SREBP-2 transcription and regulation affects cellular cholesterol concentration. Model stability analysis shows that the only positive steady state of the system exhibits purely oscillatory, damped oscillatory or monotonic behaviour under certain parameter conditions. In light of our findings we postulate how cholesterol homeostasis is maintained within the cell, and the advantages of our model formulation are discussed with respect to other models of genetic regulation within the literature.
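The following is a deliberately simplified caricature of the negative-feedback loop described above, not the paper's parameterised model: SREBP-2 activity S drives HMGCR mRNA M and enzyme H, which drives cholesterol C, which in turn represses S. All rate constants and the Hill-type repression term are illustrative assumptions.

```python
# Sketch: minimal SREBP-2 / HMGCR / cholesterol negative-feedback ODE system.
from scipy.integrate import solve_ivp

def srebp2_feedback(t, y, k=(1.0, 0.8, 0.6, 0.5), d=(0.3, 0.4, 0.2, 0.25), K=1.0):
    S, M, H, C = y
    dS = k[0] / (1.0 + (C / K) ** 2) - d[0] * S    # cholesterol represses SREBP-2 activity
    dM = k[1] * S - d[1] * M                        # transcription of HMGCR mRNA
    dH = k[2] * M - d[2] * H                        # translation into HMGCR enzyme
    dC = k[3] * H - d[3] * C                        # cholesterol synthesis and turnover
    return [dS, dM, dH, dC]

sol = solve_ivp(srebp2_feedback, (0.0, 200.0), [1.0, 0.0, 0.0, 0.0])
print(sol.y[:, -1])   # state approached at t = 200 (damped oscillations are possible)
```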
Abstract:
The third law of thermodynamics is formulated precisely: all points of the state space of zero temperature Γ(0) are physically adiabatically inaccessible from the state space of a simple system. In addition to implying the unattainability of absolute zero in finite time (or "by a finite number of operations"), it admits as a corollary, under a continuity assumption, that all points of Γ(0) are adiabatically equivalent. We argue that the third law is universally valid for all macroscopic systems which obey the laws of quantum mechanics and/or quantum field theory. We also briefly discuss why a precise formulation of the third law for black holes remains an open problem.
Abstract:
The conventional Newton and fast decoupled power flow (FDPF) methods have been considered inadequate for obtaining the maximum loading point of power systems due to ill-conditioning problems at and near this critical point. It is well known that the P-V and Q-theta decoupling assumptions of the fast decoupled power flow formulation no longer hold in the vicinity of the critical point. Moreover, the Jacobian matrix of the Newton method becomes singular at this point. However, the maximum loading point can be efficiently computed through the parameterization techniques of continuation methods. In this paper it is shown that, by using either theta or V as a parameter, the new fast decoupled power flow versions (XB and BX) become adequate for the computation of the maximum loading point with only a few small modifications. The possible use of the reactive power injection at a selected PV bus (Q(PV)) as the continuation parameter (mu) for the computation of the maximum loading point is also shown. A trivial secant predictor, the modified zero-order polynomial, which uses the current solution and a fixed increment in the parameter (V, theta, or mu) as an estimate for the next solution, is used in the predictor step. These new versions are compared to each other with the purpose of pointing out their features, as well as the influence of reactive power and transformer tap limits. The results obtained with the new approach for the IEEE test systems (14, 30, 57 and 118 buses) are presented and discussed in the companion paper. The results show that the characteristics of the conventional method are enhanced and the region of convergence around the singular solution is enlarged. In addition, it is shown that parameters can be switched during the tracing process in order to efficiently determine all the PV curve points with few iterations. (C) 2003 Elsevier B.V. All rights reserved.
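The predictor described above is simple enough to state in a few lines. The sketch below (names and toy numbers are ours) keeps the current solution as the estimate of the next one and advances only the continuation parameter by a fixed increment, leaving the corrector to a subsequent power flow solution at the predicted parameter value.

```python
# Sketch: the "modified zero-order polynomial" (trivial secant) predictor step.
import numpy as np

def zero_order_predictor(state, parameter, step):
    """Predict the next continuation point: same state, parameter advanced by a fixed step."""
    return state.copy(), parameter + step

# Toy usage: state = stacked voltage magnitudes and angles, parameter = loading factor mu.
x = np.array([1.02, 0.98, -0.05, -0.12])
x_pred, mu_pred = zero_order_predictor(x, 1.30, 0.05)
# A corrector (e.g. an XB/BX fast decoupled solution with mu fixed at mu_pred) would follow.
```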
Abstract:
Recently, the Hamilton-Jacobi formulation for first-order constrained systems has been developed. In this formalism the equations of motion are written as total differential equations in many variables. We generalize the Hamilton-Jacobi formulation to singular systems with second-order Lagrangians and apply this new formulation to Podolsky electrodynamics, comparing the results with those obtained through Dirac's method.
Abstract:
Most consumers consider the fat of chicken meat undesirable for a healthy diet, due to its high levels of saturated fatty acids and cholesterol. The purpose of this experiment was to investigate the influence of changes in dietary metabolizable energy level, associated with a proportional variation in nutrient density, on broiler chicken performance and on the lipid composition of the meat. Male and female Cobb 500 broilers were evaluated separately. The performance evaluation followed a completely randomized design with a 6x3 factorial arrangement: six energy levels (2,800, 2,900, 3,000, 3,100, 3,200 and 3,300 kcal/kg) and three slaughter ages (42, 49 and 56 days). Response surface methodology was used to establish a mathematical model to explain live weight, feed intake and feed conversion behavior. Total lipids and cholesterol were determined in skinned breast meat and in thigh meat, with and without skin. For the lipid composition analysis, a 3x3x2 factorial arrangement in a completely randomized design was used: three dietary metabolizable energy levels (2,800, 3,000 and 3,300 kcal/kg), three slaughter ages (42, 49 and 56 days) and two sexes. Reducing the dietary metabolizable energy down to about 3,000 kcal/kg did not affect live weight but, below this value, live weight decreased. Feed intake was lower when the dietary energy level was higher. Feed conversion improved in direct proportion to the increase in the energy level of the diet. The performance of all birds was within the range considered appropriate for the lineage. Breast meat had less total lipids and cholesterol than thigh meat. Thigh meat with skin had more than double the total lipids of skinned thigh meat, but the cholesterol content did not differ with the removal of the skin, suggesting that cholesterol content is not associated with the subcutaneous fat. Intramuscular fat content was lower in the meat from birds fed diets with lower energy levels. These results may help to define the most appropriate nutritional management. Despite the decrease in the birds' productive performance, restricting the energy in broiler chicken feed may be a viable alternative if consumers are willing to pay more for meat with less fat.
Abstract:
A deep theoretical analysis of the graph cut image segmentation framework presented in this paper simultaneously translates into important contributions in several directions. The most important practical contribution of this work is a full theoretical description, and implementation, of a novel powerful segmentation algorithm, GC(max). The output of GC(max) coincides with a version of a segmentation algorithm known as Iterative Relative Fuzzy Connectedness, IRFC. However, GC(max) is considerably faster than the classic IRFC algorithm, which we prove theoretically and show experimentally. Specifically, we prove that, in the worst case scenario, the GC(max) algorithm runs in linear time with respect to the variable M = |C| + |Z|, where |C| is the image scene size and |Z| is the size of the allowable range, Z, of the associated weight/affinity function. For most implementations, Z is identical to the set of allowable image intensity values, and its size can be treated as small with respect to |C|, meaning that O(M) = O(|C|). In such a situation, GC(max) runs in linear time with respect to the image size |C|. We show that the output of GC(max) constitutes a solution of a graph cut energy minimization problem, in which the energy is defined as the ℓ_∞ norm ‖F_P‖_∞ of the map F_P that associates, with every element e from the boundary of an object P, its weight w(e). This formulation brings IRFC algorithms to the realm of the graph cut energy minimizers, with energy functions ‖F_P‖_q for q ∈ [1,∞]. Of these, the best known minimization problem is for the energy ‖F_P‖_1, which is solved by the classic min-cut/max-flow algorithm, referred to often as the Graph Cut algorithm. We notice that a minimization problem for ‖F_P‖_q, q ∈ [1,∞), is identical to that for ‖F_P‖_1 when the original weight function w is replaced by w^q. Thus, any algorithm GC(sum) solving the ‖F_P‖_1 minimization problem also solves the one for ‖F_P‖_q with q ∈ [1,∞), so just two algorithms, GC(sum) and GC(max), are enough to solve all ‖F_P‖_q-minimization problems. We also show that, for any fixed weight assignment, the solutions of the ‖F_P‖_q-minimization problems converge to a solution of the ‖F_P‖_∞-minimization problem (‖F_P‖_∞ = lim_{q→∞} ‖F_P‖_q is not enough to deduce that). An experimental comparison of the performance of the GC(max) and GC(sum) algorithms is included. This concentrates on comparing the actual (as opposed to provable worst-scenario) running times of the algorithms, as well as the influence of the choice of seeds on the output.
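To spell out the reduction used above (in our notation, with bd(P) the boundary of the object P): the q-th power of the ℓ_q energy is a sum of q-th powers of the boundary weights, so minimising the ℓ_q energy with weights w is the same problem as minimising the ℓ_1 energy with weights w^q:

```latex
\|F_P\|_q^{\,q} \;=\; \sum_{e\,\in\,\mathrm{bd}(P)} w(e)^q,
\qquad q \in [1,\infty).
```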