814 results for parabolic problems
Abstract:
The psychiatric and psychosocial evaluation of the heart transplant candidate can identify particular predictors of postoperative problems. These factors, as identified during the comprehensive evaluation phase, provide an assessment of the candidate in the context of the proposed transplantation protocol. Previous issues with compliance, substance abuse, and psychosis are clear indicators of postoperative problems. The prolonged waiting-list time provides an additional period to evaluate and support patients who have a terminal disease, need a heart transplant, and are undergoing prolonged hospitalization. Following transplantation, the patient faces the additional challenges of a new self-image, multiple concerns, anxiety, and depression. Ultimately, the success of the heart transplantation remains dependent upon the recipient's ability to cope psychologically and comply with the medication regimen. The limited resource of donor hearts and the high emotional and financial cost of heart transplantation lead to an exhaustive effort to select those patients who will benefit from the improved physical health the heart transplant confers.
Abstract:
Variational data assimilation is commonly used in environmental forecasting to estimate the current state of the system from a model forecast and observational data. The assimilation problem can be written simply as a nonlinear least-squares optimization problem. However, the practical solution of the problem in large systems requires many careful implementation choices. In this article we present the theory of variational data assimilation and then discuss in detail how it is implemented in practice. Current solutions and open questions are discussed.
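To make the least-squares formulation above concrete, here is a minimal 3D-Var-style sketch in Python; the toy dimensions, covariance matrices and the mildly nonlinear observation operator H are illustrative assumptions, not taken from the article:

import numpy as np
from scipy.optimize import minimize

# Hypothetical toy problem: 3 state variables, 2 observations.
xb = np.array([1.0, 0.5, -0.2])      # background (model forecast)
B = np.diag([0.5, 0.5, 0.5])         # background error covariance
R = np.diag([0.1, 0.1])              # observation error covariance
y = np.array([1.2, 0.3])             # observations

def H(x):
    # Illustrative (mildly nonlinear) observation operator.
    return np.array([x[0] + 0.1 * x[1] ** 2, x[2]])

Binv, Rinv = np.linalg.inv(B), np.linalg.inv(R)

def cost(x):
    # Variational cost: background misfit plus observation misfit.
    db, do = x - xb, y - H(x)
    return 0.5 * db @ Binv @ db + 0.5 * do @ Rinv @ do

xa = minimize(cost, xb).x            # analysis state
print("analysis:", xa)

Minimising this cost with a gradient-based solver is the small-scale analogue of the large-system implementation choices the article discusses.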
Abstract:
An apoptotic process is studied using biological models described by a system of differential equations derived from reaction-kinetics information. The mathematical model is reformulated in a state-space robust control theory framework in which parametric and dynamic uncertainty can be modelled to account for variations naturally occurring in biological processes. We propose to handle the nonlinearities using neural networks.
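To make the reaction-kinetics setting concrete, the following is a minimal sketch in Python of a hypothetical two-species caspase-activation model; the species, rate constants and equations are illustrative assumptions, not the model analysed in the paper:

import numpy as np
from scipy.integrate import solve_ivp

k1, k2 = 0.5, 0.1                      # hypothetical activation and degradation rates

def rhs(t, z):
    pro, act = z                       # procaspase, active caspase
    dpro = -k1 * pro * act             # autocatalytic activation consumes procaspase
    dact = k1 * pro * act - k2 * act   # activation minus degradation
    return [dpro, dact]

sol = solve_ivp(rhs, (0.0, 50.0), [1.0, 0.01])
print(sol.y[:, -1])                    # final concentrations

Parametric uncertainty of the kind the paper models would enter here as bounded perturbations of k1 and k2.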
Abstract:
In this review I summarise some of the most significant advances of the last decade in the analysis and solution of boundary value problems for integrable partial differential equations in two independent variables. These equations arise widely in mathematical physics, and in order to model realistic applications, it is essential to consider bounded domains and inhomogeneous boundary conditions. I focus specifically on a general and widely applicable approach, usually referred to as the Unified Transform or Fokas Transform, which provides a substantial generalisation of the classical Inverse Scattering Transform. This approach preserves the conceptual efficiency and aesthetic appeal of the more classical transform approaches, but presents a distinctive and important difference. While the Inverse Scattering Transform follows the "separation of variables" philosophy, albeit in a nonlinear setting, the Unified Transform is based on the idea of synthesis, rather than separation, of variables. I will outline the main ideas in the case of linear evolution equations, and then illustrate their generalisation to certain nonlinear cases of particular significance.
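For orientation, a brief LaTeX sketch of the synthesis idea in the simplest setting, a linear evolution equation with dispersion relation \omega(k) posed on the whole line (this is the classical Fourier case; the Unified Transform for boundary value problems replaces the real k-axis by contours in the complex k-plane and incorporates the boundary data, so the formulas below are not the review's half-line representations):

q_t + \omega(-i\partial_x)\, q = 0, \qquad q(x,0) = q_0(x),

q(x,t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{ikx - \omega(k)t}\, \hat q_0(k)\, dk,
\qquad
\hat q_0(k) = \int_{-\infty}^{\infty} e^{-ikx} q_0(x)\, dx.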
Abstract:
This paper describes a fast and reliable method for redistributing a computational mesh in three dimensions which can generate a complex three-dimensional mesh without any problems due to mesh tangling. The method relies on a three-dimensional implementation of the parabolic Monge–Ampère (PMA) technique for finding an optimally transported mesh. The method for implementing PMA is described in detail and applied to both static and dynamic mesh redistribution problems, studying both the convergence and the computational cost of the algorithm. The algorithm is applied to a series of problems of increasing complexity. In particular, very regular meshes are generated to resolve real meteorological features (derived from a weather forecasting model covering the UK area) in grids with over 2×10^7 degrees of freedom. The PMA method computes these grids in times commensurate with those required for operational weather forecasting.
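As a rough one-dimensional illustration of moving-mesh redistribution (via equidistribution of a monitor function, rather than the three-dimensional parabolic Monge–Ampère relaxation used in the paper), a minimal Python sketch with an illustrative monitor function:

import numpy as np

def equidistribute(monitor, n_nodes=41, n_fine=2001):
    # Place n_nodes on [0, 1] so that each cell carries equal monitor mass.
    xf = np.linspace(0.0, 1.0, n_fine)
    m = monitor(xf)
    cum = np.concatenate(([0.0], np.cumsum(0.5 * (m[1:] + m[:-1]) * np.diff(xf))))
    cum /= cum[-1]                          # normalised cumulative monitor mass
    return np.interp(np.linspace(0.0, 1.0, n_nodes), cum, xf)

# Hypothetical monitor concentrating nodes near a sharp feature at x = 0.5.
mesh = equidistribute(lambda x: 1.0 + 50.0 * np.exp(-200.0 * (x - 0.5) ** 2))
print(np.round(mesh, 3))

The PMA method plays the analogous role in three dimensions, evolving a scalar mesh potential whose gradient map transports a uniform mesh towards one that equidistributes the monitor function.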
Abstract:
We extend extreme learning machine (ELM) classifiers to complex Reproducing Kernel Hilbert Spaces (RKHS) where the input/output variables as well as the optimization variables are complex-valued. A new family of classifiers, called complex-valued ELM (CELM) and suitable for complex-valued multiple-input–multiple-output processing, is introduced. In the proposed method, the associated Lagrangian is computed using induced RKHS kernels, adopting a Wirtinger calculus approach formulated as a constrained optimization problem, similarly to the conventional ELM classifier formulation. When training the CELM, the Karush–Kuhn–Tucker (KKT) theorem is used to solve the dual optimization problem, which consists of simultaneously satisfying the smallest-training-error and smallest-output-weight-norm criteria. The proposed formulation also addresses aspects of quaternary classification within a Clifford algebra context. For 2D complex-valued inputs, user-defined complex-coupled hyper-planes divide the classifier input space into four partitions. For 3D complex-valued inputs, the formulation generates three pairs of complex-coupled hyper-planes through orthogonal projections. The six hyper-planes then divide the 3D space into eight partitions. It is shown that the CELM problem formulation is equivalent to solving six real-valued ELM tasks, which are induced by projecting the chosen complex kernel across the different user-defined coordinate planes. A classification example of powdered samples on the basis of their terahertz spectral signatures is used to demonstrate the advantages of the CELM classifiers compared to their SVM counterparts. The proposed classifiers retain the advantages of their ELM counterparts, in that they can perform multiclass classification with lower computational complexity than SVM classifiers. Furthermore, because of their ability to perform classification tasks quickly, the proposed formulations are of interest for real-time applications.
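For reference, a minimal real-valued ELM classifier sketch in Python (random hidden layer, output weights by regularised least squares); it illustrates the standard ELM that the paper extends to the complex RKHS setting, and the toy data and parameter choices are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, Y, n_hidden=50, C=1.0):
    # Random input weights and biases define the hidden layer.
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)
    # Output weights from regularised least squares (ridge solution).
    beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ Y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Illustrative two-class toy data with one-hot targets.
X = rng.standard_normal((200, 2))
labels = (X[:, 0] * X[:, 1] > 0).astype(int)
Y = np.eye(2)[labels]
W, b, beta = elm_train(X, Y)
pred = elm_predict(X, W, b, beta).argmax(axis=1)
print("training accuracy:", (pred == labels).mean())

The CELM of the paper works with induced complex RKHS kernels rather than a random real hidden layer, but this kind of closed-form solve for the output weights underlies the low training cost relative to SVMs noted in the abstract.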
Abstract:
Background: Mothers' self-reported stroking of their infants over the first weeks of life modifies the association between prenatal depression and physiological and emotional reactivity at 7 months, consistent with animal studies of the effects of tactile stimulation. We now investigate whether the effects of maternal stroking persist to 2.5 years. Given animal and human evidence for sex differences in the effects of prenatal stress, we compare associations in boys and girls. Method: From a general population sample of 1233 first-time mothers recruited at 20 weeks gestation we drew a random sample of 316 for assessment at 32 weeks, stratified by reported inter-partner psychological abuse, a risk indicator for child development. Of these mothers, 243 reported at 5 and 9 weeks how often they stroked their infants, and completed the Child Behavior Checklist (CBCL) at 2.5 years post-delivery. Results: There was a significant interaction between prenatal anxiety and maternal stroking in the prediction of CBCL internalizing (p = 0.001) and anxious/depressed scores (p < 0.001). The effects were stronger in females than males, and the three-way interaction prenatal anxiety × maternal stroking × sex of infant was significant for internalizing symptoms (p = 0.003). The interactions arose from an association between prenatal anxiety and internalizing symptoms only in the presence of low maternal stroking. Conclusions: The findings are consistent with stable epigenetic effects, many sex specific, reported in animal studies. While epigenetic mechanisms may underlie the associations, it remains to be established whether stroking affects gene expression in humans.
Abstract:
Atmospheric pollution over South Asia attracts special attention due to its effects on regional climate, the water cycle and human health. These effects are potentially growing owing to rising trends in anthropogenic aerosol emissions. In this study, the spatio-temporal aerosol distributions over South Asia from seven global aerosol models are evaluated against aerosol retrievals from NASA satellite sensors and ground-based measurements for the period 2000–2007. Overall, substantial underestimations of aerosol loading over South Asia are found systematically in most model simulations. Averaged over the entire South Asia region, the annual mean aerosol optical depth (AOD) is underestimated by 15 to 44% across models compared to MISR (Multi-angle Imaging SpectroRadiometer), which is the lowest bound among the various satellite AOD retrievals (from MISR, SeaWiFS (Sea-Viewing Wide Field-of-View Sensor), and MODIS (Moderate Resolution Imaging Spectroradiometer) Aqua and Terra). In particular, during the post-monsoon and wintertime periods (i.e., October–January), when agricultural waste burning and anthropogenic emissions dominate, models fail to capture AOD and aerosol absorption optical depth (AAOD) over the Indo-Gangetic Plain (IGP) compared to ground-based Aerosol Robotic Network (AERONET) sunphotometer measurements. The underestimations of aerosol loading in models generally occur in the lower troposphere (below 2 km), based on comparisons of aerosol extinction profiles calculated by the models with those from Cloud–Aerosol Lidar with Orthogonal Polarization (CALIOP) data. Furthermore, surface concentrations of all aerosol components (sulfate, nitrate, organic aerosol (OA) and black carbon (BC)) from the models are found to be much lower than in situ measurements in winter. Several possible causes for these common underestimations during the post-monsoon and wintertime periods are identified: aerosol hygroscopic growth and the formation of secondary inorganic aerosol are suppressed because relative humidity (RH) is biased far too low in the boundary layer and foggy conditions are poorly represented in current models; nitrate aerosol is either missing or inadequately accounted for; and emissions from agricultural waste burning and biofuel usage are too low in the emission inventories. These common problems and possible causes found in multiple models point out directions for future model improvements in this important region.
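As a small illustration of the kind of comparison described (a regional annual-mean AOD bias of a model against a satellite retrieval), a minimal Python sketch; the gridded fields and latitude band are hypothetical placeholders, not the actual model output or MISR data:

import numpy as np

nlat, nlon = 20, 30
lats = np.linspace(5.0, 35.0, nlat)                        # rough South Asia latitude band
rng = np.random.default_rng(1)
aod_satellite = 0.4 + 0.1 * rng.random((12, nlat, nlon))   # monthly AOD fields
aod_model = 0.7 * aod_satellite                            # stand-in for a low-biased model

w = np.cos(np.deg2rad(lats))[None, :, None]                # area weights ~ cos(latitude)

def regional_mean(field):
    return (field * w).sum() / (w * np.ones_like(field)).sum()

bias_pct = 100.0 * (regional_mean(aod_model) - regional_mean(aod_satellite)) / regional_mean(aod_satellite)
print(f"annual-mean AOD bias: {bias_pct:.1f}%")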
Abstract:
Contemporary research in generative second language (L2) acquisition has attempted to address observable target-deviant aspects of L2 grammars within a UG-continuity framework (e.g. Lardiere 2000; Schwartz 2003; Sprouse 2004; Prévost & White 1999, 2000). With the aforementioned in mind, the independence of pragmatic and syntactic development, independently observed elsewhere (e.g. Grodzinsky & Reinhart 1993; Lust et al. 1986; Pacheco & Flynn 2005; Serratrice, Sorace & Paoli 2004), becomes particularly interesting. In what follows, I examine the resetting of the Null-Subject Parameter (NSP) for English learners of L2 Spanish. I argue that insensitivity to the associated discourse-pragmatic constraints on the discursive distribution of overt/null subjects accounts for what appear to be errors stemming from syntactic deficits. It is demonstrated that, despite target-deviant performance, the majority of learners must have native-like syntactic competence, given their knowledge of the Overt Pronoun Constraint (Montalbetti 1984), a principle associated with the Spanish-type setting of the NSP.
Abstract:
We present and analyse a space–time discontinuous Galerkin method for wave propagation problems. The special feature of the scheme is that it is a Trefftz method, namely that trial and test functions are solutions of the partial differential equation to be discretised in each element of the (space–time) mesh. The method considered is a modification of the discontinuous Galerkin schemes of Kretzschmar et al. (2014) and of Monk & Richter (2005). For Maxwell’s equations in one space dimension, we prove stability of the method, quasi-optimality, best approximation estimates for polynomial Trefftz spaces and (fully explicit) error bounds of high order in the meshwidth and in the polynomial degree. The analysis framework also applies to scalar wave problems and Maxwell’s equations in higher space dimensions. Numerical experiments illustrate the theoretical results and the faster convergence compared with the non-Trefftz version of the scheme.
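To spell out the Trefftz property mentioned above in LaTeX, taking the one-dimensional second-order wave equation as a representative example (the paper works with the first-order Maxwell system, so this is an illustrative restatement rather than the paper's exact discrete spaces): on each space-time element K, the local trial and test functions v satisfy the PDE exactly,

\partial_{tt} v - c^2\, \partial_{xx} v = 0 \quad \text{in } K,

so for polynomial degree p a local Trefftz space can be taken as

\mathbb{T}^p(K) = \{\, f(x - ct) + g(x + ct) : f,\, g \ \text{polynomials of degree} \le p \,\},

with inter-element coupling enforced only weakly through numerical fluxes on the mesh skeleton.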