981 results for Stochastically constrained problems
Abstract:
This contribution proposes a novel probability density function (PDF) estimation-based over-sampling (PDFOS) approach for two-class imbalanced classification problems. The classical Parzen-window kernel function is adopted to estimate the PDF of the positive class. Then, according to the estimated PDF, synthetic instances are generated as additional training data. The essential concept is to re-balance the class distribution of the original imbalanced data set under the principle that the synthetic data samples follow the same statistical properties as the original positive class. Based on the over-sampled training data, the radial basis function (RBF) classifier is constructed by applying the orthogonal forward selection procedure, in which the classifier's structure and the parameters of the RBF kernels are determined using a particle swarm optimisation algorithm based on the criterion of minimising the leave-one-out misclassification rate. The effectiveness of the proposed PDFOS approach is demonstrated by an empirical study on several imbalanced data sets.
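As a minimal sketch of the core idea (a Parzen-window density estimate of the minority class, then sampling synthetic instances from that estimate), the snippet below uses SciPy's `gaussian_kde` on illustrative synthetic data. It is an assumption-laden toy, and it omits the RBF/PSO classifier-construction stage the abstract describes:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Imbalanced two-class data in 2-D (illustrative, not a real data set).
X_pos = rng.normal(loc=1.0, scale=0.5, size=(20, 2))    # minority (positive) class
X_neg = rng.normal(loc=-1.0, scale=0.5, size=(200, 2))  # majority (negative) class

# Parzen-window (Gaussian-kernel) estimate of the positive-class PDF.
kde = gaussian_kde(X_pos.T)  # gaussian_kde expects shape (d, n)

# Draw synthetic minority instances from the estimated PDF until the
# class sizes are balanced, then append them to the training data.
n_needed = X_neg.shape[0] - X_pos.shape[0]
X_synth = kde.resample(n_needed, seed=1).T  # back to shape (n, d)

X_balanced = np.vstack([X_pos, X_synth])
print(X_balanced.shape)  # positive class now matches the majority size
```

Any downstream classifier can then be trained on `X_balanced` together with `X_neg`.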
Abstract:
Background: Although it is well established that children with language impairment (LI) and children with autism spectrum disorders (ASD) both show elevated levels of emotional and behavioural problems, the level and types of difficulties across the two groups have not previously been directly compared. Aims: To compare levels of emotional and behavioural problems in children with LI and children with ASD recruited from the same mainstream schools. Methods & Procedures: We measured teacher-reported emotional and behavioural problems using the Strengths and Difficulties Questionnaire (SDQ) in a sample of 5- to 13-year-old children with LI (N=62) and children with ASD (N=42) attending mainstream school but with identified special educational needs. Outcomes & Results: Both groups showed similarly elevated levels of emotional, conduct and hyperactivity problems. The only differences between the LI and ASD groups were on subscales assessing peer problems (which were higher in the ASD group) and prosocial behaviours (which were higher in the LI group). Overall, there were few associations between emotional and behavioural problems and child characteristics, reflecting the pervasive nature of these difficulties in children with LI and children with ASD, although levels of problems were higher in children with ASD with lower language ability. However, in the ASD group only, a measure of family socioeconomic status was associated with language ability and attenuated the association between language ability and emotional and behavioural problems. Conclusions & Implications: Children with LI and children with ASD in mainstream school show similarly elevated levels of emotional and behavioural problems, which require monitoring and may benefit from intervention.
Further work is required to identify the child, family and situational factors that place children with LI and children with ASD at risk of emotional and behavioural problems, and whether these differ between the two groups. This work can then guide the application of evidence-based interventions to these children.
Abstract:
The psychiatric and psychosocial evaluation of the heart transplant candidate can identify particular predictors of postoperative problems. These factors, as identified during the comprehensive evaluation phase, provide an assessment of the candidate in the context of the proposed transplantation protocol. Previous issues with compliance, substance abuse, and psychosis are clear indicators of postoperative problems. The prolonged waiting-list time provides an additional period to evaluate and support patients with terminal disease who need a heart transplant and are undergoing prolonged hospitalization. Following transplantation, the patient faces the additional challenges of a new self-image, multiple concerns, anxiety, and depression. Ultimately, the success of heart transplantation remains dependent upon the recipient's ability to cope psychologically and to comply with the medication regimen. The limited supply of donor hearts and the high emotional and financial cost of heart transplantation lead to an exhaustive effort to select those patients who will benefit from the improved physical health that a heart transplant confers.
Abstract:
Variational data assimilation is commonly used in environmental forecasting to estimate the current state of the system from a model forecast and observational data. The assimilation problem can be written simply as a nonlinear least-squares optimization problem. However, the practical solution of the problem in large systems requires many careful implementation choices. In this article we present the theory of variational data assimilation and then discuss in detail how it is implemented in practice. Current solutions and open questions are discussed.
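The least-squares formulation can be made concrete with a toy 3D-Var analysis step. Everything below (state size, observation operator `H`, covariances `B` and `R`) is an illustrative stand-in, not taken from any operational system, and the generic minimiser replaces the adjoint-based gradients real systems use:

```python
import numpy as np
from scipy.optimize import minimize

# Toy 3D-Var: combine a background state x_b with observations y by
# minimising J(x) = (x-x_b)^T B^{-1} (x-x_b) + (y-Hx)^T R^{-1} (y-Hx).
n, m = 4, 2
x_b = np.array([1.0, 2.0, 3.0, 4.0])          # background (forecast) state
H = np.array([[1.0, 0, 0, 0], [0, 0, 1.0, 0]])  # linear observation operator
y = np.array([1.5, 2.5])                      # observations of components 1 and 3
B_inv = np.eye(n)                             # inverse background covariance
R_inv = np.eye(m)                             # inverse observation covariance

def J(x):
    db = x - x_b
    do = y - H @ x
    return db @ B_inv @ db + do @ R_inv @ do

res = minimize(J, x_b)   # generic minimiser; large systems need adjoint gradients
x_a = res.x              # the "analysis" state
print(x_a)
```

With identity covariances the observed components are pulled halfway towards the observations, while unobserved components stay at the background value.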
Abstract:
Biological models of an apoptotic process are studied using a system of differential equations derived from reaction-kinetics information. The mathematical model is re-formulated in a state-space robust control framework in which parametric and dynamic uncertainty can be modelled to account for variations naturally occurring in biological processes. We propose to handle the nonlinearities using neural networks.
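A minimal sketch of how parametric uncertainty enters a state-space model is to perturb the nominal dynamics matrix A by a bounded structured term delta * dA, the standard robust-control formulation. The matrices below are illustrative assumptions, not the apoptosis model itself:

```python
import numpy as np

# Nominal (stable) discrete-time dynamics and an uncertainty structure.
A = np.array([[0.9, 0.1], [0.0, 0.8]])     # nominal dynamics (illustrative)
dA = np.array([[0.05, 0.0], [0.0, 0.05]])  # direction of parametric uncertainty

rng = np.random.default_rng(0)
x0 = np.array([1.0, 1.0])

def simulate(delta, steps=50):
    """Iterate x_{k+1} = (A + delta*dA) x_k from x0 for a fixed perturbation."""
    x = x0.copy()
    for _ in range(steps):
        x = (A + delta * dA) @ x
    return x

# Spread of final states over sampled parameter perturbations |delta| <= 1,
# a Monte Carlo picture of what the robust framework bounds analytically.
finals = np.array([simulate(d) for d in rng.uniform(-1, 1, 100)])
print(finals.shape)
```

Since the perturbed spectral radius stays below 1 here, every sampled trajectory decays, which is the kind of property a robust-control analysis would certify for all admissible perturbations at once.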
Abstract:
Incomplete understanding of three aspects of the climate system (equilibrium climate sensitivity, rate of ocean heat uptake and historical aerosol forcing) and of the physical processes underlying them leads to uncertainties in our assessment of the global-mean temperature evolution in the twenty-first century [1,2]. Explorations of these uncertainties have so far relied on scaling approaches [3,4], large ensembles of simplified climate models [1,2], or small ensembles of complex coupled atmosphere–ocean general circulation models [5,6], which under-represent uncertainties in key climate system properties derived from independent sources [7–9]. Here we present results from a multi-thousand-member perturbed-physics ensemble of transient coupled atmosphere–ocean general circulation model simulations. We find that model versions that reproduce observed surface temperature changes over the past 50 years show global-mean temperature increases of 1.4–3 K by 2050, relative to 1961–1990, under a mid-range forcing scenario. This range of warming is broadly consistent with the expert assessment provided by the Intergovernmental Panel on Climate Change Fourth Assessment Report [10], but extends towards larger warming than observed in ensembles-of-opportunity [5] typically used for climate impact assessments. From our simulations, we conclude that warming by the middle of the twenty-first century that is stronger than earlier estimates is consistent with recent observed temperature changes and a mid-range 'no mitigation' scenario for greenhouse-gas emissions.
Abstract:
In this review I summarise some of the most significant advances of the last decade in the analysis and solution of boundary value problems for integrable partial differential equations in two independent variables. These equations arise widely in mathematical physics, and in order to model realistic applications it is essential to consider bounded domains and inhomogeneous boundary conditions. I focus specifically on a general and widely applicable approach, usually referred to as the Unified Transform or Fokas Transform, that provides a substantial generalisation of the classical Inverse Scattering Transform. This approach preserves the conceptual efficiency and aesthetic appeal of the more classical transform approaches, but presents a distinctive and important difference. While the Inverse Scattering Transform follows the "separation of variables" philosophy, albeit in a nonlinear setting, the Unified Transform is based on the idea of synthesis, rather than separation, of variables. I will outline the main ideas in the case of linear evolution equations, and then illustrate their generalisation to certain nonlinear cases of particular significance.
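For the simplest linear example of the kind the review treats, the heat equation $q_t = q_{xx}$ on the half-line $x > 0$ with Dirichlet datum $q(0,t) = g_0(t)$, one common form of the Unified Transform solution representation (sign and normalisation conventions vary across the literature) is
$$ q(x,t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{ikx - k^2 t}\,\hat q_0(k)\,\mathrm{d}k \;-\; \frac{1}{2\pi}\int_{\partial D^+} e^{ikx - k^2 t}\left(\hat q_0(-k) + 2ik\,\tilde g_0(k^2,t)\right)\mathrm{d}k, $$
where $\hat q_0(k) = \int_0^\infty e^{-ikx} q_0(x)\,\mathrm{d}x$, $\tilde g_0(\omega,t) = \int_0^t e^{\omega s} g_0(s)\,\mathrm{d}s$, and $D^+ = \{k \in \mathbb{C} : \operatorname{Im} k > 0,\ \operatorname{Re} k^2 < 0\}$. Both integrals carry the same exponential $e^{ikx - k^2 t}$, illustrating the synthesis, rather than separation, of the $x$- and $t$-dependence.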
Abstract:
In recent years several methodologies have been developed to combine and interpret ensembles of climate models with the aim of quantifying uncertainties in climate projections. Constrained climate model forecasts have been generated by combining various choices of metrics used to weight individual ensemble members with diverse approaches to sampling the ensemble. The forecasts obtained are often significantly different, even when based on the same model output. Therefore, a climate model forecast classification system can serve two roles: to provide a way for forecast producers to self-classify their forecasts; and to provide information on the methodological assumptions underlying the forecast generation and its uncertainty when forecasts are used for impacts studies. In this review we propose a possible classification system based on choices of metrics and sampling strategies. We illustrate the impact of some of the possible choices on the uncertainty quantification of large-scale projections of temperature and precipitation changes, and briefly discuss possible connections between climate forecast uncertainty quantification and decision-making approaches in the climate change context.
Abstract:
Classical regression methods take vectors as covariates and estimate the corresponding vectors of regression parameters. When addressing regression problems on covariates of more complex form, such as multi-dimensional arrays (i.e. tensors), traditional computational models can be severely compromised by ultrahigh dimensionality as well as complex structure. By exploiting the special structure of tensor covariates, the tensor regression model provides a promising way to reduce the model's dimensionality to a manageable level, thus leading to efficient estimation. Most existing tensor-based methods estimate each individual regression problem independently, based on a tensor decomposition that allows the simultaneous projection of an input tensor onto more than one direction along each mode. In practice, however, multi-dimensional data are often collected under the same or very similar conditions, so that the data share some common latent components but can also have their own independent parameters for each regression task. It is therefore beneficial to analyse the regression parameters of all the regressions in a linked way. In this paper, we propose a tensor regression model based on Tucker decomposition, which simultaneously identifies not only the common components of the parameters across all the regression tasks, but also the independent factors contributing to each particular regression task. Under this paradigm, the number of independent parameters along each mode is constrained by a sparsity-preserving regulariser. Linked multiway parameter analysis and sparsity modelling further reduce the total number of parameters, with lower memory cost than tensor-based counterparts. The effectiveness of the new method is demonstrated on real data sets.
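To make the dimensionality-reduction claim concrete, the toy count below compares the number of free coefficients in an unstructured tensor regression model against a Tucker-structured one (a core tensor of shape (r1, r2, r3) plus one factor matrix per mode). The tensor sizes and ranks are illustrative assumptions, not those of the paper:

```python
import numpy as np

# Full coefficient tensor of shape (d1, d2, d3) versus a Tucker-structured
# one: core of shape (r1, r2, r3) plus factor matrices of shape (di, ri).
dims = (30, 40, 50)   # d1, d2, d3: shape of the tensor covariate (illustrative)
ranks = (3, 3, 3)     # r1, r2, r3: Tucker ranks along each mode (illustrative)

full_params = int(np.prod(dims))
tucker_params = int(np.prod(ranks)) + sum(d * r for d, r in zip(dims, ranks))

print(full_params)    # 60000 coefficients in the unstructured model
print(tucker_params)  # 27 + 90 + 120 + 150 = 387 under the Tucker structure
```

The two-orders-of-magnitude gap is why a Tucker parameterisation (optionally with sparsity regularisation on the factors, as in the paper) makes estimation tractable at tensor dimensionalities where the unstructured model is hopeless.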
Abstract:
We use the elliptic reconstruction technique in combination with a duality approach to prove a posteriori error estimates for the fully discrete backward Euler scheme for linear parabolic equations. As an application, we combine our result with residual-based estimators from a posteriori estimation for elliptic problems to derive space-error indicators and thus a fully practical version of the estimators bounding the error in the $ \mathrm {L}_{\infty }(0,T;\mathrm {L}_2(\varOmega ))$ norm. These estimators, which are of optimal order, extend those introduced by Eriksson and Johnson in 1991 by taking into account the error induced by mesh changes and by allowing a more flexible use of the elliptic estimators. For comparison with previous results we also derive an energy-based a posteriori estimate for the $ \mathrm {L}_{\infty }(0,T;\mathrm {L}_2(\varOmega ))$-error which simplifies a previous one given by Lakkis and Makridakis in 2006. We then compare both estimators (duality vs. energy) in practical situations and draw conclusions.
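In schematic, notation-simplifying form (the concrete spaces and operators are assumptions here), the fully discrete backward Euler scheme for $u_t + \mathcal{A}u = f$ reads: find $U^n$ in the finite element space $\mathbb{V}^n$ such that
$$ \Big(\frac{U^n - U^{n-1}}{k_n}, v\Big) + a(U^n, v) = (f^n, v) \quad \text{for all } v \in \mathbb{V}^n, $$
and the elliptic reconstruction $\omega^n \in \mathrm{H}^1_0(\varOmega)$ is the exact solution of the elliptic problem whose finite element approximation is $U^n$, i.e. $a(\omega^n, v) = (\mathcal{A}^n_h U^n, v)$ for all $v$, with $\mathcal{A}^n_h$ the discrete elliptic operator. Splitting the error as $u - U = (u - \omega) + (\omega - U)$, the second term is controlled by standard elliptic a posteriori estimators, while the first satisfies a parabolic equation with a computable residual, which is where the duality (or energy) argument enters.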
Abstract:
Background: Mothers' self-reported stroking of their infants over the first weeks of life modifies the association between prenatal depression and physiological and emotional reactivity at 7 months, consistent with animal studies of the effects of tactile stimulation. We now investigate whether the effects of maternal stroking persist to 2.5 years. Given animal and human evidence for sex differences in the effects of prenatal stress, we compare associations in boys and girls. Method: From a general population sample of 1233 first-time mothers recruited at 20 weeks gestation we drew a random sample of 316 for assessment at 32 weeks, stratified by reported inter-partner psychological abuse, a risk indicator for child development. Of these mothers, 243 reported at 5 and 9 weeks how often they stroked their infants, and completed the Child Behavior Checklist (CBCL) at 2.5 years post-delivery. Results: There was a significant interaction between prenatal anxiety and maternal stroking in the prediction of CBCL internalizing (p = 0.001) and anxious/depressed scores (p < 0.001). The effects were stronger in females than males, and the three-way interaction prenatal anxiety × maternal stroking × sex of infant was significant for internalizing symptoms (p = 0.003). The interactions arose from an association between prenatal anxiety and internalizing symptoms only in the presence of low maternal stroking. Conclusions: The findings are consistent with stable epigenetic effects, many sex-specific, reported in animal studies. While epigenetic mechanisms may underlie the associations, it remains to be established whether stroking affects gene expression in humans.
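The reported moderation effect corresponds to a product (interaction) term in a linear model: anxiety predicts symptoms only when stroking is low. The sketch below fits such a model with ordinary least squares on synthetic data; all numbers are simulated assumptions, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 243  # sample size matching the abstract, data otherwise simulated

anxiety = rng.normal(size=n)    # standardised prenatal anxiety (simulated)
stroking = rng.normal(size=n)   # standardised maternal stroking (simulated)

# Simulate the reported pattern: anxiety predicts symptoms mainly when
# stroking is low, i.e. a negative anxiety-by-stroking interaction.
symptoms = (0.5 * anxiety
            - 0.4 * anxiety * stroking
            + rng.normal(scale=0.3, size=n))

# OLS with an explicit interaction column.
X = np.column_stack([np.ones(n), anxiety, stroking, anxiety * stroking])
beta, *_ = np.linalg.lstsq(X, symptoms, rcond=None)
print(beta)  # [intercept, anxiety, stroking, interaction] estimates
```

A clearly negative interaction coefficient recovers the moderation: the slope of symptoms on anxiety, `beta[1] + beta[3] * stroking`, shrinks as stroking increases.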