935 results for 340402 Econometric and Statistical Methods
Abstract:
Multivariate statistical methods were used to investigate the causes of toxicity and controls on groundwater chemistry from 274 boreholes in an urban area (London) of the United Kingdom. The groundwater was alkaline to neutral, and chemistry was dominated by calcium, sodium, and sulfate. Contaminants included fuels, solvents, and organic compounds derived from landfill material. The presence of organic material in the aquifer caused decreases in dissolved oxygen, sulfate and nitrate concentrations, and increases in ferrous iron and ammoniacal nitrogen concentrations. Pearson correlations between toxicity results and the concentration of individual analytes indicated that concentrations of ammoniacal nitrogen, dissolved oxygen, ferrous iron, and hydrocarbons were important where present. However, principal component and regression analysis suggested no significant correlation between toxicity and chemistry over the whole area. Multidimensional scaling was used to investigate differences in sites caused by historical use, landfill gas status, or position within the sample area. Significant differences were observed between sites with different historical land use and those with different gas status. Examination of the principal component matrix revealed that these differences are related to changes in the importance of reduced chemical species.
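As a rough illustration of the kind of workflow described in this abstract, the sketch below applies Pearson correlation, principal component analysis and multidimensional scaling to a synthetic borehole-chemistry table. The column names, synthetic data and library choices (pandas, scikit-learn) are assumptions for illustration only, not the authors' code or data.

```python
# Minimal sketch, assuming a hypothetical table of 274 boreholes with analyte
# concentrations and a toxicity score (all names and values illustrative).
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.manifold import MDS
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.lognormal(size=(274, 6)),
                  columns=["Ca", "Na", "SO4", "NH4_N", "Fe2", "toxicity"])

# Pearson correlations between toxicity and individual analytes
print(df.corr(method="pearson")["toxicity"])

# PCA on standardised chemistry (toxicity excluded as the response)
X = StandardScaler().fit_transform(df.drop(columns="toxicity"))
pca = PCA(n_components=2)
scores = pca.fit_transform(X)
print("explained variance ratio:", pca.explained_variance_ratio_)

# Multidimensional scaling to visualise between-site differences
coords = MDS(n_components=2, random_state=0).fit_transform(X)
```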
Abstract:
The cyclin/cyclin-dependent kinase (Cdk) complexes and the Cdk inhibitors (CDKI) are crucial regulators of cell cycle progression in all eukaryotic cells. Using rat cardiac myocytes as a model system, this chapter provides a detailed account of methods that can be employed to measure both cyclin/Cdk activity in cells and the extent of CDKI inhibitory activity present in a particular cell type.
Abstract:
A novel framework for multimodal semantic-associative collateral image labelling, aiming at associating image regions with textual keywords, is described. Both the primary image and collateral textual modalities are exploited in a cooperative and complementary fashion. The collateral content- and context-based knowledge is used to bias the mapping from the low-level region-based visual primitives to the high-level visual concepts defined in a visual vocabulary. We introduce the notion of collateral context, which is represented as a co-occurrence matrix of the visual keywords. A collaborative mapping scheme is devised using statistical methods such as the Gaussian distribution or Euclidean distance together with a collateral content- and context-driven inference mechanism. Finally, we use Self-Organising Maps to examine the classification and retrieval effectiveness of the proposed high-level image feature vector model, which is constructed based on the image labelling results.
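The collateral context described above is a co-occurrence matrix over the visual vocabulary. The sketch below shows one plausible way to build such a matrix from per-image keyword annotations; the vocabulary, data and counting scheme are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch: co-occurrence matrix of visual keywords across images.
import numpy as np

vocabulary = ["sky", "water", "grass", "building"]      # hypothetical visual vocabulary
index = {w: i for i, w in enumerate(vocabulary)}

# Each entry lists the visual keywords assigned to regions of one image.
image_keywords = [
    ["sky", "water"],
    ["sky", "grass", "building"],
    ["water", "grass"],
]

cooc = np.zeros((len(vocabulary), len(vocabulary)), dtype=int)
for keywords in image_keywords:
    ids = sorted({index[w] for w in keywords})
    for a in ids:
        for b in ids:
            if a != b:
                cooc[a, b] += 1      # count keyword pairs co-occurring in the same image

print(cooc)
```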
Abstract:
A novel framework referred to as collaterally confirmed labelling (CCL) is proposed, aiming at localising the visual semantics to regions of interest in images with textual keywords. Both the primary image and collateral textual modalities are exploited in a mutually co-referencing and complementary fashion. The collateral content- and context-based knowledge is used to bias the mapping from the low-level region-based visual primitives to the high-level visual concepts defined in a visual vocabulary. We introduce the notion of collateral context, which is represented as a co-occurrence matrix of the visual keywords. A collaborative mapping scheme is devised using statistical methods such as the Gaussian distribution or Euclidean distance together with a collateral content- and context-driven inference mechanism. We introduce a novel high-level visual content descriptor that is devised for performing semantic-based image classification and retrieval. The proposed image feature vector model is fundamentally underpinned by the CCL framework. Two different high-level image feature vector models are developed based on the CCL labelling results for the purposes of image data clustering and retrieval, respectively. A subset of the Corel image collection has been used for evaluating our proposed method. The experimental results to date already indicate that the proposed semantic-based visual content descriptors outperform both traditional visual and textual image feature models. (C) 2007 Elsevier B.V. All rights reserved.
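To make the idea of a labelling-based high-level feature vector concrete, the sketch below summarises a labelled image as a keyword-frequency vector and ranks database images by cosine similarity to a query. The vocabulary, data and similarity choice are illustrative assumptions, not the CCL descriptors themselves.

```python
# Minimal sketch: keyword-histogram feature vectors and similarity-based retrieval.
import numpy as np

vocabulary = ["sky", "water", "grass", "building"]      # hypothetical visual vocabulary

def feature_vector(region_labels):
    """Keyword histogram over the vocabulary, L2-normalised."""
    v = np.array([region_labels.count(w) for w in vocabulary], dtype=float)
    norm = np.linalg.norm(v)
    return v / norm if norm else v

database = {
    "img1": feature_vector(["sky", "sky", "water"]),
    "img2": feature_vector(["grass", "building"]),
}
query = feature_vector(["sky", "water", "water"])

# Rank database images by cosine similarity (dot product of unit vectors).
ranked = sorted(database, key=lambda k: -float(query @ database[k]))
print(ranked)
```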
Abstract:
We explore the potential for making statistical decadal predictions of sea surface temperatures (SSTs) in a perfect model analysis, with a focus on the Atlantic basin. Various statistical methods (lagged correlations, linear inverse modelling and constructed analogues) are found to have significant skill in predicting the internal variability of Atlantic SSTs for up to a decade ahead in control integrations of two different global climate models (GCMs), namely HadCM3 and HadGEM1. Statistical methods which consider non-local information tend to perform best, but which statistical method is most successful depends on the region considered, the GCM data used and the prediction lead time. However, the constructed analogue method tends to have the highest skill at longer lead times. Importantly, the regions of greatest prediction skill can be very different to the regions identified as potentially predictable from variance-explained arguments. This finding suggests that significant local decadal variability is not necessarily a prerequisite for skillful decadal predictions, and that the statistical methods are capturing some of the dynamics of low-frequency SST evolution. In particular, using data from HadGEM1, significant skill at lead times of 6–10 years is found in the tropical North Atlantic, a region with relatively little decadal variability compared to interannual variability. This skill appears to come from reconstructing the SSTs in the far North Atlantic, suggesting that the more northern latitudes are optimal for SST observations to improve predictions. We additionally explore whether adding sub-surface temperature data improves these decadal statistical predictions, and find that, again, it depends on the region, prediction lead time and GCM data used. Overall, we argue that the estimated prediction skill motivates the further development of statistical decadal predictions of SSTs as a benchmark for current and future GCM-based decadal climate predictions.
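The constructed analogue idea can be sketched in a few lines: find the weights that best reconstruct the current anomaly pattern from a library of past states, then apply the same weights to the library states a lead time later. The sketch below uses random placeholder fields; the real study uses HadCM3/HadGEM1 control-run SST anomalies, and details of the regression will differ.

```python
# Minimal sketch of a constructed-analogue style prediction (synthetic data).
import numpy as np

rng = np.random.default_rng(1)
n_years, n_grid, lead = 200, 50, 5
library = rng.standard_normal((n_years, n_grid))   # hypothetical SST anomaly library
current = rng.standard_normal(n_grid)              # anomaly pattern to be projected forward

# Weights that best reconstruct the current anomaly from past library states
usable = library[:-lead]                            # states whose 'future' is in the library
weights, *_ = np.linalg.lstsq(usable.T, current, rcond=None)

# Apply the same weights to the library states 'lead' years later
forecast = library[lead:].T @ weights               # predicted anomaly field at the lead time
print(forecast.shape)
```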
Abstract:
Whilst the vast majority of the research on property market forecasting has concentrated on statistical methods of forecasting future rents, this report investigates the process of property market forecast production with particular reference to the level and effect of judgemental intervention in this process. Expectations of future investment performance at the levels of individual asset, sector, region, country and asset class are crucial to stock selection and tactical and strategic asset allocation decisions. Given their centrality to investment performance, we focus on the process by which forecasts of rents and yields are generated and expectations formed. A review of the wider literature on forecasting suggests that there are strong grounds to expect that forecast outcomes are not the result of purely mechanical calculations.
Abstract:
Elephant poaching and the ivory trade remain high on the agenda at meetings of the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES). Well-informed debates require robust estimates of trends, the spatial distribution of poaching, and drivers of poaching. We present an analysis of trends and drivers of an indicator of poaching across all elephant species. The site-based monitoring system known as Monitoring the Illegal Killing of Elephants (MIKE), set up by the 10th Conference of the Parties of CITES in 1997, produces carcass encounter data reported mainly by anti-poaching patrols. The data analyzed were site-by-year totals of 6,337 carcasses from 66 sites in Africa and Asia from 2002 to 2009. Analysis of these observational data poses a serious challenge to traditional statistical methods because of the opportunistic and non-random nature of patrols, and the heterogeneity across sites. Adopting a Bayesian hierarchical modeling approach, we used the proportion of carcasses that were illegally killed (PIKE) as a poaching index to estimate the trend and the effects of site- and country-level factors associated with poaching. Important drivers of illegal killing that emerged at the country level were poor governance and low levels of human development, and at the site level, forest cover and the area of the site in regions where human population density is low. After a drop from 2002, PIKE remained fairly constant from 2003 until 2006, after which it increased until 2008. The results for 2009 indicate a decline. Sites with PIKE ranging from the lowest to the highest were identified. The results of the analysis provide a sound information base for scientific evidence-based decision making in the CITES process.
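PIKE itself is a simple proportion: illegally killed carcasses over all carcasses found at a site in a year. The sketch below computes it from a toy table (numbers invented for illustration); the Bayesian hierarchical trend model with site- and country-level effects used in the study is not reproduced here.

```python
# Minimal sketch: computing the PIKE index from site-by-year carcass counts.
import pandas as pd

carcasses = pd.DataFrame({
    "site":    ["A", "A", "B", "B"],
    "year":    [2008, 2009, 2008, 2009],
    "illegal": [12, 18, 3, 5],     # carcasses recorded as illegally killed
    "total":   [20, 22, 10, 9],    # all carcasses encountered by patrols
})
carcasses["PIKE"] = carcasses["illegal"] / carcasses["total"]
print(carcasses)
```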
Abstract:
We consider the linear equality-constrained least squares problem (LSE) of minimizing $\|c - Gx\|_2$, subject to the constraint $Ex = p$. A preconditioned conjugate gradient method is applied to the Kuhn–Tucker equations associated with the LSE problem. We show that our method is well suited for structural optimization problems in reliability analysis and optimal design. Numerical tests are performed on an Alliant FX/8 multiprocessor and a Cray X-MP using some practical structural analysis data.
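For reference, one standard way to write the Kuhn–Tucker (first-order optimality) conditions for this LSE problem is the symmetric indefinite system below; the exact augmented form used in the paper may differ.

$$
\begin{pmatrix} G^{T}G & E^{T} \\ E & 0 \end{pmatrix}
\begin{pmatrix} x \\ \lambda \end{pmatrix}
=
\begin{pmatrix} G^{T}c \\ p \end{pmatrix},
$$

where $\lambda$ is the vector of Lagrange multipliers for the constraint $Ex = p$; a preconditioned conjugate gradient iteration can then be applied to this system or to an equivalent augmented form.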
Abstract:
Background: The persistence of rural-urban disparities in child nutrition outcomes in developing countries, alongside rapid urbanisation and an increasing incidence of child malnutrition in urban areas, raises an important health policy question: whether fundamentally different nutrition policies and interventions are required in rural and urban areas. Addressing this question requires an enhanced understanding of the main drivers of rural-urban disparities in child nutrition outcomes, especially for the vulnerable segments of the population. This study applies recently developed statistical methods to quantify the contribution of different socio-economic determinants to rural-urban differences in child nutrition outcomes in two South Asian countries, Bangladesh and Nepal.
Methods: Using DHS data sets for Bangladesh and Nepal, we apply quantile regression-based counterfactual decomposition methods to quantify the contribution of (1) the differences in levels of socio-economic determinants (covariate effects) and (2) the differences in the strength of association between socio-economic determinants and child nutrition outcomes (coefficient effects) to the observed rural-urban disparities in child HAZ scores. The methodology employed in the study allows the covariate and coefficient effects to vary across the entire distribution of child nutrition outcomes. This is particularly useful in providing specific insights into factors influencing rural-urban disparities at the lower tails of child HAZ score distributions. It also helps assess the importance of individual determinants and how they vary across the distribution of HAZ scores.
Results: There are no fundamental differences in the characteristics that determine child nutrition outcomes in urban and rural areas. Differences in the levels of a limited number of socio-economic characteristics, namely maternal education, spouse's education and the wealth index (incorporating household asset ownership and access to drinking water and sanitation), contribute a major share of rural-urban disparities in the lowest quantiles of child nutrition outcomes. Differences in the strength of association between socio-economic characteristics and child nutrition outcomes account for less than a quarter of rural-urban disparities at the lower end of the HAZ score distribution.
Conclusions: Public health interventions aimed at overcoming rural-urban disparities in child nutrition outcomes need to focus principally on bridging gaps in socio-economic endowments of rural and urban households and improving the quality of rural infrastructure. Improving child nutrition outcomes in developing countries does not call for fundamentally different approaches to public health interventions in rural and urban areas.
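The sketch below illustrates the basic logic of decomposing an urban-rural gap at a single quantile into covariate and coefficient effects, using synthetic data and statsmodels quantile regression. It is a simplified Oaxaca-style decomposition for illustration; the study's counterfactual decomposition over the full HAZ distribution with DHS data is richer, and all variable names here are invented.

```python
# Minimal sketch (synthetic data, simplified): covariate vs coefficient effects
# in the urban-rural gap of a nutrition outcome at one quantile.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

def simulate(level, n=500):
    edu = rng.integers(0, 15, n)                     # years of maternal education
    wealth = rng.normal(level, 1.0, n)               # wealth index
    haz = -2 + 0.05 * edu + 0.4 * wealth + rng.normal(0, 1, n)
    return pd.DataFrame({"haz": haz, "edu": edu, "wealth": wealth})

urban, rural = simulate(1.0), simulate(0.0)
tau = 0.25                                           # a lower quantile of the HAZ distribution

fit_u = smf.quantreg("haz ~ edu + wealth", urban).fit(q=tau)
fit_r = smf.quantreg("haz ~ edu + wealth", rural).fit(q=tau)

order = ["Intercept", "edu", "wealth"]
bu, br = fit_u.params[order].values, fit_r.params[order].values
Xu = np.array([1.0, urban["edu"].mean(), urban["wealth"].mean()])
Xr = np.array([1.0, rural["edu"].mean(), rural["wealth"].mean()])

gap = Xu @ bu - Xr @ br
covariate_effect = (Xu - Xr) @ br                    # differences in endowment levels
coefficient_effect = Xu @ (bu - br)                  # differences in strength of association
print(gap, covariate_effect, coefficient_effect)     # the two effects sum to the gap
```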
Abstract:
Airborne high-resolution in situ measurements of a large set of trace gases, including ozone (O3) and total water (H2O), in the upper troposphere and the lowermost stratosphere (UT/LMS) have been performed above Europe within the SPURT project. SPURT provides extensive data coverage of the UT/LMS in each season within the time period between November 2001 and July 2003. In the LMS a distinct spring maximum and autumn minimum is observed in O3, whereas its annual cycle in the UT is shifted 2–3 months later, towards the end of the year. The more variable H2O measurements reveal a maximum during summer and a minimum during autumn/winter, with no phase shift between the two atmospheric compartments. For a comprehensive insight into trace gas composition and variability in the UT/LMS, several statistical methods are applied using chemical, thermal and dynamical vertical coordinates. In particular, 2-dimensional probability distribution functions serve as a tool to transform localised aircraft data into a more comprehensive view of the probed atmospheric region. It appears that both trace gases, O3 and H2O, reveal the most compact arrangement and are best correlated when viewed in coordinates of potential vorticity (PV) and distance to the local tropopause, indicating an advanced mixing state on these surfaces. Thus, strong gradients of PV seem to act as a transport barrier in both the vertical and the horizontal direction. The alignment of trace gas isopleths reflects the existence of a year-round extra-tropical tropopause transition layer. The SPURT measurements reveal that this layer is mainly affected by stratospheric air during winter/spring and by tropospheric air during autumn/summer. Normalised mixing entropy values for O3 and H2O in the LMS appear to be maximal during spring and summer, respectively, indicating the highest variability of these trace gases during the respective seasons.
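A 2-dimensional probability distribution function of this kind is essentially a normalised 2-D histogram of a trace gas against a vertical coordinate. The sketch below bins synthetic values as an illustration; the variables, units and bin choices are assumptions, not the SPURT analysis.

```python
# Minimal sketch: a 2-D probability distribution function of a trace gas
# against distance to the local tropopause (synthetic data).
import numpy as np

rng = np.random.default_rng(2)
o3 = rng.gamma(shape=2.0, scale=100.0, size=5000)     # hypothetical O3 mixing ratios (ppbv)
dz_tropopause = rng.normal(0.0, 2.0, size=5000)       # distance to local tropopause (km)

hist, o3_edges, dz_edges = np.histogram2d(o3, dz_tropopause, bins=[40, 30])
pdf = hist / hist.sum()                               # normalise counts to a 2-D PDF
print(pdf.shape, pdf.sum())
```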
Abstract:
Human brain imaging techniques, such as Magnetic Resonance Imaging (MRI) or Diffusion Tensor Imaging (DTI), have been established as scientific and diagnostic tools and their adoption is growing. Statistical methods, machine learning and data mining algorithms have successfully been adopted to extract predictive and descriptive models from neuroimage data. However, the knowledge discovery process typically also requires the adoption of pre-processing, post-processing and visualisation techniques in complex data workflows. Currently, a main problem for the integrated pre-processing and mining of MRI data is the lack of comprehensive platforms able to avoid the manual invocation of pre-processing and mining tools, which leads to an error-prone and inefficient process. In this work we present K-Surfer, a novel plug-in for the Konstanz Information Miner (KNIME) workbench, which automates the pre-processing of brain images and leverages the mining capabilities of KNIME in an integrated way. K-Surfer supports the importing, filtering, merging and pre-processing of neuroimage data from FreeSurfer, a tool for human brain MRI feature extraction and interpretation. K-Surfer automates the steps for importing FreeSurfer data, reducing time costs, eliminating human errors and enabling the design of complex analytics workflows for neuroimage data by leveraging the rich functionalities available in the KNIME workbench.
Abstract:
As part of an international intercomparison project, a set of single-column models (SCMs) and cloud-resolving models (CRMs) are run under the weak temperature gradient (WTG) method and the damped gravity wave (DGW) method. For each model, the implementation of the WTG or DGW method involves a simulated column which is coupled to a reference state defined with profiles obtained from the same model in radiative-convective equilibrium. The simulated column has the same surface conditions as the reference state and is initialized with profiles from the reference state. We performed a systematic comparison of the behavior of different models under a consistent implementation of the WTG and DGW methods, and of the two methods themselves in models with different physics and numerics. CRMs and SCMs produce a variety of behaviors under both the WTG and DGW methods. Some of the models reproduce the reference state, while others sustain a large-scale circulation which results in either substantially lower or higher precipitation compared to the value of the reference state. CRMs show a fairly linear relationship between precipitation and circulation strength. SCMs display a wider range of behaviors than CRMs. Some SCMs under the WTG method produce zero precipitation. Within an individual SCM, a DGW simulation and a corresponding WTG simulation can produce circulations of opposite sign. When initialized with a dry troposphere, DGW simulations always result in a precipitating equilibrium state. The greatest sensitivity to the initial moisture conditions occurs in some WTG simulations that exhibit multiple stable equilibria: a dry equilibrium state when initialized dry, or a precipitating equilibrium state when initialized moist. Multiple equilibria are seen in more WTG simulations at higher SST. In some models, the existence of multiple equilibria is sensitive to some parameters in the WTG calculations.
Abstract:
Intracellular reactive oxygen species (ROS) production is essential to normal cell function. However, excessive ROS production causes oxidative damage and cell death. Many pharmacological compounds exert their effects on cell cycle progression by changing the intracellular redox state, and in many cases cause oxidative damage leading to drug cytotoxicity. Appropriate measurement of intracellular ROS levels during cell cycle progression is therefore crucial for understanding the redox regulation of cell function and drug toxicity and for the development of new drugs. However, due to the extremely short half-life of ROS, measuring the changes in intracellular ROS levels during a particular phase of the cell cycle for drug intervention can be challenging. In this article, we have provided updated information on the rationale, the applications, and the advantages and limitations of common methods for screening drug effects on intracellular ROS production linked to cell cycle studies. Our aim is to help biomedical scientists and researchers in the pharmaceutical industry to choose or develop specific experimental regimens to suit their research needs.
Abstract:
Forecasting wind power is an important part of a successful integration of wind power into the power grid. Forecasts with lead times longer than 6 h are generally made by using statistical methods to post-process forecasts from numerical weather prediction systems. Two major problems that complicate this approach are the non-linear relationship between wind speed and power production and the limited range of power production between zero and the nominal power of the turbine. In practice, these problems are often tackled by using non-linear non-parametric regression models. However, such an approach ignores valuable and readily available information: the power curve of the turbine's manufacturer. Much of the non-linearity can be directly accounted for by transforming the observed power production into wind speed via the inverse power curve, so that simpler linear regression models can be used. Furthermore, the fact that the transformed power production has a limited range can be taken care of by employing censored regression models. In this study, we evaluate quantile forecasts from a range of methods: (i) using parametric and non-parametric models, (ii) with and without the proposed inverse power curve transformation and (iii) with and without censoring. The results show that with our inverse (power-to-wind) transformation, simpler linear regression models with censoring perform as well as or better than non-linear models with or without the frequently used wind-to-power transformation.
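The core of the power-to-wind idea is to invert the monotone, pre-rated part of the manufacturer's power curve so that a linear model can relate the transformed observations to forecast wind speed. The sketch below uses an invented power curve and synthetic data, and only flags (rather than properly models) the censored observations at zero and nominal power; it is not the study's censored quantile regression setup.

```python
# Minimal sketch: inverse power curve transformation plus a simple linear fit.
import numpy as np

# Hypothetical manufacturer power curve: wind speed (m/s) -> power (kW), rated at 2000 kW
curve_ws = np.array([3, 5, 7, 9, 11, 13, 25], dtype=float)
curve_pw = np.array([0, 150, 600, 1300, 1900, 2000, 2000], dtype=float)

def power_to_wind(power):
    """Invert the strictly increasing (pre-rated) section of the power curve."""
    ramp = slice(0, 6)
    return np.interp(power, curve_pw[ramp], curve_ws[ramp])

rng = np.random.default_rng(3)
forecast_ws = rng.uniform(2, 20, 300)                         # NWP wind speed forecasts
observed_pw = np.clip(2000 / (1 + np.exp(-(forecast_ws - 9)))
                      + rng.normal(0, 100, 300), 0, 2000)     # synthetic observed power

equiv_ws = power_to_wind(observed_pw)                         # power-to-wind transformation

# Ordinary linear fit on the transformed scale; a censored (Tobit-type) regression
# would additionally account for observations stuck at 0 or at nominal power.
uncensored = (observed_pw > 0) & (observed_pw < 2000)
slope, intercept = np.polyfit(forecast_ws[uncensored], equiv_ws[uncensored], 1)
print(slope, intercept)
```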
Abstract:
Recruitment of patients to a clinical trial usually occurs over a period of time, resulting in the steady accumulation of data throughout the trial's duration. Yet, according to traditional statistical methods, the sample size of the trial should be determined in advance, and data collected on all subjects before analysis proceeds. For ethical and economic reasons, the technique of sequential testing has been developed to enable the examination of data at a series of interim analyses. The aim is to stop recruitment to the study as soon as there is sufficient evidence to reach a firm conclusion. In this paper we present the advantages and disadvantages of conducting interim analyses in phase III clinical trials, together with the key steps for the successful implementation of sequential methods in this setting. Examples are given of completed trials that have been carried out sequentially, and references to relevant literature and software are provided.
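A key reason sequential methods need explicit stopping boundaries is that repeatedly testing accumulating data at a fixed significance level inflates the type I error. The simulation sketch below demonstrates that inflation under the null hypothesis; it is an illustration of the problem, not a group-sequential design, and all numbers are arbitrary.

```python
# Minimal sketch: simulated inflation of type I error from unadjusted interim looks.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n_trials, looks, n_per_look = 5000, 5, 40
alpha = 0.05
z_naive = stats.norm.ppf(1 - alpha / 2)       # unadjusted two-sided 5% critical value

rejected = 0
for _ in range(n_trials):
    data = rng.normal(0.0, 1.0, looks * n_per_look)   # null: no treatment effect
    for k in range(1, looks + 1):
        sample = data[: k * n_per_look]
        z = sample.mean() / (sample.std(ddof=1) / np.sqrt(len(sample)))
        if abs(z) > z_naive:                  # stop early and declare significance
            rejected += 1
            break

print("empirical type I error with 5 unadjusted looks:", rejected / n_trials)
# Noticeably above the nominal 0.05, which is why sequential boundaries are needed.
```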