893 results for data analysis: algorithms and implementation
Abstract:
The MATLAB model is contained within the compressed folders (versions are available as .zip and .tgz). The model uses MERRA reanalysis data (more than 34 years available) to estimate the hourly aggregated wind power generation for a predefined (fixed) distribution of wind farms. A ready-made example is included for the wind farm distribution of Great Britain in April 2014 ("CF.dat"); it consists of an hourly time series of GB-total capacity factor spanning the period 1980-2013 inclusive. Given the global nature of reanalysis data, the model can be applied to any specified distribution of wind farms in any region of the world. Users are, however, strongly advised to bear in mind the limitations of reanalysis data when using this model/data. This is discussed in our paper: Cannon, Brayshaw, Methven, Coker, Lenaghan. "Using reanalysis data to quantify extreme wind power generation statistics: a 33 year case study in Great Britain". Submitted to Renewable Energy in March 2014. Additional information about the model is contained in the model code itself, in the accompanying ReadMe file, and on our website: http://www.met.reading.ac.uk/~energymet/data/Cannon2014/
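As a rough illustration of the aggregation step such a model performs, the Python sketch below converts hub-height wind speeds for a fixed set of farms into a capacity-weighted hourly capacity factor. This is not the authors' MATLAB code; the power-curve parameters, farm capacities and synthetic wind speeds are illustrative assumptions.

```python
# Minimal sketch: reanalysis wind speeds for a fixed farm distribution
# -> hourly aggregated capacity factor. All parameters are hypothetical.
import numpy as np

def power_curve(v, v_in=3.5, v_rated=13.0, v_out=25.0):
    """Normalized turbine output (0..1) for hub-height wind speed v in m/s."""
    cf = np.clip((v**3 - v_in**3) / (v_rated**3 - v_in**3), 0.0, 1.0)
    cf[(v < v_in) | (v > v_out)] = 0.0
    return cf

def hourly_capacity_factor(speeds, capacities):
    """speeds: array (hours, farms) of hub-height wind speeds; capacities:
    installed capacity per farm. Returns the capacity-weighted hourly CF."""
    return power_curve(speeds) @ capacities / capacities.sum()

rng = np.random.default_rng(0)
speeds = rng.gamma(shape=4.0, scale=2.0, size=(24, 3))   # 24 h, 3 farms
capacities = np.array([50.0, 120.0, 30.0])               # MW, hypothetical
print(hourly_capacity_factor(speeds, capacities))
```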
Abstract:
A parameterization of mesoscale eddies in coarse-resolution ocean general circulation models (GCMs) is formulated and implemented using a residual-mean formalism. In that framework, mean buoyancy is advected by the residual velocity (the sum of the Eulerian and eddy-induced velocities) and modified by a residual flux which accounts for the diabatic effects of mesoscale eddies. The residual velocity is obtained by stepping forward a residual-mean momentum equation in which eddy stresses appear as forcing terms. Study of the spatial distribution of eddy stresses, derived by using them as control parameters to "fit" the residual-mean model to observations, supports the idea that eddy stresses can be likened to a vertical down-gradient flux of momentum with a coefficient which is constant in the vertical. The residual eddy flux is set to zero in the ocean interior, where mesoscale eddies are assumed to be quasi-adiabatic, but is parameterized by a horizontal down-gradient diffusivity near the surface, where eddies develop a diabatic component as they stir properties horizontally across steep isopycnals. The residual-mean model is implemented and tested in the MIT general circulation model. It is shown that the resulting model (1) has a climatology that is superior to that obtained using the Gent and McWilliams parameterization scheme with a spatially uniform diffusivity and (2) allows one to significantly reduce the (spurious) horizontal viscosity used in coarse-resolution GCMs.
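Schematically, the closure described in this abstract amounts to three relations (the notation below is ours, added for illustration; ν_e denotes the vertically constant stress coefficient and κ_h the near-surface horizontal diffusivity):

```latex
% Residual velocity: Eulerian mean plus eddy-induced component
\mathbf{u}_{\mathrm{res}} = \bar{\mathbf{u}} + \mathbf{u}^{*}
% Eddy stress as a vertical down-gradient momentum flux,
% with coefficient \nu_e constant in the vertical
\boldsymbol{\tau}_{\mathrm{eddy}} \approx \nu_e \, \partial_z \bar{\mathbf{u}}
% Residual (diabatic) flux: zero in the quasi-adiabatic interior,
% horizontal down-gradient diffusion near the surface
\mathbf{F}_{\mathrm{res}} =
  \begin{cases}
    -\,\kappa_h \, \nabla_h \bar{b} & \text{near the surface} \\
    0 & \text{in the interior}
  \end{cases}
```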
Abstract:
Chongqing is the largest central-government-controlled municipality in China and is now undergoing rapid urbanization. The question remains open: what are the consequences of such rapid urbanization in Chongqing in terms of urban microclimate? An integrated study comprising three different research approaches is adopted in the present paper. By analyzing the observed annual climate data, an average rising trend of 0.10 °C/decade was found for the annual mean temperature from 1951 to 2010 in Chongqing, indicating pronounced urban warming. In addition, two complementary types of field measurements were conducted: fixed weather stations and mobile transverse measurements. Numerical simulations using an in-house-developed program are able to predict the urban air temperature in Chongqing. The urban heat island intensity in Chongqing is stronger in summer than in autumn and winter. The maximum urban heat island intensity occurs at around midnight and can be as high as 2.5 °C. In the daytime, an urban cool island exists. Local greenery has a great impact on the local thermal environment: urban green spaces can reduce urban air temperature and therefore mitigate the urban heat island. The cooling effect of an urban river is limited in Chongqing, as both sides of the river are the most developed areas, but the relative humidity is much higher near the river than in places far from it.
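As a toy illustration of how urban heat island (UHI) intensity is derived from paired fixed-station records, the sketch below differences a hypothetical urban and rural diurnal temperature cycle; the synthetic series are assumptions chosen only to reproduce the qualitative pattern reported (nighttime maximum, daytime cool island), not measured data.

```python
# UHI intensity = urban minus rural air temperature, hour by hour.
# Both diurnal cycles below are synthetic stand-ins for station records.
import numpy as np

hours = np.arange(24)
t_urban = 28 + 4 * np.sin((hours - 9) * np.pi / 12)   # toy urban cycle, deg C
t_rural = 27 + 5 * np.sin((hours - 8) * np.pi / 12)   # toy rural cycle, deg C

uhi = t_urban - t_rural
print("max UHI intensity: %.1f degC at %02d:00" % (uhi.max(), hours[uhi.argmax()]))
print("daytime urban cool island present:", bool((uhi < 0).any()))
```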
Abstract:
The present study aims to investigate the dose-dependent effects of consuming diets enriched in flavonoid-rich and flavonoid-poor fruits and vegetables on the urine metabolome of adults at a ≥1.5-fold increased risk of cardiovascular disease. A single-blind, dose-dependent, parallel randomized controlled dietary intervention was conducted in which volunteers (n = 126) were randomly assigned to one of three diets for 18 weeks: a high-flavonoid diet, a low-flavonoid diet, or the habitual diet as a control. High-resolution LC–MS untargeted metabolomics with minimal sample cleanup was performed using an Orbitrap mass spectrometer. Putative biomarkers which characterize diets with high and low flavonoid content were selected by state-of-the-art data analysis strategies and identified by HR-MS and HR-MS/MS assays. Discrimination between diets was observed by application of two linear mixed models: one including a diet-time interaction effect and the second containing only a time effect. Valerolactones, phenolic acids and their derivatives were among sixteen biomarkers related to the high-flavonoid dietary exposure. Four biomarkers related to the low-flavonoid diet belonged to the family of phenolic acids. For the first time, abscisic acid glucuronide was reported as a biomarker of dietary intake; however, its origin has to be examined in future hypothesis-driven experiments using a more targeted approach. This metabolomic analysis has identified a number of dose-dependent urinary biomarkers (e.g. proline betaine and iberin-N-acetylcysteine) which can be used in future observational and intervention studies to assess flavonoid and non-flavonoid phenolic intakes and compliance with fruit and vegetable interventions.
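The two-model comparison mentioned above can be sketched as follows. This is a minimal illustration using statsmodels; the column names and the log-likelihood criterion are our assumptions, not a reproduction of the study's analysis pipeline.

```python
# Per-feature comparison: diet-by-time interaction vs. time-only mixed model.
import statsmodels.formula.api as smf

def diet_effect(df):
    """df: one row per urine sample, with columns 'intensity' (log feature
    intensity), 'diet', 'week' and 'subject' (volunteer id)."""
    # Model 1: diet-by-time interaction, random intercept per subject
    m_full = smf.mixedlm("intensity ~ diet * week", df,
                         groups=df["subject"]).fit(reml=False)
    # Model 2: time effect only
    m_time = smf.mixedlm("intensity ~ week", df,
                         groups=df["subject"]).fit(reml=False)
    # ML (not REML) fits so log-likelihoods are comparable across different
    # fixed effects; a large gap flags a feature that discriminates diets.
    return m_full.llf - m_time.llf
```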
Abstract:
For a fixed family F of graphs, an F-packing in a graph G is a set of pairwise vertex-disjoint subgraphs of G, each isomorphic to an element of F. Finding an F-packing that maximizes the number of covered edges is a natural generalization of the maximum matching problem, which is just F = {K_2}. In this paper we provide new approximation algorithms and hardness results for the K_r-packing problem, in which F = {K_2, K_3, ..., K_r}. We show that already for r = 3 the K_r-packing problem is APX-complete, and, in fact, we show that it remains so even for graphs with maximum degree 4. On the positive side, we give an approximation algorithm with approximation ratio at most 2 for every fixed r. For r = 3, 4, 5 we obtain better approximations. For r = 3 we obtain a simple 3/2-approximation, matching a known ratio that follows from a more involved algorithm of Halldorsson. For r = 4, we obtain a (3/2 + ε)-approximation, and for r = 5 we obtain a (25/14 + ε)-approximation.
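For intuition about the problem (not the paper's algorithms), here is a naive greedy baseline for r = 3: take vertex-disjoint triangles while any exist, then add a maximal matching on the leftover vertices. The paper's 3/2- and (3/2 + ε)-approximations are different, more involved algorithms.

```python
# Naive greedy baseline for {K_2, K_3}-packing (r = 3), for illustration only.
def greedy_k3_packing(n, edges):
    """Greedily pack vertex-disjoint triangles, then a maximal matching."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    used, packing = set(), []
    # Phase 1: take any available triangle (each K_3 covers 3 edges)
    for u in range(n):
        if u in used:
            continue
        for v in adj[u]:
            if v in used:
                continue
            common = (adj[u] & adj[v]) - used   # vertices completing a triangle
            if common:
                w = min(common)
                packing.append((u, v, w))
                used |= {u, v, w}
                break
    # Phase 2: maximal matching on what remains (each K_2 covers 1 edge)
    for u, v in edges:
        if u not in used and v not in used:
            packing.append((u, v))
            used |= {u, v}
    return packing

# Triangle {0,1,2} plus path 2-3-4: the triangle is taken, then edge (3, 4)
print(greedy_k3_packing(5, [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4)]))
```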
Abstract:
This paper describes the development and evaluation of a sequential injection method to automate the determination of methyl parathion by square-wave adsorptive cathodic stripping voltammetry, exploiting the concept of monosegmented flow analysis to perform in-line sample conditioning and standard addition. Accumulation and stripping steps are made in the sample medium conditioned with 40 mmol L⁻¹ Britton-Robinson buffer (pH 10) in 0.25 mol L⁻¹ NaNO3. The homogenized mixture is injected at a flow rate of 10 µL s⁻¹ toward the flow cell, which is adapted to the capillary of a hanging drop mercury electrode. After a suitable deposition time, the flow is stopped and the potential is scanned from -0.3 to -1.0 V versus Ag/AgCl at a frequency of 250 Hz and a pulse height of 25 mV. The linear dynamic range is observed for methyl parathion concentrations between 0.010 and 0.50 mg L⁻¹, with detection and quantification limits of 2 and 7 µg L⁻¹, respectively. The sampling throughput is 25 h⁻¹ if the in-line standard addition and sample conditioning protocols are followed, but this frequency can be increased up to 61 h⁻¹ if the sample is conditioned off-line and quantified using an external calibration curve. The method was applied to the determination of methyl parathion in spiked water samples, and the accuracy was evaluated both by comparison to high-performance liquid chromatography with UV detection and by recovery percentages. Although no evidence of statistically significant differences between the expected and obtained concentrations was observed, the susceptibility of the method to interference by other pesticides (e.g., parathion, dichlorvos) and natural organic matter (e.g., fulvic and humic acids) means that isolation of the analyte may be required when more complex sample matrices are encountered.
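The standard-addition quantification underlying the in-line protocol reduces to a short calculation: regress peak current on added concentration and extrapolate to zero current. The sketch below illustrates it; the spike levels and currents are hypothetical, and the instrument-control side is not shown.

```python
# Standard-addition calculation: the regression line crosses zero current
# at -C_sample, so C_sample = intercept / slope.
import numpy as np

def standard_addition(added_conc, peak_current):
    """added_conc: spike concentrations added to equal sample aliquots (mg/L);
    peak_current: stripping peak currents. Returns estimated sample conc."""
    slope, intercept = np.polyfit(added_conc, peak_current, 1)
    return intercept / slope

added = np.array([0.0, 0.05, 0.10, 0.20])      # mg/L spiked, hypothetical
current = np.array([0.42, 0.63, 0.84, 1.26])   # peak currents, hypothetical
print(standard_addition(added, current))        # -> 0.10 mg/L
```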
Abstract:
This work aims at combining the postulates of Chaos theory with the classification and predictive capability of Artificial Neural Networks in the field of financial time series prediction. Chaos theory provides valuable qualitative and quantitative tools for deciding on the predictability of a chaotic system. Quantitative measurements based on Chaos theory are used to decide a priori whether a time series, or a portion of a time series, is predictable, while qualitative tools based on Chaos theory are used to provide further observations and analysis on the predictability in cases where the measurements give negative answers. Phase space reconstruction is achieved by time-delay embedding, resulting in multiple embedded vectors. The suggested cognitive approach is inspired by the capability of some chartists to predict the direction of an index by looking at the price time series. Thus, in this work, the calculation of the embedding dimension and the separation in Takens' embedding theorem for phase space reconstruction is not limited to False Nearest Neighbor, Differential Entropy or any other specific method; rather, this work is interested in all embedding dimensions and separations, regarded as the different ways in which different chartists, based on their expectations, look at a time series. Prior to the prediction, the embedded vectors of the phase space are classified with Fuzzy-ART; then, for each class, a backpropagation Neural Network is trained to predict the last element of each vector, with all previous elements of the vector used as features.
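A minimal sketch of the time-delay embedding step follows, with the embedding dimension m and separation tau left as free parameters in the spirit of the paper's "many views" approach; the sine series is a toy stand-in for a price series.

```python
# Takens-style time-delay embedding of a 1-D series.
import numpy as np

def delay_embed(x, m, tau):
    """Embed x into vectors [x[t], x[t+tau], ..., x[t+(m-1)*tau]].
    Returns an array of shape (len(x) - (m-1)*tau, m)."""
    n = len(x) - (m - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(m)])

x = np.sin(np.linspace(0, 20, 200))      # toy stand-in for a price series
vectors = delay_embed(x, m=4, tau=3)
# Per the scheme above: cluster the vectors (Fuzzy-ART), then train one
# backpropagation network per cluster on (first m-1 elements -> last element).
features, target = vectors[:, :-1], vectors[:, -1]
```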
Abstract:
The aim of this study is to evaluate the variation in solar radiation data between different data sources that will be freely available at the Solar Energy Research Center (SERC). The comparison between data sources is carried out for two locations: Stockholm, Sweden and Athens, Greece. For each location, data is gathered for different tilt angles: 0°, 30°, 45° and 60°, facing south. The full dataset is available in two Excel files: "Stockholm annual irradiation" and "Athens annual irradiation". The World Radiation Data Center (WRDC) is taken as the reference for the comparison with the other datasets, because it has the longest recorded time span for Stockholm (1964–2010) and Athens (1964–1986), in the form of average monthly irradiation expressed in kWh/m². The indicator defined for the data comparison is the estimated standard deviation; the mean bias error (MBE) and the root mean square error (RMSE) were also used as statistical indicators for the horizontal solar irradiation data. The variation in solar irradiation data is attributed to three sources: natural (inter-annual) variability, differences between data sources, and differences between calculation models. The inter-annual variation is 140.4 kWh/m² (14.4%) for Stockholm and 124.3 kWh/m² (8.0%) for Athens. The estimated deviation for horizontal solar irradiation is 3.7% for Stockholm and 4.4% for Athens. This estimated deviation is respectively 4.5% and 3.6% for Stockholm and Athens at 30° tilt, 5.2% and 4.5% at 45° tilt, and 5.9% and 7.0% at 60° tilt. NASA's SSE, SAM and RETScreen exhibited the highest deviation from WRDC's data for Stockholm, and Satel-light for Athens. The main source of variation is the difference in horizontal solar irradiation. The variation increases by 1-2% per degree of tilt when different calculation models are used, as in PVSYST and Meteonorm. The location and altitude of the data source did not directly influence the variation with respect to the WRDC data. Further examination is suggested in order to improve the methodology for selecting the location; to examine the functional dependence of ground-reflected radiation on ambient temperature; to assess the variation of ambient temperature and its impact on different solar energy systems; and to assess the impact of variation in solar irradiation and ambient temperature on system output.
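The MBE and RMSE indicators used in the comparison are straightforward to compute; the sketch below applies them to hypothetical monthly irradiation values, not the study's data.

```python
# MBE and RMSE of one data source against a reference (e.g., WRDC).
import numpy as np

def mbe(estimate, reference):
    """Mean bias error: positive when the source overestimates the reference."""
    return np.mean(estimate - reference)

def rmse(estimate, reference):
    return np.sqrt(np.mean((estimate - reference) ** 2))

# Hypothetical monthly horizontal irradiation (kWh/m^2) for one year
wrdc   = np.array([12, 25, 62, 105, 152, 168, 160, 120, 72, 35, 14, 8.0])
source = np.array([13, 27, 60, 110, 158, 172, 155, 118, 75, 33, 15, 9.0])
print("MBE: %.1f kWh/m^2" % mbe(source, wrdc))
print("RMSE: %.1f kWh/m^2 (%.1f%% of the monthly mean)"
      % (rmse(source, wrdc), 100 * rmse(source, wrdc) / wrdc.mean()))
```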
Abstract:
Background: The gap between what is known and what is practiced results in health service users not benefiting from advances in healthcare, and in unnecessary costs. A supportive context is considered a key element for the successful implementation of evidence-based practices (EBP). No tools were available for the systematic mapping of aspects of organizational context influencing the implementation of EBPs in low- and middle-income countries (LMICs); this project therefore aimed to develop and psychometrically validate a tool for this purpose. Methods: The development of the Context Assessment for Community Health (COACH) tool was premised on the context dimension in the Promoting Action on Research Implementation in Health Services framework, and is a derivative product of the Alberta Context Tool. Its development was undertaken in Bangladesh, Vietnam, Uganda, South Africa and Nicaragua in six phases: (1) defining dimensions and developing a draft tool, (2) content validity amongst in-country expert panels, (3) content validity amongst international experts, (4) response process validity, (5) translation and (6) evaluation of psychometric properties amongst 690 health workers in the five countries. Results: The tool was validated for use amongst physicians, nurses/midwives and community health workers. The six phases of development resulted in a good fit between the theoretical dimensions of the COACH tool and its psychometric properties. The tool has 49 items measuring eight aspects of context: Resources, Community engagement, Commitment to work, Informal payment, Leadership, Work culture, Monitoring services for action and Sources of knowledge. Conclusions: Aspects of organizational context identified as influencing the implementation of EBPs in high-income settings were also found to be relevant in LMICs. However, there were additional aspects of context of relevance in LMICs, specifically Resources, Community engagement, Commitment to work and Informal payment. Use of the COACH tool will allow for systematic description of the local healthcare context prior to implementing healthcare interventions, so that implementation strategies can be tailored, or as part of the evaluation of implemented healthcare interventions, thus allowing deeper insights into the process of implementing EBPs in LMICs.
Abstract:
Institutions seeking to increase graduate enrollment consider incentivizing program growth. This report outlines ways that institutions allow graduate programs to keep surplus revenue, including tuition rebates, funding proportional to credit-hours, and decreased tax rates. It also examines scholarship programs created to increase admitted graduate student yield, new program offerings, and ongoing unit review.