49 results for 340402 Econometric and Statistical Methods
Abstract:
G3B3 and G2MP2 calculations using Gaussian 03 have been carried out to investigate the protonation preferences for phenylboronic acid. All nine heavy atoms have been protonated in turn. With both methodologies, the two lowest protonation energies are obtained with the proton located either at the ipso carbon atom or at a hydroxyl oxygen atom. Within the G3B3 formalism, the lowest-energy configuration, by 4.3 kcal·mol⁻¹, is found when the proton is located at the ipso carbon, rather than at the electronegative oxygen atom. In the resulting structure, the phenyl ring has lost a significant amount of aromaticity. By contrast, calculations with G2MP2 show that protonation at the hydroxyl oxygen atom is favored by 7.7 kcal·mol⁻¹. Calculations using the polarizable continuum model (PCM) solvent method also give preference to protonation at the oxygen atom when water is used as the solvent. The preference for protonation at the ipso carbon found by the more accurate G3B3 method is unexpected, and its implications for Suzuki coupling are discussed. © 2006 Wiley Periodicals, Inc.
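As a small worked example of the unit conversion behind such comparisons, the snippet below converts a difference in composite-method total energies (in hartree) to kcal/mol; the two energy values are hypothetical placeholders chosen only so that their difference reproduces the 4.3 kcal/mol quoted above, not the G3B3 energies from the paper.

```python
# Hedged illustration: converting a total-energy difference to kcal/mol.
# The hartree values are hypothetical, NOT the G3B3 energies from the paper.
HARTREE_TO_KCAL_PER_MOL = 627.5095

e_ipso_carbon = -408.123456      # placeholder total energy, proton on ipso carbon (hartree)
e_hydroxyl_oxygen = -408.116604  # placeholder total energy, proton on hydroxyl oxygen (hartree)

delta = (e_hydroxyl_oxygen - e_ipso_carbon) * HARTREE_TO_KCAL_PER_MOL
print(f"Protonation at the ipso carbon favoured by {delta:.1f} kcal/mol")  # ~4.3
```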
Abstract:
Ranald Roderick Macdonald (1945-2007) was an important contributor to mathematical psychology in the UK, as a referee and action editor for the British Journal of Mathematical and Statistical Psychology and as a participant in and organizer of the British Psychological Society's Mathematics, Statistics and Computing Section meetings. This appreciation argues that his most important contribution was to the foundations of significance testing, where his concern about what information was relevant in interpreting the results of significance tests led him to be a persuasive advocate for the 'Weak Fisherian' form of hypothesis testing.
Abstract:
In this paper, we address issues in the segmentation of remotely sensed LIDAR (LIght Detection And Ranging) data. The LIDAR data, which were captured by an airborne laser scanner, contain 2.5-dimensional (2.5D) terrain surface height information, e.g. houses, vegetation, flat fields, rivers, basins, etc. Our aim in this paper is to segment ground (flat field) from non-ground (houses and high vegetation) in hilly urban areas. By projecting the 2.5D data onto a surface, we obtain a texture map as a grey-level image. Based on this image, Gabor wavelet filters are applied to generate Gabor wavelet features. These features are then grouped into various windows. Within these windows, a combination of first- and second-order statistics is used as a measure to determine the surface properties. The test results show that ground areas can successfully be segmented from LIDAR data. Most buildings and high vegetation can be detected. In addition, the Gabor wavelet transform can partially remove hill or slope effects in the original data by tuning the Gabor parameters.
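As a rough illustration only (the paper's own implementation is not reproduced here), the sketch below shows how per-window first- and second-order statistics of Gabor responses might be computed from a rasterised LIDAR height map. The function name, window size, frequencies and orientations are illustrative assumptions, and scikit-image stands in for whatever toolchain the authors used.

```python
# A minimal sketch of Gabor-based texture features for ground / non-ground
# segmentation of a gridded LIDAR height map. Assumes the 2.5D point cloud
# has already been rasterised into `height_map`.
import numpy as np
from skimage.filters import gabor
from skimage.util import view_as_windows

def gabor_window_features(height_map, frequencies=(0.1, 0.2, 0.4),
                          orientations=4, window=16):
    """Return per-window mean/std of Gabor responses (first/second-order stats)."""
    responses = []
    for f in frequencies:
        for k in range(orientations):
            theta = k * np.pi / orientations
            real, imag = gabor(height_map, frequency=f, theta=theta)
            responses.append(np.hypot(real, imag))  # magnitude of the response

    features = []
    for resp in responses:
        # Non-overlapping windows; each window is summarised by its
        # first-order (mean) and second-order (standard deviation) statistics.
        wins = view_as_windows(resp, (window, window), step=window)
        features.append(wins.mean(axis=(2, 3)))
        features.append(wins.std(axis=(2, 3)))
    return np.stack(features, axis=-1)  # (rows, cols, n_features) per window
```

A simple decision rule on these features (e.g. low response variance marking flat ground) could then label each window; the paper's actual classification step is not reproduced here.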
Abstract:
This article compares the results obtained from using two different methodological approaches to elicit teachers’ views on their professional role, the key challenges and their aspirations for the future. One approach used a postal/online questionnaire, while the other used telephone interviews, posing a selection of the same questions. The research was carried out on two statistically comparable samples of teachers in England in spring 2004. Significant differences in responses were observed which seem to be attributable to the methods employed. In particular, more ‘definite’ responses were obtained in the interviews than in response to the questionnaire. This article reviews the comparative outcomes in the context of existing research and explores why the separate methods may have produced significantly different responses to the same questions.
Abstract:
The development of a combined engineering and statistical Artificial Neural Network model of UK domestic appliance load profiles is presented. The model uses diary-style appliance use data and a survey questionnaire collected from 51 suburban households and 46 rural households during the summers of 2010 and 2011 respectively. It also incorporates measured energy data and is sensitive to socioeconomic, physical dwelling and temperature variables. A prototype model is constructed in MATLAB using a two-layer feed-forward network with back-propagation training and a 12:10:24 architecture. Model outputs include appliance load profiles which can be applied to the fields of energy planning (microrenewables and smart grids), building simulation tools and energy policy.
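A minimal sketch of a network with the same 12:10:24 shape, assuming 12 explanatory variables in and 24 load values out; scikit-learn is used here purely for illustration (the study's prototype was built in MATLAB), and the input features and training data below are random placeholders.

```python
# Hedged sketch: 12 inputs -> 10 hidden units -> 24-value daily load profile.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.random((97, 12))   # 97 households x 12 explanatory variables (dummy data)
y = rng.random((97, 24))   # 24 load values per household (dummy data)

model = MLPRegressor(hidden_layer_sizes=(10,),  # single hidden layer of 10 units
                     activation="logistic",     # sigmoid units, as in classic BP nets
                     solver="lbfgs",            # gradient-based training
                     max_iter=2000, random_state=0)
model.fit(X, y)                                  # trains the feed-forward network
profile = model.predict(X[:1])                   # predicted 24-value load profile
```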
Abstract:
In this study, we compare two different cyclone-tracking algorithms to detect North Atlantic polar lows, which are very intense mesoscale cyclones. Both approaches include spatial filtering, detection, tracking and constraints specific to polar lows. The first method uses digital bandpass-filtered mean sea level pressure (MSLP) fields in the spatial range of 200–600 km and is especially designed for polar lows. The second method also uses a bandpass filter but is based on the discrete cosine transform (DCT) and can be applied to MSLP and vorticity fields. The latter was originally designed for cyclones in general and has been adapted to polar lows for this study. Both algorithms are applied to the same regional climate model output fields from October 1993 to September 1995, produced from dynamical downscaling of the NCEP/NCAR reanalysis data. Comparisons between these two methods show that different filters lead to different numbers and locations of tracks. The DCT is more precise in scale separation than the digital filter, and the results of this study suggest that it is better suited for the bandpass filtering of MSLP fields. The detection and tracking parts also influence the numbers of tracks, although less critically. After a selection process that applies criteria to identify tracks of potential polar lows, differences between the two methods are still visible, though the major systems are identified by both.
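A minimal sketch (assumptions, not the study's code) of a DCT-based spatial band-pass filter applied to a 2-D MSLP field, retaining the 200-600 km scales used to isolate polar-low signatures; the uniform grid spacing `dx_km` and the mode-to-wavelength relation used are illustrative choices.

```python
# Hedged sketch of DCT band-pass filtering of a 2-D field.
import numpy as np
from scipy.fft import dctn, idctn

def dct_bandpass(field, dx_km, wl_min_km=200.0, wl_max_km=600.0):
    """Band-pass a 2-D field by zeroing DCT modes outside the wavelength band."""
    ny, nx = field.shape
    Ly, Lx = ny * dx_km, nx * dx_km
    coeffs = dctn(field, norm="ortho")

    # Wavelength of DCT mode (m, n): lambda = 1 / sqrt((m / 2Ly)^2 + (n / 2Lx)^2)
    m = np.arange(ny)[:, None]
    n = np.arange(nx)[None, :]
    with np.errstate(divide="ignore"):
        k = np.sqrt((m / (2.0 * Ly)) ** 2 + (n / (2.0 * Lx)) ** 2)
        wavelength = np.where(k > 0, 1.0 / k, np.inf)

    mask = (wavelength >= wl_min_km) & (wavelength <= wl_max_km)
    return idctn(coeffs * mask, norm="ortho")  # back to physical space
```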
Abstract:
Our new molecular understanding of immune priming states that dendritic cell activation is absolutely pivotal for expansion and differentiation of naïve T lymphocytes, and it follows that understanding DC activation is essential to understand and design vaccine adjuvants. This chapter describes how dendritic cells can be used as a core tool to provide detailed quantitative and predictive immunomics information about how adjuvants function. The role of distinct antigen, costimulation, and differentiation signals from activated DC in priming is explained. Four categories of input signals which control DC activation – direct pathogen detection, sensing of injury or cell death, indirect activation via endogenous proinflammatory mediators, and feedback from activated T cells – are compared and contrasted. Practical methods for studying adjuvants using DC are summarised and the importance of DC subset choice, simulating T cell feedback, and use of knockout cells is highlighted. Finally, five case studies are examined that illustrate the benefit of DC activation analysis for understanding vaccine adjuvant function.
Abstract:
The paper considers second kind integral equations of the form x(s) = y(s) + z(s)∫ k(s − t)x(t) dt, with the integral taken over the real line (abbreviated x = y + Kzx), in which the factor z is bounded but otherwise arbitrary, so that equations of Wiener-Hopf type are included as a special case. Conditions on a set W are obtained such that a generalized Fredholm alternative is valid: if W satisfies these conditions and I − Kz is injective for each z ∈ W, then I − Kz is invertible for each z ∈ W and the operators (I − Kz)−1 are uniformly bounded. As a special case some classical results relating to Wiener-Hopf operators are reproduced. A finite section version of the above equation (with the range of integration reduced to [−a, a]) is considered, as are projection and iterated projection methods for its solution. The operators (I − Kz,a)−1 (where Kz,a denotes the finite section version of Kz) are shown to be uniformly bounded (in z and a) for all a sufficiently large. Uniform stability and convergence results, for the projection and iterated projection methods, are obtained. The argument generalizes an idea in collectively compact operator theory. Some new results in this theory are obtained and applied to the analysis of projection methods for the above equation when z is compactly supported and k(s − t) is replaced by the general kernel k(s, t). A boundary integral equation of the above type, which models outdoor sound propagation over inhomogeneous level terrain, illustrates the application of the theoretical results developed.
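For readability, the two equations described above can be written out explicitly; the domain of integration for the full equation and the notation Kz,a for the finite-section operator are assumptions made here for illustration, not notation taken from the paper.

```latex
% Restatement of the equations described in the abstract (assumed form).
\[
  x(s) = y(s) + z(s)\int_{-\infty}^{\infty} k(s-t)\,x(t)\,\mathrm{d}t ,
  \qquad \text{abbreviated } x = y + K_z x ,
\]
\[
  x_a(s) = y(s) + z(s)\int_{-a}^{a} k(s-t)\,x_a(t)\,\mathrm{d}t ,
  \qquad \text{abbreviated } x_a = y + K_{z,a} x_a \quad \text{(finite section)} .
\]
```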
Abstract:
Objective To model the overall and income specific effect of a 20% tax on sugar sweetened drinks on the prevalence of overweight and obesity in the UK. Design Econometric and comparative risk assessment modelling study. Setting United Kingdom. Population Adults aged 16 and over. Intervention A 20% tax on sugar sweetened drinks. Main outcome measures The primary outcomes were the overall and income specific changes in the number and percentage of overweight (body mass index ≥25) and obese (≥30) adults in the UK following the implementation of the tax. Secondary outcomes were the effect by age group (16-29, 30-49, and ≥50 years) and by UK constituent country. The revenue generated from the tax and the income specific changes in weekly expenditure on drinks were also estimated. Results A 20% tax on sugar sweetened drinks was estimated to reduce the number of obese adults in the UK by 1.3% (95% credible interval 0.8% to 1.7%) or 180 000 (110 000 to 247 000) people and the number who are overweight by 0.9% (0.6% to 1.1%) or 285 000 (201 000 to 364 000) people. The predicted reductions in prevalence of obesity for income thirds 1 (lowest income), 2, and 3 (highest income) were 1.3% (0.3% to 2.0%), 0.9% (0.1% to 1.6%), and 2.1% (1.3% to 2.9%). The effect on obesity declined with age. Predicted annual revenue was £276m (£272m to £279m), with estimated increases in total expenditure on drinks for income thirds 1, 2, and 3 of 2.1% (1.4% to 3.0%), 1.7% (1.2% to 2.2%), and 0.8% (0.4% to 1.2%). Conclusions A 20% tax on sugar sweetened drinks would lead to a reduction in the prevalence of obesity in the UK of 1.3% (around 180 000 people). The greatest effects may occur in young people, with no significant differences between income groups. Both effects warrant further exploration. Taxation of sugar sweetened drinks is a promising population measure to target population obesity, particularly among younger adults.
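As a hedged illustration of the first step in this kind of econometric model (translating a price rise into a change in consumption and energy intake), the toy calculation below uses assumed values throughout; the elasticity, pass-through, baseline intake and energy density are placeholders, not the study's estimates.

```python
# Toy calculation: price rise -> consumption change -> energy change.
price_rise = 0.20            # 20% tax, assumed fully passed through to prices
own_price_elasticity = -0.9  # assumed own-price elasticity of demand for SSBs
baseline_ml_per_day = 150.0  # assumed baseline sugar-sweetened drink intake (ml/day)
kcal_per_ml = 0.42           # assumed energy density of a typical SSB

change_in_consumption = own_price_elasticity * price_rise    # -0.18, i.e. -18%
delta_ml = baseline_ml_per_day * change_in_consumption       # about -27 ml/day
delta_kcal = delta_ml * kcal_per_ml                          # about -11 kcal/day
print(f"Estimated change: {delta_ml:.0f} ml/day, {delta_kcal:.1f} kcal/day")
```

A full model of this type would then feed the energy change into a weight and prevalence model by income group, which is not reproduced here.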
Abstract:
Objectives To model the impact on chronic disease of a tax on UK food and drink that internalises the wider costs to society of greenhouse gas (GHG) emissions, and to estimate the potential revenue. Design An econometric and comparative risk assessment modelling study. Setting The UK. Participants The UK adult population. Interventions Two tax scenarios are modelled: (A) a tax of £2.72/tonne carbon dioxide equivalents (tCO2e)/100 g product applied to all food and drink groups with above-average GHG emissions; (B) as in scenario (A), but food groups with emissions below average are subsidised to create a tax-neutral scenario. Outcome measures Primary outcomes are the change in UK population mortality from chronic diseases following the implementation of each taxation strategy, the change in UK GHG emissions and the predicted revenue. Secondary outcomes are the changes to the micronutrient composition of the UK diet. Results Scenario (A) results in 7770 (95% credible interval 7150 to 8390) deaths averted and a reduction in GHG emissions of 18 683 (14 665 to 22 889) ktCO2e/year. Estimated annual revenue is £2.02 (£1.98 to £2.06) billion. Scenario (B) results in 2685 (1966 to 3402) extra deaths and a reduction in GHG emissions of 15 228 (11 245 to 19 492) ktCO2e/year. Conclusions Incorporating the societal cost of GHG emissions into the price of foods could save 7770 lives in the UK each year, reduce food-related GHG emissions and generate substantial tax revenue. The revenue-neutral scenario (B) demonstrates that sustainability and health goals are not always aligned. Future work should focus on investigating the health impact by population subgroup and on designing fiscal strategies to promote both sustainable and healthy diets.
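A minimal sketch of scenario (A)'s tax rule as described in the abstract: products in food groups with above-average GHG emissions are taxed at £2.72 per tonne of CO2e embodied per 100 g of product. The emissions intensities below are illustrative placeholders, not the study's data.

```python
# Hedged sketch of the scenario (A) tax rule with placeholder emissions data.
TAX_RATE_GBP_PER_TCO2E = 2.72

# Hypothetical emissions intensity per 100 g of product, in kg CO2e.
emissions_per_100g_kg = {"beef": 1.7, "cheese": 0.9, "bread": 0.1, "apples": 0.05}
average = sum(emissions_per_100g_kg.values()) / len(emissions_per_100g_kg)

for food, kg_co2e in emissions_per_100g_kg.items():
    if kg_co2e > average:  # only above-average groups are taxed in scenario (A)
        tax_per_100g = TAX_RATE_GBP_PER_TCO2E * (kg_co2e / 1000.0)  # kg -> tonnes
        print(f"{food}: {100 * tax_per_100g:.2f} pence per 100 g")
    else:
        print(f"{food}: untaxed (below-average emissions)")
```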
Abstract:
The weak-constraint inverse for nonlinear dynamical models is discussed and derived in terms of a probabilistic formulation. The well-known result that, for Gaussian error statistics, the minimum of the weak-constraint inverse is equal to the maximum-likelihood estimate is rederived. Then several methods based on ensemble statistics that can be used to find the smoother (as opposed to the filter) solution are introduced and compared to traditional methods. A strong point of the new methods is that they avoid the integration of adjoint equations, which is a complex task for real oceanographic or atmospheric applications. They also avoid iterative searches in a Hilbert space, and error estimates can be obtained without much additional computational effort. The feasibility of the new methods is illustrated in a two-layer quasigeostrophic model.
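A minimal, self-contained sketch of an ensemble-statistics analysis update of the kind such smoother methods build on; the state and observation dimensions, the observation operator and the error levels below are illustrative assumptions, not the paper's configuration. The point it illustrates is that the update uses only the ensemble itself, so no adjoint integration or iterative search is required.

```python
import numpy as np

rng = np.random.default_rng(1)
n_state, n_obs, n_ens = 50, 10, 100

# Prior (forecast) ensemble of model states, one column per member.
ensemble = rng.normal(size=(n_state, n_ens))

# Linear observation operator: observe every 5th state variable (illustrative).
H = np.zeros((n_obs, n_state))
H[np.arange(n_obs), np.arange(0, n_state, 5)] = 1.0

obs = rng.normal(size=n_obs)             # synthetic observations (placeholder)
obs_err_std = 0.5
R = obs_err_std**2 * np.eye(n_obs)       # observation error covariance

# Perturbed observations, one realisation per ensemble member.
perturbed_obs = obs[:, None] + obs_err_std * rng.normal(size=(n_obs, n_ens))

# Error covariances estimated directly from ensemble statistics.
A = ensemble - ensemble.mean(axis=1, keepdims=True)   # ensemble anomalies
HA = H @ A
P_HT = A @ HA.T / (n_ens - 1)                          # P H^T
HPHT = HA @ HA.T / (n_ens - 1)                         # H P H^T

# Kalman-type gain and analysis ensemble (no adjoint model needed).
K = P_HT @ np.linalg.inv(HPHT + R)
analysis = ensemble + K @ (perturbed_obs - H @ ensemble)
print("analysis ensemble shape:", analysis.shape)
```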
Abstract:
The growing human population will require a significant increase in agricultural production. This challenge is made more difficult by the fact that changes in the climatic and environmental conditions under which crops are grown have resulted in the appearance of new diseases, whereas genetic changes within the pathogen have resulted in the loss of previously effective sources of resistance. To help meet this challenge, advanced genetic and statistical methods of analysis have been used to identify new resistance genes through global screens, and studies of plant-pathogen interactions have been undertaken to uncover the mechanisms by which disease resistance is achieved. The informed deployment of major, race-specific and partial, race-nonspecific resistance, either by conventional breeding or transgenic approaches, will enable the production of crop varieties with effective resistance without impacting on other agronomically important crop traits. Here, we review these recent advances and progress towards the ultimate goal of developing disease-resistant crops.