29 results for: social value of the place
in CentAUR: Central Archive, University of Reading - UK
Abstract:
University students suffer from variable sleep patterns, including insomnia;[1] furthermore, the highest incidence of herbal use appears to be among college graduates.[2] Our objective was to test the perception of the safety and value of herbal medicine relative to conventional medicine for the treatment of insomnia in a non-pharmacy student population. We used an experimental design and bespoke vignettes that relayed the same effectiveness information to test our hypothesis that students would give higher ratings of safety and value to the herbal product than to the conventional medicine. We tested a further hypothesis that the addition of side-effect information would lower perceptions of the safety and value of the herbal product to a greater extent than those of the conventional medicine.
Abstract:
Background: Indiscriminate social approach behaviour is a salient aspect of the Williams syndrome (WS) behavioural phenotype. The present study examines approach behaviour in preschoolers with WS and evaluates the role of the face in WS social approach behaviour. Method: Ten preschoolers with WS (aged 3-6 years) and two groups of typically developing children, matched to the WS group on chronological or mental age, participated in an observed play session. The play session incorporated social and non-social components, including two components that assessed approach behaviour towards strangers: one in which the stranger’s face could be seen and one in which the stranger’s face was covered. Results: In response to the non-social aspects of the play session, the WS group behaved similarly to both control groups. In contrast, the preschoolers with WS were significantly more willing than either control group to engage with a stranger, even when the stranger’s face could not be seen. Conclusion: The findings challenge the hypothesis that an unusual attraction to the face directly motivates social approach behaviour in individuals with WS.
Abstract:
This paper deconstructs the relationship between the Environmental Sustainability Index (ESI) and national income. The ESI attempts to provide a single figure that encapsulates 'environmental sustainability' for each country included in the analysis; this, allied with a 'league table' format intended to name and shame bad performers, has resulted in widespread reporting in the popular press of a number of countries. In essence, the higher the value of the ESI, the more 'environmentally sustainable' a country is deemed to be. A logical progression beyond the use of the ESI to publicise environmental sustainability is its use within a more analytical context. Thus an index designed to simplify in order to have an impact on policy is used to try to understand causes of good and bad performance in environmental sustainability. For example, the creators of the ESI claim that the ESI is related to GDP per capita (adjusted for purchasing power parity) such that the ESI increases linearly with wealth. While this may in a sense be a comforting picture, do the variables within the ESI allow for alternatives to this story, and if they do, what are the repercussions for those producing such indices for broad consumption among policy makers, managers, the press, etc.? The latter point is especially important given the appetite for such indices among non-specialists; for all their weaknesses, the ESI and other such aggregated indices will not go away. (C) 2007 Elsevier Ltd. All rights reserved.
Abstract:
Field studies were carried out on the water and sediment dynamics of the tropical, macro-tidal Daly Estuary. The estuary is shallow, very turbid, about 100 km long, and funnel-shaped at the entrance. In the wet, high-flow season, normal tidal ranges can be suppressed in the estuary, depending on inflow rates, and freshwater becomes dominant up to the mouth. At that time a fraction of the fine sediment load is exported offshore as a bottom-hugging nepheloid layer after the sediment falls out of suspension from the thin, near-surface river plume. The remaining fraction and the riverine coarse sediment form a large sediment bar 10 km long, up to 6 m in height, extending across the whole width of the channel near the mouth. This bar, as well as shoals in the estuary, partially ponds the mid- to upper estuary. The bar builds up from the deposition of riverine sediment during a wet season with high runoff and can raise the mean water level in the upper estuary by up to 2 m in the low-flow season. This ponding effect takes about three successive dry years to disappear, as the sediment forming the bar is redistributed throughout the estuary by tidal pumping of fine and coarse sediment during the dry (low-flow) season. The swift reversal of the tidal currents from ebb to flood results in macro-turbulence that lasts about 20 min. Bed-load transport is preferentially landward and occurs only for water currents greater than 0.6 m s−1. This high threshold velocity suggests that the sand may be cemented by the mud. The Daly Estuary is thus a leaky sediment trap with an efficiency that varies both seasonally and inter-annually. (c) 2006 Elsevier Ltd. All rights reserved.
Abstract:
A vertical conduction current flows in the atmosphere as a result of the global atmospheric electric circuit. The current at the surface consists of the conduction current and a locally generated displacement current, which are often approximately equal in magnitude. A method of separating the two currents using two collectors of different geometry is investigated. The picoammeters connected to the collectors have an RC time constant of approximately 3 s, permitting the investigation of higher-frequency air-earth current changes than previously achieved. The displacement current component of the air-earth current derived from the instrument agrees with calculations using simultaneous data from a co-located fast-response electric field mill. The mean value of the non-displacement current measured over 9 h was 1.76 ± 0.002 pA m−2. (c) 2006 American Institute of Physics.
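The locally generated displacement current mentioned in this abstract follows from J_d = ε0 dE/dt, which is why a fast-response field mill can be used as a cross-check. A minimal finite-difference sketch of that calculation (the sampling interval and field values here are purely illustrative, not taken from the instrument described):

```python
EPS0 = 8.854e-12  # permittivity of free space, F/m

def displacement_current_density(e_field, dt):
    """Estimate the displacement current density J_d = eps0 * dE/dt
    (in A/m^2) by finite differences from an electric field time
    series e_field (V/m) sampled every dt seconds."""
    return [EPS0 * (e2 - e1) / dt for e1, e2 in zip(e_field, e_field[1:])]

# A field ramping at 100 V/m per second gives J_d = eps0 * 100 A/m^2.
jd = displacement_current_density([0.0, 10.0, 20.0, 30.0], dt=0.1)
```

Comparing such a field-mill-derived J_d against the collector measurement isolates the conduction component by subtraction.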
Abstract:
This paper considers the problem of estimation when one of a number of populations, assumed normal with known common variance, is selected on the basis of it having the largest observed mean. Conditional on selection of the population, the observed mean is a biased estimate of the true mean. This problem arises in the analysis of clinical trials in which selection is made between a number of experimental treatments that are compared with each other either with or without an additional control treatment. Attempts to obtain approximately unbiased estimates in this setting have been proposed by Shen [2001. An improved method of evaluating drug effect in a multiple dose clinical trial. Statist. Medicine 20, 1913–1929] and Stallard and Todd [2005. Point estimates and confidence regions for sequential trials involving selection. J. Statist. Plann. Inference 135, 402–419]. This paper explores the problem in the simple setting in which two experimental treatments are compared in a single analysis. It is shown that in this case the estimate of Stallard and Todd is the maximum-likelihood estimate (m.l.e.), and this is compared with the estimate proposed by Shen. In particular, it is shown that the m.l.e. has infinite expectation whatever the true value of the mean being estimated. We show that there is no conditionally unbiased estimator, and propose a new family of approximately conditionally unbiased estimators, comparing these with the estimators suggested by Shen.
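The conditional bias described above is easy to see numerically: even when the experimental treatments have identical true means, the observed mean of the arm selected for being largest overestimates the truth. A minimal Monte Carlo sketch, with all numbers illustrative (two arms, common known variance, as in the simple setting of the paper):

```python
import random

def mean_of_selected(mu1, mu2, sigma, n_sims=200_000, seed=1):
    """Average observed mean of whichever of two normal populations
    (common known sigma) shows the larger sample value, i.e. the
    naive estimator conditioned on selection."""
    rng = random.Random(seed)
    return sum(max(rng.gauss(mu1, sigma), rng.gauss(mu2, sigma))
               for _ in range(n_sims)) / n_sims

# With equal true means of 0 and sigma = 1 the selected mean is biased
# upward: E[max(X1, X2)] = sigma / sqrt(pi), roughly 0.564.
bias = mean_of_selected(0.0, 0.0, 1.0)
```

The estimators discussed in the abstract aim to remove exactly this kind of conditional bias.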
Abstract:
The climate belongs to the class of non-equilibrium forced and dissipative systems, for which most results of quasi-equilibrium statistical mechanics, including the fluctuation-dissipation theorem, do not apply. In this paper we show for the first time how the Ruelle linear response theory, developed for studying rigorously the impact of perturbations on general observables of non-equilibrium statistical mechanical systems, can be applied with great success to analyze the climatic response to general forcings. The crucial value of the Ruelle theory lies in the fact that it allows one to compute the response of the system in terms of expectation values of explicit and computable functions of the phase space, averaged over the invariant measure of the unperturbed state. We choose as a test bed a classical version of the Lorenz 96 model, which, in spite of its simplicity, has a well-recognized prototypical value: it is a spatially extended one-dimensional model and presents the basic ingredients of the actual atmosphere, such as dissipation, advection and the presence of an external forcing. We recapitulate the main aspects of the general response theory and propose some new general results. We then analyze the frequency dependence of the response of both local and global observables to perturbations having localized as well as global spatial patterns. We derive analytically several properties of the corresponding susceptibilities, such as asymptotic behavior, validity of Kramers-Kronig relations, and sum rules, whose main ingredient is the causality principle. We show that all the coefficients of the leading asymptotic expansions, as well as the integral constraints, can be written as linear functions of parameters that describe the unperturbed properties of the system, such as its average energy.
Some newly obtained empirical closure equations for such parameters allow these properties to be defined as explicit functions of the unperturbed forcing parameter alone for a general class of chaotic Lorenz 96 models. We then verify the theoretical predictions against the outputs of the simulations to a high degree of precision. The theory is used to explain differences in the response of local and global observables, to define the intensive properties of the system, which do not depend on the spatial resolution of the Lorenz 96 model, and to generalize the concept of climate sensitivity to all time scales. We also show how to reconstruct the linear Green function, which maps perturbations with general time patterns into changes in the expectation value of the considered observable for finite as well as infinite time. Finally, we propose a simple yet general methodology to study general climate change problems on virtually any time scale, by resorting to only well-selected simulations and by taking full advantage of ensemble methods. The specific case of the response of the globally averaged surface temperature to a general pattern of change of the CO2 concentration is discussed. We believe that the proposed approach may constitute a mathematically rigorous and practically very effective way to approach the problem of climate sensitivity, climate prediction, and climate change from a radically new perspective.
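The Lorenz 96 test bed used above is defined by dx_i/dt = (x_{i+1} − x_{i−2}) x_{i−1} − x_i + F with cyclic indices, combining advection (the quadratic term), dissipation (−x_i) and external forcing (F). A self-contained sketch of the model with a standard RK4 integrator; the forcing, resolution and step size below are conventional choices for this model, not necessarily those of the paper:

```python
def l96_tendency(x, F):
    """Tendency of the Lorenz 96 model:
    dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F, cyclic in i."""
    n = len(x)
    return [(x[(i + 1) % n] - x[(i - 2) % n]) * x[(i - 1) % n] - x[i] + F
            for i in range(n)]

def rk4_step(x, F, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = l96_tendency(x, F)
    k2 = l96_tendency([xi + 0.5 * dt * k for xi, k in zip(x, k1)], F)
    k3 = l96_tendency([xi + 0.5 * dt * k for xi, k in zip(x, k2)], F)
    k4 = l96_tendency([xi + dt * k for xi, k in zip(x, k3)], F)
    return [xi + dt / 6.0 * (a + 2 * b + 2 * c + d)
            for xi, a, b, c, d in zip(x, k1, k2, k3, k4)]

# Spin up a 40-variable model at the conventional chaotic forcing F = 8.
F, n, dt = 8.0, 40, 0.05
x = [F] * n
x[0] += 0.01  # perturb the unstable steady state x_i = F
for _ in range(2000):  # integrate for 100 model time units
    x = rk4_step(x, F, dt)
energy = 0.5 * sum(xi * xi for xi in x) / n  # average energy per site
```

Observables such as the average energy per site computed here are exactly the kind of unperturbed-state quantities that the closure equations in the abstract parameterize.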
Abstract:
The differential phase (ΦDP) measured by polarimetric radars is recognized to be a very good indicator of path-integrated attenuation by rain. Moreover, if a linear relationship is assumed between the specific differential phase (KDP) and the specific attenuation (AH) and specific differential attenuation (ADP), then attenuation can easily be corrected. The coefficients of proportionality, γH and γDP, are, however, known to depend in rain upon drop temperature, drop shapes, drop size distribution, and the presence of large drops causing Mie scattering. In this paper, the authors extensively apply a physically based method, often referred to as the “Smyth and Illingworth constraint,” which uses the constraint that the value of the differential reflectivity ZDR on the far side of the storm should be low to retrieve the γDP coefficient. More than 30 convective episodes observed by the French operational C-band polarimetric Trappes radar during two summers (2005 and 2006) are used to document the variability of γDP with respect to the intrinsic three-dimensional characteristics of the attenuating cells. The Smyth and Illingworth constraint could be applied to only 20% of all attenuated rays of the 2-yr dataset, so it cannot be considered the unique solution for attenuation correction in an operational setting, but it is useful for characterizing the properties of the strongly attenuating cells. The range of variation of γDP is shown to be extremely large, with the minimal, maximal, and mean values being, respectively, 0.01, 0.11, and 0.025 dB °−1. Coefficient γDP appears to be almost linearly correlated with the horizontal reflectivity (ZH), differential reflectivity (ZDR), specific differential phase (KDP), and correlation coefficient (ρHV) of the attenuating cells. The temperature effect is negligible with respect to that of the microphysical properties of the attenuating cells.
Unusually large values of γDP, above 0.06 dB °−1, often referred to as “hot spots,” are reported for a nonnegligible 15% of the rays presenting a significant total differential phase shift (ΔϕDP > 30°). The corresponding strongly attenuating cells are shown to have extremely high ZDR (above 4 dB) and ZH (above 55 dBZ), very low ρHV (below 0.94), and high KDP (above 4° km−1). Analysis of 4 yr of observed raindrop spectra does not reproduce such low values of ρHV, suggesting that (wet) ice is likely to be present in the precipitation medium and responsible for the attenuation and high phase shifts. Furthermore, if melting ice is responsible for the high phase shifts, then KDP may not be uniquely related to rainfall rate but can also result from the presence of wet ice. This hypothesis is supported by the analysis of vertical profiles of horizontal reflectivity and by the values of conventional probability-of-hail indexes.
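Under the linear relationship assumed in this abstract, the differential attenuation accumulated along a ray is γDP · ΦDP, so the measured ZDR can be corrected additively once γDP is retrieved. A minimal sketch of that correction step; the default γDP of 0.025 dB °−1 is the mean value quoted above, and the sample profiles are invented for illustration:

```python
def correct_zdr(zdr_measured, phidp, gamma_dp=0.025):
    """Correct differential attenuation along a ray using the linear
    PhiDP relation: ZDR_corr(r) = ZDR_meas(r) + gamma_dp * PhiDP(r),
    with gamma_dp in dB per degree and PhiDP in degrees."""
    return [z + gamma_dp * p for z, p in zip(zdr_measured, phidp)]

# A gate that has accumulated 40 deg of differential phase recovers
# 0.025 * 40 = 1 dB of differential reflectivity.
corrected = correct_zdr([2.0, 1.5, 0.8], [0.0, 20.0, 40.0])
```

The "hot spot" finding matters operationally because a ray-dependent γDP (up to 0.11 dB °−1) changes this correction by a factor of several.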
Abstract:
The Cold War in the late 1940s blunted attempts by the Truman administration to extend the scope of government in areas such as health care and civil rights. In California, the combined weakness of the Democratic Party in electoral politics and the importance of fellow travelers and communists in state liberal politics made the problem of how to advance the left at a time of heightened Cold War tensions particularly acute. Yet by the early 1960s a new generation of liberal politicians had gained political power in the Golden State and was constructing a greatly expanded welfare system as a way of cementing their hold on power. In this article I argue that the New Politics of the 1970s, shaped nationally by Vietnam and by the social upheavals of the 1960s over questions of race, gender, sexuality, and economic rights, possessed particular power in California because many activists drew on the longer-term experiences of a liberal politics receptive to earlier anti-Cold War struggles. A desire to use political involvement as a form of social networking had given California a strong Popular Front, and in some respects the power of new liberalism was an offspring of those earlier battles.
Abstract:
Statistical graphics are a fundamental, yet often overlooked, set of components in the repertoire of data analytic tools. Graphs are quick, efficient, yet simple instruments for the preliminary exploration of a dataset, used to understand its structure and to provide insight into influential aspects of inference such as departures from assumptions and latent patterns. In this paper, we present and assess a graphical device for choosing a method for estimating population size in capture-recapture studies of closed populations. The basic concept derives from a property of the homogeneous Poisson distribution: the ratio of neighboring Poisson probabilities, multiplied by the value of the larger neighboring count, is constant (and equal to the Poisson parameter). This property extends to the zero-truncated Poisson distribution, which is of fundamental importance in capture-recapture studies. In practice, however, this distributional property is often violated. The graphical device developed here, the ratio plot, can be used for assessing specific departures from a Poisson distribution. For example, simple contaminations of an otherwise homogeneous Poisson model can be easily detected and a robust estimator for the population size can be suggested. Several robust estimators are developed and a simulation study is provided to give some guidance on which should be used in practice. More systematic departures can also easily be detected using the ratio plot. In this paper, the focus is on Gamma mixtures of the Poisson distribution, which lead to a linear pattern (called structured heterogeneity) in the ratio plot. More generally, the paper shows that the ratio plot is monotone for arbitrary mixtures of power series densities.
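The ratio-plot idea rests on the identity (x + 1) p(x + 1) / p(x) = λ for Poisson probabilities, which survives zero truncation since truncation rescales all probabilities equally. A minimal sketch that computes the plot ordinates from simulated counts (the λ = 3 and sample size are illustrative only):

```python
import math
import random
from collections import Counter

def ratio_plot_points(counts):
    """Ratio-plot ordinates r_x = (x + 1) * f_{x+1} / f_x from observed
    frequencies f_x. Under a homogeneous Poisson(lam) model each r_x
    estimates lam, so the plot is flat; systematic departures (e.g. a
    linear trend) signal heterogeneity."""
    freq = Counter(counts)
    return {x: (x + 1) * freq[x + 1] / freq[x]
            for x in sorted(freq) if freq.get(x + 1, 0) > 0}

def poisson_sample(lam, rng):
    """Knuth's product method for a Poisson variate."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while p > limit:
        p *= rng.random()
        k += 1
    return k - 1

rng = random.Random(7)
# Zero-truncated sample: zero counts are the unseen units in
# capture-recapture data, so they are dropped.
sample = [c for c in (poisson_sample(3.0, rng) for _ in range(50_000)) if c > 0]
ratios = ratio_plot_points(sample)
```

Plotting `ratios` against x yields the (approximately flat) line whose departures from constancy the paper uses for model choice.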
Abstract:
Proper scoring rules provide a useful means to evaluate probabilistic forecasts. Independently of scoring rules, it has been argued that reliability and resolution are desirable forecast attributes. The mathematical expectation value of the score allows for a decomposition into reliability- and resolution-related terms, demonstrating a relationship between scoring rules and reliability/resolution. A similar decomposition holds for the empirical (i.e. sample-average) score over an archive of forecast–observation pairs. This empirical decomposition, though, provides an overly optimistic estimate of the potential score (i.e. the optimum score that could be obtained through recalibration), showing that a forecast assessment based solely on the empirical resolution and reliability terms will be misleading. The differences between the theoretical and empirical decompositions are investigated, and specific recommendations are given on how to obtain better estimators of reliability and resolution in the case of the Brier and Ignorance scoring rules.
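For the Brier score the decomposition in question is Murphy's: BS = REL − RES + UNC, grouping forecast-observation pairs by the issued probability value. A sketch of the empirical version (the sample data are invented for illustration; the identity holds exactly when grouping by distinct forecast values, which is part of why the empirical terms can be optimistic for continuous forecasts):

```python
from collections import defaultdict

def brier_decomposition(forecasts, outcomes):
    """Empirical Murphy decomposition of the Brier score,
    BS = REL - RES + UNC, grouping cases by distinct forecast
    probability."""
    n = len(forecasts)
    groups = defaultdict(list)
    for p, o in zip(forecasts, outcomes):
        groups[p].append(o)
    obar = sum(outcomes) / n                  # observed base rate
    unc = obar * (1.0 - obar)                 # uncertainty term
    rel = res = 0.0
    for p, obs in groups.items():
        ok = sum(obs) / len(obs)              # conditional event frequency
        rel += len(obs) * (p - ok) ** 2       # reliability contribution
        res += len(obs) * (ok - obar) ** 2    # resolution contribution
    rel, res = rel / n, res / n
    bs = sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / n
    return bs, rel, res, unc

fc = [0.2, 0.2, 0.8, 0.8, 0.8, 0.5, 0.5]
ob = [0, 1, 1, 1, 0, 1, 0]
bs, rel, res, unc = brier_decomposition(fc, ob)
```

The abstract's point is that `rel` and `res` computed this way are biased estimators of their theoretical counterparts, so `unc - res + rel` overstates the recalibrated (potential) score.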
Abstract:
The global mean temperature in 2008 was slightly cooler than that in 2007; however, it still ranks within the 10 warmest years on record. Annual mean temperatures were generally well above average in South America, northern and southern Africa, Iceland, Europe, Russia, South Asia, and Australia. In contrast, an exceptional cold outbreak occurred during January across Eurasia and over southern European Russia and southwestern Siberia. There has been a general increase in land-surface temperatures and in permafrost temperatures during the last several decades throughout the Arctic region, including increases of 1° to 2°C in the last 30 to 35 years in Russia. Record-setting warm summer (JJA) air temperatures were observed throughout Greenland. The year 2008 was also characterized by heavy precipitation in a number of regions of northern South America, Africa, and South Asia. In contrast, a prolonged and intense drought occurred during most of 2008 in northern Argentina, Paraguay, Uruguay, and southern Brazil, causing severe impacts to agriculture and affecting many communities. The year began with a strong La Niña episode that ended in June. Eastward surface current anomalies in the tropical Pacific Ocean in early 2008 played a major role in adjusting the basin from strong La Niña conditions to ENSO-neutral conditions by July–August, followed by a return to La Niña conditions late in December. The La Niña conditions resulted in far-reaching anomalies such as a cooling in the central tropical Pacific, Arctic Ocean, and the regions extending from the Gulf of Alaska to the west coast of North America; changes in the sea surface salinity and heat content anomalies in the tropics; and total column water vapor, cloud cover, tropospheric temperature, and precipitation patterns typical of a La Niña.
Anomalously salty ocean surface salinity values in climatologically drier locations and anomalously fresh values in rainier locations observed in recent years generally persisted in 2008, suggesting an intensification of the hydrological cycle. The 2008 Atlantic hurricane season was the 14th busiest on record and the only season ever recorded with major hurricanes each month from July through November. Conversely, activity in the northwest Pacific was considerably below normal during 2008. While activity in the north Indian Ocean was only slightly above average, the season was punctuated by Cyclone Nargis, which killed over 145,000 people; in addition, it was the seventh-strongest cyclone ever in the basin and the most devastating to hit Asia since 1991. Greenhouse gas concentrations continued to rise, with CO2 increasing by more than expected based on the 1979 to 2007 trend. In the oceans, the global mean CO2 uptake for 2007 is estimated to be 1.67 Pg-C, about 0.07 Pg-C lower than the long-term average, making it the third-largest anomaly determined with this method since 1983, with the largest uptake of carbon over the past decade coming from the eastern Indian Ocean. Global phytoplankton chlorophyll concentrations were slightly elevated in 2008 relative to 2007, but regional changes were substantial (ranging to about 50%) and followed long-term patterns of net decreases in chlorophyll with increasing sea surface temperature. Ozone-depleting gas concentrations continued to fall globally, to about 4% below the peak levels of the 2000–02 period. Total column ozone concentrations remain well below pre-1980 levels, and the 2008 ozone hole was unusually large (sixth worst on record) and persistent, with low ozone values extending into late December. In fact, the 2008 polar vortex persisted longer than in any previous year since 1979.
Northern Hemisphere snow cover extent for the year was well below average due in large part to the record-low ice extent in March and despite the record-maximum coverage in January and the shortest snow cover duration on record (which started in 1966) in the North American Arctic. Limited preliminary data imply that in 2008 glaciers continued to lose mass, and full data for 2007 show it was the 17th consecutive year of loss. The northern region of Greenland and adjacent areas of Arctic Canada experienced a particularly intense melt season, even though there was an abnormally cold winter across Greenland's southern half. One of the most dramatic signals of the general warming trend was the continued significant reduction in the extent of the summer sea-ice cover and, importantly, the decrease in the amount of relatively older, thicker ice. The extent of the 2008 summer sea-ice cover was the second-lowest value of the satellite record (which started in 1979) and 36% below the 1979–2000 average. Significant losses in the mass of ice sheets and the area of ice shelves continued, with several fjords on the northern coast of Ellesmere Island being ice free for the first time in 3,000–5,500 years. In Antarctica, the positive phase of the SAM led to record-high total sea ice extent for much of early 2008 through enhanced equatorward Ekman transport. With colder continental temperatures at this time, the 2007–08 austral summer snowmelt season was dramatically weakened, making it the second shortest melt season since 1978 (when the record began). There was strong warming and increased precipitation along the Antarctic Peninsula and west Antarctica in 2008, and also pockets of warming along coastal east Antarctica, in concert with continued declines in sea-ice concentration in the Amundsen/Bellingshausen Seas. One significant event indicative of this warming was the disintegration and retreat of the Wilkins Ice Shelf in the southwest peninsula area of Antarctica.
Abstract:
The analysis step of the (ensemble) Kalman filter is optimal when (1) the distribution of the background is Gaussian, (2) state variables and observations are related via a linear operator, and (3) the observational error is of additive nature and has Gaussian distribution. When these conditions are largely violated, a pre-processing step known as Gaussian anamorphosis (GA) can be applied. The objective of this procedure is to obtain state variables and observations that better fulfil the Gaussianity conditions in some sense. In this work we analyse GA from a joint perspective, paying attention to the effects of transformations in the joint state variable/observation space. First, we study transformations for state variables and observations that are independent of each other. Then, we introduce a targeted joint transformation with the objective of obtaining joint Gaussianity in the transformed space. We focus primarily on the univariate case, and briefly comment on the multivariate one. A key point of this paper is that, when (1)-(3) are violated, using the analysis step of the EnKF will not recover the exact posterior density, regardless of any transformations one may perform. These transformations, however, provide approximations of different quality to the Bayesian solution of the problem. Using an example in which the Bayesian posterior can be analytically computed, we assess the quality of the analysis distributions generated after applying the EnKF analysis step in conjunction with different GA options. The value of the targeted joint transformation is particularly clear for the case when the prior is Gaussian, the marginal density for the observations is close to Gaussian, and the likelihood is a Gaussian mixture.
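A common way to carry out the marginal (univariate) anamorphosis is an empirical rank transform: map each value to the standard normal quantile of its empirical CDF position. A sketch under that assumption; this is the simple independent-marginal transformation, not the targeted joint transformation introduced in the paper, and the sample values are invented:

```python
from statistics import NormalDist

def gaussian_anamorphosis(values):
    """Empirical univariate Gaussian anamorphosis: replace each value
    by the standard normal quantile of its mid-rank empirical CDF
    position, so the transformed sample is approximately Gaussian
    while preserving the ordering of the data."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    nd = NormalDist()
    z = [0.0] * n
    for rank, i in enumerate(order):
        z[i] = nd.inv_cdf((rank + 0.5) / n)  # mid-rank avoids 0 and 1
    return z

# A strongly right-skewed sample becomes symmetric about zero.
z = gaussian_anamorphosis([0.1, 0.4, 1.0, 2.5, 10.0, 80.0])
```

Because the map is monotone it can be inverted after the EnKF update, which is what makes GA usable as a pre-processing step; the paper's point is that even so, the updated ensemble only approximates the true Bayesian posterior when conditions (1)-(3) fail.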
Abstract:
In this paper the origin and evolution of the Sun’s open magnetic flux are considered by conducting magnetic flux transport simulations over many solar cycles. The simulations include the effects of differential rotation, meridional flow and supergranular diffusion on the radial magnetic field at the surface of the Sun as new magnetic bipoles emerge and are transported poleward. In each cycle the emergence of roughly 2100 bipoles is considered. The net open flux produced by the surface distribution is calculated by constructing potential coronal fields with a source surface from the surface distribution at regular intervals. In the simulations the net open magnetic flux closely follows the total dipole component at the source surface and evolves independently from the surface flux. The behaviour of the open flux is highly dependent on meridional flow, and many observed features are reproduced by the model. However, when meridional flow is present at observed values, the maximum of the open flux occurs at cycle minimum, when the polar caps that the flow helps to produce are strongest. This is inconsistent with the observations of Lockwood, Stamper and Wild (1999) and Wang, Sheeley, and Lean (2000), who find the open flux peaking 1–2 years after cycle maximum. Only in unrealistic simulations in which meridional flow is much smaller than diffusion does a maximum in open flux consistent with observations occur. It is therefore deduced that there is no realistic parameter range of the flux transport variables that can produce the correct magnitude variation in open flux under the present approximations. As a result, the present standard model does not contain the correct physics to describe the evolution of the Sun’s open magnetic flux over an entire solar cycle. Future possible improvements in modeling are suggested.