Abstract:
Objective: Community-based care for mental disorders places considerable burden on families and carers. Measuring their experiences has become a priority, but there is no consensus on appropriate instruments. We aimed to review instruments carers consider relevant to their needs and assess evidence for their use. Method: A literature search was conducted for outcome measures used with mental health carers. Identified instruments were assessed for their relevance to the outcomes identified by carers and their psychometric properties. Results: Three hundred and ninety-two published articles referring to 241 outcome measures were identified, 64 of which were eligible for review (used in three or more studies). Twenty-six instruments had good psychometric properties; they measured (i) carers' well-being, (ii) the experience of caregiving and (iii) carers' needs for professional support. Conclusion: Measures exist which have been used to assess the most salient aspects of carer outcome in mental health. All require further work to establish their psychometric properties fully.
Abstract:
Background Screening instruments for autistic-spectrum disorders have not been compared in the same sample. Aims To compare the Social Communication Questionnaire (SCQ), the Social Responsiveness Scale (SRS) and the Children's Communication Checklist (CCC). Method Screen and diagnostic assessments on 119 children between 9 and 13 years of age with special educational needs with and without autistic-spectrum disorders were weighted to estimate screen characteristics for a realistic target population. Results The SCQ performed best (area under receiver operating characteristic curve (AUC)=0.90; sensitivity 0.86; specificity 0.78). The SRS had a lower AUC (0.77) with high sensitivity (0.78) and moderate specificity (0.67). The CCC had a high sensitivity but lower specificity (AUC=0.79; sensitivity 0.93; specificity 0.46). The AUC of the SRS and CCC was lower for children with IQ < 70. Behaviour problems reduced specificity for all three instruments. Conclusions The SCQ, SRS and CCC showed strong to moderate ability to identify autistic-spectrum disorder in this at-risk sample of school-age children with special educational needs.
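Since none of the abstracts define these screening statistics, a minimal sketch may help: sensitivity and specificity at a fixed cut-off, and AUC as the Mann-Whitney probability that a randomly chosen case outscores a randomly chosen non-case. The scores, diagnoses, and threshold below are made-up illustrations, not values from the study.

```python
def screen_metrics(scores, labels, threshold):
    """Sensitivity/specificity at a cut-off; labels: 1 = disorder present."""
    tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= threshold)
    fn = sum(1 for s, y in zip(scores, labels) if y == 1 and s < threshold)
    tn = sum(1 for s, y in zip(scores, labels) if y == 0 and s < threshold)
    fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= threshold)
    return tp / (tp + fn), tn / (tn + fp)

def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a random case scores above a random non-case."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy data: higher score = more autistic-spectrum traits flagged
scores = [22, 18, 15, 9, 30, 7, 12, 25]
labels = [1,  1,  0,  0, 1,  0, 0,  1]
sens, spec = screen_metrics(scores, labels, threshold=15)
```

Varying the threshold trades sensitivity against specificity; the AUC summarises the whole trade-off in one number.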
Abstract:
An updated analysis of observed stratospheric temperature variability and trends is presented on the basis of satellite, radiosonde, and lidar observations. Satellite data include measurements from the series of NOAA operational instruments, including the Microwave Sounding Unit covering 1979–2007 and the Stratospheric Sounding Unit (SSU) covering 1979–2005. Radiosonde results are compared for six different data sets, incorporating a variety of homogeneity adjustments to account for changes in instrumentation and observational practices. Temperature changes in the lower stratosphere show cooling of 0.5 K/decade over much of the globe for 1979–2007, with some differences in detail among the different radiosonde and satellite data sets. Substantially larger cooling trends are observed in the Antarctic lower stratosphere during spring and summer, in association with development of the Antarctic ozone hole. Trends in the lower stratosphere derived from radiosonde data are also analyzed for a longer record (back to 1958); trends for the presatellite era (1958–1978) have a large range among the different homogenized data sets, implying large trend uncertainties. Trends in the middle and upper stratosphere have been derived from updated SSU data, taking into account changes in the SSU weighting functions due to observed atmospheric CO2 increases. The results show mean cooling of 0.5–1.5 K/decade during 1979–2005, with the greatest cooling in the upper stratosphere near 40–50 km. Temperature anomalies throughout the stratosphere were relatively constant during the decade 1995–2005. Long records of lidar temperature measurements at a few locations show reasonable agreement with SSU trends, although sampling uncertainties are large in the localized lidar measurements. 
Updated estimates of the solar cycle influence on stratospheric temperatures show a statistically significant signal in the tropics (30°N–30°S), with an amplitude (solar maximum minus solar minimum) of 0.5 K (lower stratosphere) to 1.0 K (upper stratosphere).
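The K/decade figures quoted above are linear trends; a minimal sketch of how such a trend is estimated by ordinary least squares, using a synthetic anomaly series rather than actual MSU/SSU data:

```python
def trend_per_decade(years, anomalies):
    """Slope of the least-squares line through (year, anomaly),
    scaled from K/yr to K/decade."""
    n = len(years)
    mean_t = sum(years) / n
    mean_a = sum(anomalies) / n
    cov = sum((t - mean_t) * (a - mean_a) for t, a in zip(years, anomalies))
    var = sum((t - mean_t) ** 2 for t in years)
    return 10.0 * cov / var  # K/decade

# Synthetic annual means cooling at exactly -0.5 K/decade over 1979-2007
years = list(range(1979, 2008))
anoms = [-0.05 * (y - 1979) for y in years]
```

Real trend analyses of satellite and radiosonde records additionally account for autocorrelation and known forcings, which this sketch omits.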
Abstract:
Svalgaard and Cliver (2010) recently reported a consensus between the various reconstructions of the heliospheric field over recent centuries. This is a significant development because, individually, each has uncertainties introduced by instrument calibration drifts, limited numbers of observatories, and the strength of the correlations employed. However, taken collectively, a consistent picture is emerging. We here show that this consensus extends to more data sets and methods than reported by Svalgaard and Cliver, including that used by Lockwood et al. (1999), when their algorithm is used to predict the heliospheric field rather than the open solar flux. One area where there is still some debate relates to the existence and meaning of a floor value to the heliospheric field. From cosmogenic isotope abundances, Steinhilber et al. (2010) have recently deduced that the near-Earth IMF at the end of the Maunder minimum was 1.80 ± 0.59 nT, which is considerably lower than the revised floor of 4 nT proposed by Svalgaard and Cliver. We here combine cosmogenic and geomagnetic reconstructions and modern observations (with allowance for the effect of solar wind speed and structure on the near-Earth data) to derive an estimate for the open solar flux of (0.48 ± 0.29) × 1014 Wb at the end of the Maunder minimum. By way of comparison, the largest and smallest annual means recorded by instruments in space between 1965 and 2010 are 5.75 × 1014 Wb and 1.37 × 1014 Wb, respectively, set in 1982 and 2009, and the maximum of the 11 year running means was 4.38 × 1014 Wb in 1986. Hence the average open solar flux during the Maunder minimum is found to have been 11% of its peak value during the recent grand solar maximum.
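The closing 11% figure follows directly from the numbers quoted in the abstract; a one-line arithmetic check:

```python
# Maunder-minimum open solar flux relative to the grand-maximum peak of the
# 11-year running mean, using the values stated in the abstract.
maunder_flux = 0.48e14   # Wb, end of Maunder minimum (quoted as ±0.29e14)
peak_11yr    = 4.38e14   # Wb, maximum 11-year running mean, set in 1986
ratio = maunder_flux / peak_11yr  # about 0.11, i.e. 11%
```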
Abstract:
We use microwave retrievals of upper tropospheric humidity (UTH) to estimate the impact of clear-sky-only sampling by infrared instruments on the distribution, variability and trends in UTH. Our method isolates the impact of the clear-sky-only sampling, without convolving errors from other sources. On daily time scales IR-sampled UTH contains large data gaps in convectively active areas, with only about 20–30 % of the tropics (30°S–30°N) being sampled. This results in a dry bias of about -9 %RH in the area-weighted tropical daily UTH time series. On monthly scales, maximum clear-sky bias (CSB) is up to -30 %RH over convectively active areas. The magnitude of CSB shows significant correlations with UTH itself (-0.5) and also with the variability in UTH (-0.6). We also show that IR-sampled UTH time series have higher interannual variability and smaller trends compared to microwave sampling. We argue that a significant part of the smaller trend results from the contrasting influence of diurnal drift in the satellite measurements on the wet and dry regions of the tropics.
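A toy illustration of how clear-sky-only sampling produces a dry bias: masking out cloudy, convectively moist scenes lowers the sampled mean relative to the all-sky mean. The %RH values below are invented, not the paper's retrievals.

```python
def clear_sky_bias(uth, cloudy):
    """Clear-sky mean minus all-sky mean, in %RH; cloudy is a boolean mask.
    A negative result is a dry bias from clear-sky-only sampling."""
    all_sky = sum(uth) / len(uth)
    clear = [u for u, c in zip(uth, cloudy) if not c]
    return sum(clear) / len(clear) - all_sky

# Toy tropical scene: the convective (cloudy) pixels are the moist ones,
# so an IR instrument restricted to clear skies samples only the dry pixels
uth    = [60, 55, 20, 25, 70, 30, 15, 65]                       # %RH
cloudy = [True, True, False, False, True, False, False, True]
```

Because cloudiness correlates with humidity, the bias scales with both UTH and its variability, consistent with the correlations the abstract reports.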
Abstract:
A detailed analysis is presented of solar UV spectral irradiance for the period between May 2003 and August 2005, when data are available from both the Solar Ultraviolet Spectral Irradiance Monitor (SUSIM) instrument (on board the Upper Atmosphere Research Satellite (UARS) spacecraft) and the Solar Stellar Irradiance Comparison Experiment (SOLSTICE) instrument (on board the Solar Radiation and Climate Experiment (SORCE) satellite). The ultimate aim is to develop a data composite that can be used to accurately determine any differences between the “exceptional” solar minimum at the end of solar cycle 23 and the previous minimum at the end of solar cycle 22 without having to rely on proxy data to set the long‐term change. SUSIM data are studied because they are the only data available in the “SOLSTICE gap” between the end of available UARS SOLSTICE data and the start of the SORCE data. At any one wavelength the two data sets are considered too dissimilar to be combined into a meaningful composite if any one of three correlations does not exceed a threshold of 0.8. This criterion removes all wavelengths except those in a small range between 156 nm and 208 nm, the longer wavelengths of which influence ozone production and heating in the lower stratosphere. Eight different methods are employed to intercalibrate the two data sequences. All methods give smaller changes between the minima than are seen when the data are not adjusted; however, correcting the SUSIM data to allow for an exponentially decaying offset drift gives a composite that is largely consistent with the unadjusted data from the SOLSTICE instruments on both UARS and SORCE and in which the recent minimum is consistently lower in the wave band studied.
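For illustration only, one way to implement the "exponentially decaying offset drift" correction mentioned above is to subtract a decaying exponential from one instrument record before compositing. The amplitude and timescale below are placeholder assumptions, not the paper's fitted values.

```python
import math

def correct_drift(t_days, values, amp=0.05, tau=400.0):
    """Remove an assumed instrument offset amp * exp(-t/tau) from a series.
    t_days: time since the start of the record; values: measured irradiance."""
    return [v - amp * math.exp(-t / tau) for t, v in zip(t_days, values)]
```

In practice amp and tau would be fitted (e.g. by least squares against the overlapping reference record) rather than assumed, and the correction applied per wavelength.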
Abstract:
New ways of combining observations with numerical models are discussed in which the size of the state space can be very large and the model can be highly nonlinear. The observations of the system can also be related to the model variables in highly nonlinear ways, making this data-assimilation (or inverse) problem highly nonlinear. First we discuss the connection between data assimilation and inverse problems, including regularization. We explore the choice of proposal density in a Particle Filter and show how the 'curse of dimensionality' might be beaten. In the standard Particle Filter, ensembles of model runs are propagated forward in time until observations are encountered, rendering it a pure Monte Carlo method. In large-dimensional systems this is very inefficient, and very large numbers of model runs are needed to solve the data-assimilation problem realistically. In our approach we steer all model runs towards the observations, resulting in a much more efficient method. By further ensuring almost equal weights, we avoid performing model runs that are useless in the end. Results are shown for the 40- and 1000-dimensional Lorenz 1995 model.
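A schematic bootstrap particle filter for a scalar state may help fix the terminology (propagate, weight by the observation likelihood, resample); this is the pure Monte Carlo baseline the abstract improves on by steering members toward the observations. The linear model, noise levels, and observation sequence are arbitrary choices, not from the paper.

```python
import math, random

random.seed(0)

def pf_step(particles, obs, obs_var=1.0, model=lambda x: 0.9 * x):
    # Propagate each ensemble member with the model plus stochastic forcing
    forecast = [model(x) + random.gauss(0.0, 0.3) for x in particles]
    # Importance weights proportional to the Gaussian observation likelihood
    w = [math.exp(-0.5 * (obs - x) ** 2 / obs_var) for x in forecast]
    total = sum(w)
    w = [wi / total for wi in w]
    # Multinomial resampling: duplicate likely members, drop unlikely ones
    return random.choices(forecast, weights=w, k=len(forecast))

particles = [random.gauss(0.0, 1.0) for _ in range(500)]
for obs in [1.0, 1.2, 0.8]:
    particles = pf_step(particles, obs)
```

In high-dimensional systems almost all weight collapses onto a few members after one update, which is the degeneracy that proposal-density choices and almost-equal-weight schemes are designed to avoid.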
Abstract:
Developing high-quality scientific research will be most effective if research communities with diverse skills and interests are able to share information and knowledge, are aware of the major challenges across disciplines, and can exploit economies of scale to provide robust answers and better inform policy. We evaluate opportunities and challenges facing the development of a more interactive research environment by developing an interdisciplinary synthesis of research on a single geographic region. We focus on the Amazon as it is of enormous regional and global environmental importance and faces a highly uncertain future. To take stock of existing knowledge and provide a framework for analysis we present a set of mini-reviews from fourteen different areas of research, encompassing taxonomy, biodiversity, biogeography, vegetation dynamics, landscape ecology, earth-atmosphere interactions, ecosystem processes, fire, deforestation dynamics, hydrology, hunting, conservation planning, livelihoods, and payments for ecosystem services. Each review highlights the current state of knowledge and identifies research priorities, including major challenges and opportunities. We show that while substantial progress is being made across many areas of scientific research, our understanding of specific issues is often dependent on knowledge from other disciplines. Accelerating the acquisition of reliable and contextualized knowledge about the fate of complex pristine and modified ecosystems is partly dependent on our ability to exploit economies of scale in shared resources and technical expertise, recognise and make explicit interconnections and feedbacks among sub-disciplines, increase the temporal and spatial scale of existing studies, and improve the dissemination of scientific findings to policy makers and society at large. 
Enhancing interaction among research efforts is vital if we are to make the most of limited funds and overcome the challenges posed by addressing large-scale interdisciplinary questions. Bringing together a diverse scientific community with a single geographic focus can help increase awareness of research questions both within and among disciplines, and reveal the opportunities that may exist for advancing acquisition of reliable knowledge. This approach could be useful for a variety of globally important scientific questions.
Abstract:
Upper air observations from radiosondes and microwave satellite instruments do not indicate any global warming during the last 19 years, contrary to surface measurements, where a warming trend is supposedly being found. This result is somewhat difficult to reconcile, since climate model experiments do indicate a reverse trend, namely, that upper tropospheric air should warm faster than the surface. To contribute toward an understanding of this difficulty, we have here undertaken some specific experiments to study the effect on climate due to the decrease in stratospheric ozone and the Mount Pinatubo eruption in 1991. The associated forcing was added to the forcing from greenhouse gases, sulfate aerosols (direct and indirect effect), and tropospheric ozone, which was investigated in a separate series of experiments. Furthermore, we have undertaken an ensemble study in order to explore the natural variability of an advanced climate model exposed to such a forcing over 19 years. The result shows that the reduction of stratospheric ozone cools not only the lower stratosphere but also the troposphere, in particular, the upper and middle part. In the upper troposphere the cooling from stratospheric ozone leads to a significant reduction of greenhouse warming. The modeled stratospheric aerosols from Mount Pinatubo generate a climate response (stratospheric warming and tropospheric cooling) in good agreement with microwave satellite measurements. Finally, analysis of a series of experiments with both stratospheric ozone and the Mount Pinatubo effect shows considerable variability in climate response, suggesting that an evolution having no warming in the period is as likely as another evolution showing modest warming. However, the observed trend of no warming in the midtroposphere and clear warming at the surface is not found in the model simulations.
Abstract:
Correlations between various chemical species simulated by the Canadian Middle Atmosphere Model, a general circulation model with fully interactive chemistry, are considered in order to investigate the general conditions under which compact correlations can be expected to form. At the same time, the analysis serves to validate the model. The results are compared to previous work on this subject, both from theoretical studies and from atmospheric measurements made from space and from aircraft. The results highlight the importance of having a data set with good spatial coverage when working with correlations and provide a background against which the compactness of correlations obtained from atmospheric measurements can be confirmed. It is shown that for long-lived species, distinct correlations are found in the model in the tropics, the extratropics, and the Antarctic winter vortex. Under these conditions, sparse sampling such as arises from occultation instruments is nevertheless suitable to define a chemical correlation within each region even from a single day of measurements, provided a sufficient range of mixing ratio values is sampled. In practice, this means a large vertical extent, though the requirements are less stringent at more poleward latitudes.
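As a toy illustration of a "compact correlation" between long-lived species, two synthetic tracer profiles sampled at a handful of levels (as an occultation instrument might) still yield a near-perfect correlation, provided the samples span a sufficient range of mixing ratios. The mixing-ratio values below are invented, not CMAM output.

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Two long-lived "tracers" on a sparse vertical grid: both decrease with
# altitude because the same transport controls them, so their relationship
# is compact even with few samples
n2o = [320, 280, 220, 150, 90, 40]          # ppbv
ch4 = [1.70, 1.55, 1.32, 1.05, 0.80, 0.60]  # ppmv
```

If the sampled range of mixing ratios is too narrow (e.g. a shallow vertical extent at low latitudes), the correlation becomes poorly constrained, matching the abstract's caveat.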