Abstract:
Data assimilation (DA) systems are evolving to meet the demands of convection-permitting models in the field of weather forecasting. On 19 April 2013 a special interest group meeting of the Royal Meteorological Society brought together UK researchers looking at different aspects of the data assimilation problem at high resolution, from theory to applications, and researchers creating our future high resolution observational networks. The meeting was chaired by Dr Sarah Dance of the University of Reading and Dr Cristina Charlton-Perez from the MetOffice@Reading. The purpose of the meeting was to help define the current state of high resolution data assimilation in the UK. The workshop assembled three main types of scientists: observational network specialists, operational numerical weather prediction researchers and those developing the fundamental mathematical theory behind data assimilation and the underlying models. These three working areas are intrinsically linked; therefore, a holistic view must be taken when discussing the potential to make advances in high resolution data assimilation.
Abstract:
Background: Massive Open Online Courses (MOOCs) have become immensely popular in a short span of time. However, there is very little research exploring MOOCs in the discipline of Health and Medicine. This paper aims to fill this void by providing a review of Health and Medicine related MOOCs. Objective: Provide a review of Health and Medicine related MOOCs offered by various MOOC platforms within the year 2013. Analyze and compare the various offerings, their target audience, typical length of a course and credentials offered. Discuss opportunities and challenges presented by MOOCs in the discipline of Health and Medicine. Methods: Health and Medicine related MOOCs were gathered using several methods to ensure the richness and completeness of data. Identified MOOC platform websites were used to gather the lists of offerings. In parallel, these MOOC platforms were contacted to access official data on their offerings. Two MOOC aggregator sites (Class Central and MOOC List) were also consulted to gather data on MOOC offerings. Eligibility criteria were defined to concentrate on the courses that were offered in 2013 and primarily on the subject ‘Health and Medicine’. All language translations in this paper were achieved using Google Translate. Results: The search identified 225 courses, of which 98 were eligible for the review (n = 98). 58% (57) of the MOOCs considered were offered on the Coursera platform and 94% (92) of all the MOOCs were offered in English. 90 MOOCs were offered by universities, and Johns Hopkins University offered the largest number of MOOCs (12). Only three MOOCs were offered by developing countries (China, West Indies, and Saudi Arabia). The duration of MOOCs varied from three weeks to 20 weeks with an average length of 6.7 weeks. On average, MOOCs expected a participant to work on the material for 4.2 hours a week. Verified Certificates were offered by 14 MOOCs while three others offered other professional recognition. Conclusions: The review presents evidence to suggest that MOOCs can be used as a way to provide continuing medical education. It also shows the potential of MOOCs as a means of increasing health literacy among the public.
Abstract:
Variational data assimilation is commonly used in environmental forecasting to estimate the current state of the system from a model forecast and observational data. The assimilation problem can be written simply in the form of a nonlinear least squares optimization problem. However, the practical solution of the problem in large systems requires many careful choices to be made in the implementation. In this article we present the theory of variational data assimilation and then discuss in detail how it is implemented in practice. Current solutions and open questions are discussed.
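The formulation above can be made concrete with a minimal sketch of the variational cost function being minimised. The dimensions, covariance matrices and observation operator below are toy choices for illustration, not those of any operational system:

    import numpy as np
    from scipy.optimize import minimize

    # J(x) = 0.5*(x - xb)^T B^{-1} (x - xb) + 0.5*(y - H(x))^T R^{-1} (y - H(x))
    n, m = 4, 2                      # state and observation dimensions (toy values)
    xb = np.zeros(n)                 # background (prior forecast) state
    B = np.eye(n)                    # background-error covariance
    R = 0.1 * np.eye(m)              # observation-error covariance
    y = np.array([1.0, 4.0])         # observations

    def H(x):
        # hypothetical nonlinear observation operator mapping state to observation space
        return x[:m] ** 2

    def cost(x):
        db, dy = x - xb, y - H(x)
        return 0.5 * db @ np.linalg.solve(B, db) + 0.5 * dy @ np.linalg.solve(R, dy)

    xa = minimize(cost, xb + 1.0, method="BFGS").x   # analysis state: nonlinear least-squares minimiser

In large systems the minimisation is preconditioned and the covariance matrices are never formed or inverted explicitly; the sketch only shows the shape of the optimisation problem.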
Abstract:
Population modelling is increasingly recognised as a useful tool for pesticide risk assessment. For vertebrates that may ingest pesticides with their food, such as the woodpigeon (Columba palumbus), population models that simulate foraging behaviour explicitly can help predict both exposure and population-level impact. Optimal foraging theory is often assumed to explain the individual-level decisions driving distributions of individuals in the field, but it may not adequately predict spatial and temporal characteristics of woodpigeon foraging because of the woodpigeons’ excellent memory, ability to fly long distances, and distinctive flocking behaviour. Here we present an individual-based model (IBM) of the woodpigeon. We used the model to predict distributions of foraging woodpigeons that use one of six alternative foraging strategies: optimal foraging, memory-based foraging and random foraging, each with or without flocking mechanisms. We used pattern-oriented modelling to determine which of the foraging strategies is best able to reproduce observed data patterns. Data used for model evaluation were gathered during a long-term woodpigeon study conducted between 1961 and 2004 and a radiotracking study conducted in 2003 and 2004, both in the UK, and are summarised here as three complex patterns: the distributions of foraging birds between vegetation types during the year, the number of fields visited daily by individuals, and the proportion of fields revisited by them on subsequent days. The model with a memory-based foraging strategy and a flocking mechanism was the only one to reproduce these three data patterns, and the optimal foraging model produced poor matches to all of them. The random foraging strategy reproduced two of the three patterns but was not able to guarantee population persistence. We conclude that, with the memory-based foraging strategy including a flocking mechanism, our model is realistic enough to estimate the potential exposure of woodpigeons to pesticides. We discuss how exposure can be linked to our model, and how the model could be used for risk assessment of pesticides, for example predicting exposure and effects in heterogeneous landscapes planted seasonally with a variety of crops, while accounting for differences in land use between landscapes.
Abstract:
A framework for understanding the complexity of cancer development was established by Hanahan and Weinberg in their definition of the hallmarks of cancer. In this review, we consider the evidence that parabens can enable development in human breast epithelial cells of 4/6 of the basic hallmarks, 1/2 of the emerging hallmarks and 1/2 of the enabling characteristics. Hallmark 1: parabens have been measured as present in 99% of human breast tissue samples, possess oestrogenic activity and can stimulate sustained proliferation of human breast cancer cells at concentrations measurable in the breast. Hallmark 2: parabens can inhibit the suppression of breast cancer cell growth by hydroxytamoxifen, and through binding to the oestrogen-related receptor gamma (ERRγ) may prevent its deactivation by growth inhibitors. Hallmark 3: in the 10 nM to 1 μM range, parabens give a dose-dependent evasion of apoptosis in high-risk donor breast epithelial cells. Hallmark 4: long-term exposure (>20 weeks) to parabens leads to increased migratory and invasive activity in human breast cancer cells, properties which are linked to the metastatic process. Emerging hallmark: methylparaben has been shown in human breast epithelial cells to increase mTOR, a key regulator of energy metabolism. Enabling characteristic: parabens can cause DNA damage at high concentrations in the short term, but more work is needed to investigate long-term low doses of mixtures. The ability of parabens to enable multiple cancer hallmarks in human breast epithelial cells provides grounds for regulatory review of the implications of the presence of parabens in human breast tissue.
Abstract:
Chongqing is the largest central-government-controlled municipality in China, and it is now undergoing rapid urbanization. The question remains open: what are the consequences of such rapid urbanization in Chongqing in terms of urban microclimates? An integrated study comprising three different research approaches is adopted in the present paper. By analyzing the observed annual climate data, an average rising trend of 0.10 °C/decade was found for the annual mean temperature from 1951 to 2010 in Chongqing, indicating a higher degree of urban warming in Chongqing. In addition, two complementary types of field measurements were conducted: fixed weather stations and mobile transverse measurements. Numerical simulations using an in-house developed program are able to predict the urban air temperature in Chongqing. The urban heat island intensity in Chongqing is stronger in summer compared to autumn and winter. The maximum urban heat island intensity occurs at around midnight, and can be as high as 2.5 °C. In the daytime, an urban cool island exists. Local greenery has a great impact on the local thermal environment. Urban green spaces can reduce urban air temperature and therefore mitigate the urban heat island. The cooling effect of an urban river is limited in Chongqing, as both sides of the river are the most developed areas, but the relative humidity is much higher near the river compared with places far from it.
Abstract:
Results from all phases of the orbits of the Ulysses spacecraft have shown that the magnitude of the radial component of the heliospheric field is approximately independent of heliographic latitude. This result allows the use of near-Earth observations to compute the total open flux of the Sun. For example, using satellite observations of the interplanetary magnetic field, the average open solar flux was shown to have risen by 29% between 1963 and 1987, and using the aa geomagnetic index it was found to have doubled during the 20th century. It is therefore important to assess fully the accuracy of the result and to check that it applies to all phases of the solar cycle. The first perihelion pass of the Ulysses spacecraft was close to sunspot minimum, and recent data from the second perihelion pass show that the result also holds at solar maximum. The high level of correlation between the open flux derived from the various methods strongly supports the Ulysses discovery that the radial field component is independent of latitude. We show here that the errors introduced into open solar flux estimates by assuming that the heliospheric field’s radial component is independent of latitude are similar for the two passes and are of order 25% for daily values, falling to 5% for averaging timescales of 27 days or greater. We compare here the results of four methods for estimating the open solar flux with results from the first and second perihelion passes by Ulysses. We find that the errors are lowest (1–5% for averages over the entire perihelion passes lasting near 320 days) for near-Earth methods, based on either interplanetary magnetic field observations or the aa geomagnetic activity index. The corresponding errors for the Solanki et al. (2000) model are of the order of 9–15% and for the PFSS method, based on solar magnetograms, are of the order of 13–47%. The model of Solanki et al. is based on the continuity equation of open flux, and uses the sunspot number to quantify the rate of open flux emergence. It predicts that the average open solar flux has been decreasing since 1987, as is observed in the variation of all the estimates of the open flux. This decline combines with the solar cycle variation to produce an open flux during the second (sunspot maximum) perihelion pass of Ulysses which is only slightly larger than that during the first (sunspot minimum) perihelion pass.
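The near-Earth computation rests on the Ulysses result quoted above: if |Br| is independent of latitude, the unsigned flux threading a heliocentric sphere at r = 1 AU is 4πr²⟨|Br|⟩, and the signed open flux is half of that. A minimal sketch, using an illustrative |Br| value rather than real interplanetary magnetic field data:

    import numpy as np

    AU = 1.496e11                  # 1 astronomical unit in metres
    br_nT = 3.0                    # hypothetical near-Earth mean |Br| in nT

    # Unsigned flux through the 1 AU sphere, halved to count one magnetic polarity only.
    open_flux_Wb = 0.5 * 4.0 * np.pi * AU**2 * br_nT * 1e-9
    print(f"Open solar flux estimate: {open_flux_Wb:.2e} Wb")

For |Br| of a few nT this gives values of order 10^14 Wb, the magnitude typically reported for the open solar flux.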
Abstract:
This paper presents a comparison of various estimates of the open solar flux, deduced from measurements of the interplanetary magnetic field, from the aa geomagnetic index and from photospheric magnetic field observations. The first two of these estimates are made using the Ulysses discovery that the radial heliospheric field is approximately independent of heliographic latitude, the third makes use of the potential-field source surface method to map the total flux through the photosphere to the open flux at the top of the corona. The uncertainties associated with using the Ulysses result are 5%, but the effects of the assumptions of the potential field source surface method are harder to evaluate. Nevertheless, the three methods give similar results for the last three solar cycles when the data sets overlap. In 11-year running means, all three methods reveal that 1987 marked a significant peak in the long-term variation of the open solar flux. This peak is close to the solar minimum between sunspot cycles 21 and 22, and consequently the mean open flux (averaged from minimum to minimum) is similar for these two cycles. However, this similarity between cycles 21 and 22 in no way implies that the open flux is constant. The long-term variation shows that these cycles are fundamentally different in that the average open flux was rising during cycle 21 (from consistently lower values in cycle 20 and toward the peak in 1987) but was falling during cycle 22 (toward consistently lower values in cycle 23). The estimates from the geomagnetic aa index are unique as they extend from 1842 onwards (using the Helsinki extension). This variation gives strong anticorrelations, with very high statistical significance levels, with cosmic ray fluxes and with the abundances of the cosmogenic isotopes that they produce. Thus observations of photospheric magnetic fields, of cosmic ray fluxes, and of cosmogenic isotope abundances all support the long-term drifts in open solar flux reported by Lockwood et al. [1999a, 1999b].
Abstract:
In this paper the origin and evolution of the Sun’s open magnetic flux are considered for single magnetic bipoles as they are transported across the Sun. The effects of magnetic flux transport on the radial field at the surface of the Sun are modeled numerically by developing earlier work by Wang, Sheeley, and Lean (2000). The paper considers how the initial tilt of the bipole axis (α) and its latitude of emergence affect the variation and magnitude of the surface and open magnetic flux. The amount of open magnetic flux is estimated by constructing potential coronal fields. It is found that the open flux may evolve independently from the surface field for certain ranges of the tilt angle. For a given tilt angle, the lower the latitude of emergence, the higher the magnitude of the surface and open flux at the end of the simulation. In addition, three types of behavior are found for the open flux depending on the initial tilt angle of the bipole axis. When the tilt is such that α ≥ 2° the open flux is independent of the surface flux and initially increases before decaying away. In contrast, for tilt angles in the range −16° < α < 2° the open flux follows the surface flux and continually decays. Finally, for α ≤ −16° the open flux first decays and then increases in magnitude towards a second maximum before decaying away. This behavior of the open flux can be explained in terms of two competing effects produced by differential rotation. Firstly, differential rotation may increase or decrease the open flux by rotating the centers of each polarity of the bipole at different rates when the axis has tilt. Secondly, it decreases the open flux by increasing the length of the polarity inversion line where flux cancellation occurs. The results suggest that, in order to reproduce a realistic model of the Sun’s open magnetic flux over a solar cycle, it is important to have accurate input data on the latitude of emergence of bipoles along with the variation of their tilt angles as the cycle progresses.
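For reference, flux transport models of this kind evolve the radial photospheric field under differential rotation, meridional flow and supergranular diffusion. A commonly used form of the surface flux transport equation (the exact coefficients adopted in the paper are not stated in the abstract) is, in LaTeX notation:

    \frac{\partial B_r}{\partial t} = -\Omega(\theta)\,\frac{\partial B_r}{\partial \phi}
      - \frac{1}{R_\odot \sin\theta}\,\frac{\partial}{\partial \theta}\!\left[v(\theta)\,B_r \sin\theta\right]
      + \frac{\kappa}{R_\odot^2}\left[\frac{1}{\sin\theta}\,\frac{\partial}{\partial \theta}\!\left(\sin\theta\,\frac{\partial B_r}{\partial \theta}\right)
      + \frac{1}{\sin^2\theta}\,\frac{\partial^2 B_r}{\partial \phi^2}\right] + S(\theta,\phi,t),

where \Omega(\theta) is the differential rotation rate, v(\theta) the meridional flow, \kappa the supergranular diffusivity and S the source term representing emerging bipoles.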
Abstract:
The correlation between the coronal source flux F_{S} and the total solar irradiance I_{TS} is re-evaluated in the light of an additional 5 years' data from the rising phase of solar cycle 23 and also by using cosmic ray fluxes detected at Earth. Tests on monthly averages show that the correlation with F_{S} deduced from the interplanetary magnetic field (correlation coefficient, r = 0.62) is highly significant (99.999%), but that there is insufficient data for the higher correlation with annual means (r = 0.80) to be considered significant. Anti-correlations between I_{TS} and cosmic ray fluxes are found in monthly data for all stations and geomagnetic rigidity cut-offs (r ranging from −0.63 to −0.74) and these have significance levels between 85% and 98%. In all cases, the fit is poorest for the earliest data (i.e., prior to 1982). Excluding these data improves the anticorrelation with cosmic rays to r = −0.93 for one-year running means. Both the interplanetary magnetic field data and the cosmic ray fluxes indicate that the total solar irradiance lags behind the open solar flux with a delay that is estimated to have an optimum value of 2.8 months (and is within the uncertainty range 0.8–8.0 months at the 90% level).
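The quoted lag is the kind of value obtained by sliding one monthly series against the other and locating the correlation maximum. A minimal sketch with synthetic stand-in series (not the actual F_{S} or I_{TS} data):

    import numpy as np

    def lag_correlation(x, y, max_lag):
        # Pearson correlation of y delayed by 0..max_lag samples relative to x.
        out = {}
        for lag in range(max_lag + 1):
            a = x[:len(x) - lag] if lag else x
            out[lag] = np.corrcoef(a, y[lag:])[0, 1]
        return out

    # Synthetic monthly series in which 'tsi' lags 'flux' by three samples (illustrative only).
    rng = np.random.default_rng(1)
    t = np.linspace(0, 6 * np.pi, 240)
    flux = np.sin(t) + 0.1 * rng.standard_normal(t.size)
    tsi = np.sin(t - 3 * (t[1] - t[0])) + 0.1 * rng.standard_normal(t.size)

    corrs = lag_correlation(flux, tsi, max_lag=12)
    best_lag = max(corrs, key=corrs.get)   # recovers a lag of about three samples here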
Abstract:
Early in 1996, the latest of the European incoherent-scatter (EISCAT) radars came into operation on the Svalbard islands. The EISCAT Svalbard Radar (ESR) has been built in order to study the ionosphere in the northern polar cap and in particular, the dayside cusp. Conditions in the upper atmosphere in the cusp region are complex, with magnetosheath plasma cascading freely into the atmosphere along open magnetic field lines as a result of magnetic reconnection at the dayside magnetopause. A model has been developed to predict the effects of pulsed reconnection and the subsequent cusp precipitation in the ionosphere. Using this model we have successfully recreated some of the major features seen in photometer and satellite data within the cusp. In this paper, the work is extended to predict the signatures of pulsed reconnection in ESR data when the radar is pointed along the magnetic field. It is expected that enhancements in both electron concentration and electron temperature will be observed. Whether these enhancements are continuous in time or occur as a series of separate events is shown to depend critically on where the open/closed field-line boundary is with respect to the radar. This is shown to be particularly true when reconnection pulses are superposed on a steady background rate.
Abstract:
We analyze ion populations observed by the NOAA-12 satellite within dayside auroral transients. The data are matched with an open magnetopause model which allows for the transmission of magnetosheath ions across one or both of the two Alfvén waves which emanate from the magnetopause reconnection site. It also allows for reflection and acceleration of ions of magnetospheric origin by these waves. From the good agreement found between the model and the observations, we propose that the events and the low-latitude boundary precipitation are both on open field lines.
Abstract:
We present an analysis of a cusp ion step, observed by the Defense Meteorological Satellite Program (DMSP) F10 spacecraft, between two poleward moving events of enhanced ionospheric electron temperature, observed by the European Incoherent Scatter (EISCAT) radar. From the ions detected by the satellite, the variation of the reconnection rate is computed for assumed distances along the open-closed field line separatrix from the satellite to the X line, d_o. Comparison with the onset times of the associated ionospheric events allows this distance to be estimated, but with an uncertainty due to the determination of the low-energy cutoff of the ion velocity distribution function, f(v). Nevertheless, the reconnection site is shown to be on the dayside magnetopause, consistent with the reconnection model of the cusp during southward interplanetary magnetic field (IMF). Analysis of the time series of distribution function at constant energies, f(t_s), shows that the best estimate of the distance d_o is 14.5±2 R_E. This is consistent with various magnetopause observations of the signatures of reconnection for southward IMF. The ion precipitation is used to reconstruct the field-parallel part of the Cowley D ion distribution function injected into the open low-latitude boundary layer in the vicinity of the X line. From this reconstruction, the field-aligned component of the magnetosheath flow is found to be only −55±65 km s⁻¹ near the X line, which means either that the reconnection X line is near the stagnation region at the nose of the magnetosphere, or that it is closely aligned with the magnetosheath flow streamline which is orthogonal to the magnetosheath field, or both. In addition, the sheath Alfvén speed at the X line is found to be 220±45 km s⁻¹, and the speed with which newly opened field lines are ejected from the X line is 165±30 km s⁻¹. We show that the inferred magnetic field, plasma density, and temperature of the sheath near the X line are consistent with a near-subsolar reconnection site and confirm that the magnetosheath field makes a large angle (>58°) with the X line.
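The distance estimate rests on a time-of-flight argument: an ion injected near the X line with field-aligned speed v reaches the spacecraft, a field-line distance d away, after a time t = d/v, so the low-energy cutoff observed a time t after the field line was opened corresponds to a cutoff speed v_c = d/t. As a rough illustration (the numbers other than d_o are hypothetical, not taken from the paper): for d_o = 14.5 R_E ≈ 9.2 × 10^4 km, ions observed 10 minutes after reconnection would have a cutoff speed of about 9.2 × 10^4 km / 600 s ≈ 150 km s⁻¹, corresponding to protons of roughly 0.12 keV.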
Abstract:
The recent identification of non-thermal plasmas using EISCAT data has been made possible by their occurrence during large, short-lived flow bursts. For steady, yet rapid, ion convection the only available signature is the shape of the spectrum, which is unreliable because it is open to distortion by noise and sampling uncertainty and can be mimicked by other phenomena. Nevertheless, spectral shape does give an indication of the presence of non-thermal plasma, and the characteristic shape has been observed for long periods (of the order of an hour or more) in some experiments. To evaluate this type of event properly one needs to compare it to what would be expected theoretically. Predictions have been made using the coupled thermosphere-ionosphere model developed at University College London and the University of Sheffield to show where and when non-Maxwellian plasmas would be expected in the auroral zone. Geometrical and other factors then govern whether these are detectable by radar. The results are applicable to any incoherent scatter radar in this area, but the work presented here concentrates on predictions with regard to experiments on the EISCAT facility.
Abstract:
Classical regression methods take vectors as covariates and estimate the corresponding vectors of regression parameters. When addressing regression problems on covariates of more complex form such as multi-dimensional arrays (i.e. tensors), traditional computational models can be severely compromised by ultrahigh dimensionality as well as complex structure. By exploiting the special structure of tensor covariates, the tensor regression model provides a promising solution to reduce the model’s dimensionality to a manageable level, thus leading to efficient estimation. Most existing tensor-based methods estimate each individual regression problem independently, based on tensor decompositions that allow the simultaneous projection of an input tensor onto more than one direction along each mode. In practice, multi-dimensional data are often collected under the same or very similar conditions, so that the data share some common latent components but can also have their own independent parameters for each regression task. Therefore, it is beneficial to analyse the regression parameters of all the regressions in a linked way. In this paper, we propose a tensor regression model based on Tucker decomposition, which simultaneously identifies both the common components of the parameters across all the regression tasks and the independent factors contributing to each particular regression task. Under this paradigm, the number of independent parameters along each mode is constrained by a sparsity-preserving regulariser. Linked multiway parameter analysis and sparsity modelling further reduce the total number of parameters, giving a lower memory cost than existing tensor-based counterparts. The effectiveness of the new method is demonstrated on real data sets.
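As a concrete illustration of the Tucker-structured parameterisation described above, the following is a toy sketch (not the authors' estimation algorithm; the dimensions, the form of the l1 penalty and all variable names are assumptions):

    import numpy as np

    rng = np.random.default_rng(0)
    d1, d2, r1, r2 = 8, 6, 2, 3               # covariate size and Tucker ranks (toy values)

    G = rng.standard_normal((r1, r2))         # core: low-dimensional components shared across tasks
    U1 = rng.standard_normal((d1, r1))        # factor matrix along mode 1
    U2 = rng.standard_normal((d2, r2))        # factor matrix along mode 2
    B = U1 @ G @ U2.T                         # coefficient tensor in Tucker form (d1 x d2)

    X = rng.standard_normal((100, d1, d2))    # 100 matrix-valued covariates
    y = np.einsum("nij,ij->n", X, B) + 0.01 * rng.standard_normal(100)   # y_n = <X_n, B> + noise

    def penalised_loss(G, U1, U2, lam=0.1):
        # Least-squares data fit plus an l1 penalty that keeps the factor matrices sparse.
        Bhat = U1 @ G @ U2.T
        resid = y - np.einsum("nij,ij->n", X, Bhat)
        return 0.5 * np.sum(resid**2) + lam * (np.abs(U1).sum() + np.abs(U2).sum())

A full fit would alternate updates of the shared core and the penalised factor matrices, so that components common to all regression tasks are estimated jointly while the sparse factors carry the task-specific structure.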