197 results for Steven Moll
Abstract:
The amphiphilic polyene amphotericin B, a powerful treatment for systemic fungal infections, is shown to exhibit a critical aggregation concentration, and to form giant helically-twisted nanostructures via self-assembly in basic aqueous solution.
Abstract:
Background: Appropriately conducted adaptive designs (ADs) offer many potential advantages over conventional trials. They make better use of accruing data, potentially saving time, trial participants, and limited resources compared to conventional, fixed sample size designs. However, one can argue that ADs are not implemented as often as they should be, particularly in publicly funded confirmatory trials. This study explored barriers, concerns, and potential facilitators to the appropriate use of ADs in confirmatory trials among key stakeholders. Methods: We conducted three cross-sectional, online parallel surveys between November 2014 and January 2015. The surveys were based upon findings drawn from in-depth interviews of key research stakeholders, predominantly in the UK, and targeted Clinical Trials Units (CTUs), public funders, and private sector organisations. Response rates were as follows: 30 (55%) UK CTUs, 17 (68%) private sector, and 86 (41%) public funders. A Rating Scale Model was used to rank barriers and concerns in order of perceived importance for prioritisation. Results: Top-ranked barriers included the lack of bridge funding accessible to UK CTUs to support the design of ADs, limited practical implementation knowledge, preference for traditional mainstream designs, difficulties in marketing ADs to key stakeholders, time constraints to support ADs relative to competing priorities, lack of applied training, and insufficient access to case studies of undertaken ADs to facilitate practical learning and successful implementation. Associated practical complexities and inadequate data management infrastructure to support ADs were reported as more pronounced in the private sector. For funders of public research, the inadequate description of the rationale, scope, and decision-making criteria to guide the planned AD in researchers' grant proposals was viewed as a major obstacle. Conclusions: There are still persistent and important perceptions of individual and organisational obstacles hampering the use of ADs in confirmatory trials research. Stakeholder perceptions about barriers are largely consistent across sectors, with a few exceptions that reflect differences in organisations' funding structures, experiences and characterisation of study interventions. Most barriers appear connected to a lack of practical implementation knowledge and applied training, and limited access to case studies to facilitate practical learning. Keywords: Adaptive designs; flexible designs; barriers; surveys; confirmatory trials; Phase 3; clinical trials; early stopping; interim analyses
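As a rough illustration of the ranking step: the study fits a Rating Scale Model (a Rasch-family model for ordered survey responses) to prioritise barriers. Full RSM estimation needs dedicated tooling, so the minimal sketch below instead ranks hypothetical Likert ratings by mean severity as a crude stand-in; all data are invented for illustration.

```python
# Minimal sketch: ranking survey barriers by perceived importance.
# The study fits a Rating Scale Model (a Rasch-family model); here plain
# mean Likert scores serve as a crude stand-in for the fitted item
# locations. All data below are hypothetical.
import numpy as np

barriers = ["bridge funding", "implementation knowledge",
            "preference for traditional designs", "marketing to stakeholders"]
# rows = respondents, columns = barriers; 1 (not a barrier) .. 5 (major barrier)
ratings = np.array([
    [5, 4, 3, 2],
    [4, 5, 3, 3],
    [5, 3, 4, 2],
    [4, 4, 2, 3],
])

mean_severity = ratings.mean(axis=0)
for rank, idx in enumerate(np.argsort(-mean_severity), start=1):
    print(f"{rank}. {barriers[idx]} (mean rating {mean_severity[idx]:.2f})")
```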
Abstract:
We assess Indian summer monsoon seasonal forecasts in GloSea5-GC2, the Met Office fully coupled subseasonal to seasonal ensemble forecasting system. Using several metrics, GloSea5-GC2 shows similar skill to other state-of-the-art forecast systems. The prediction skill of the large-scale South Asian monsoon circulation is higher than that of Indian monsoon rainfall. Using multiple linear regression analysis we evaluate relationships between Indian monsoon rainfall and five possible drivers of monsoon interannual variability. Over the time period studied (1992-2011), the El Niño-Southern Oscillation (ENSO) and the Indian Ocean dipole (IOD) are the most important of these drivers in both observations and GloSea5-GC2. Our analysis indicates that ENSO and its teleconnection with Indian rainfall are well represented in GloSea5-GC2. However, the relationship between the IOD and Indian rainfall anomalies is too weak in GloSea5-GC2, which may be limiting the prediction skill of the local monsoon circulation and Indian rainfall. We show that this weak relationship likely results from a coupled mean-state bias that limits the impact of anomalous wind forcing on SST variability, resulting in erroneous IOD SST anomalies. Known difficulties in representing convective precipitation over India may also play a role. Since Indian rainfall responds weakly to the IOD, it responds more consistently to ENSO than in observations. Our assessment identifies specific coupled biases that are likely limiting GloSea5-GC2 prediction skill, providing targets for model improvement.
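As a sketch of the regression step described above: a multiple linear regression of seasonal rainfall anomalies on standardised driver indices. The example below uses synthetic ENSO and IOD index series (placeholders, not the study's data) and ordinary least squares; standardising the predictors makes the coefficients directly comparable.

```python
# Sketch of the regression step: regress seasonal Indian monsoon rainfall
# anomalies on candidate driver indices (e.g. ENSO and IOD SST indices).
# The index values here are synthetic placeholders, not the study's data.
import numpy as np

rng = np.random.default_rng(0)
n_years = 20                      # 1992-2011 in the paper
enso = rng.standard_normal(n_years)
iod = rng.standard_normal(n_years)
rain = -0.6 * enso + 0.3 * iod + 0.4 * rng.standard_normal(n_years)

# Standardise predictors so coefficients are directly comparable.
X = np.column_stack([enso, iod])
X = (X - X.mean(axis=0)) / X.std(axis=0)
X = np.column_stack([np.ones(n_years), X])   # intercept column

coef, *_ = np.linalg.lstsq(X, rain, rcond=None)
print(f"intercept={coef[0]:.2f}, ENSO={coef[1]:.2f}, IOD={coef[2]:.2f}")
```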
Abstract:
Idealized explicit convection simulations of the Met Office Unified Model exhibit spontaneous self-aggregation in radiative-convective equilibrium, as seen in other models in previous studies. This self-aggregation is linked to feedbacks between radiation, surface fluxes, and convection, and the organization is intimately related to the evolution of the column water vapor field. Analysis of the budget of the spatial variance of column-integrated frozen moist static energy (MSE), following Wing and Emanuel [2014], reveals that the direct radiative feedback (including significant cloud longwave effects) is dominant in both the initial development of self-aggregation and the maintenance of an aggregated state. A low-level circulation at intermediate stages of aggregation does appear to transport MSE from drier to moister regions, but this circulation is mostly balanced by other advective effects of opposite sign and is forced by horizontal anomalies of convective heating (not radiation). Sensitivity studies with either fixed prescribed radiative cooling, fixed prescribed surface fluxes, or both do not show full self-aggregation from homogeneous initial conditions, though fixed surface fluxes do not disaggregate an initialized aggregated state. A sensitivity study in which rain evaporation is turned off shows more rapid self-aggregation, while a run with this change plus fixed radiative cooling still shows strong self-aggregation, supporting a “moisture memory” effect found in Muller and Bony [2015]. Interestingly, self-aggregation occurs even in simulations with sea surface temperatures (SSTs) of 295 K and 290 K, with direct radiative feedbacks dominating the budget of MSE variance, in contrast to results in some previous studies.
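For reference, the budget analysed above takes roughly the following form in the notation of Wing and Emanuel [2014], where ĥ is the column-integrated frozen MSE, primes denote anomalies from the domain mean, F_K is the surface enthalpy flux and N_R the column radiative flux convergence. This is a schematic; the paper's exact notation may differ.

```latex
% Schematic form of the MSE spatial variance budget:
% \hat{h} = column-integrated frozen MSE, primes = anomalies from the
% domain mean, F_K = surface enthalpy flux, N_R = column radiative flux
% convergence, last term = horizontal advective transport.
\frac{1}{2}\frac{\partial \hat{h}'^{2}}{\partial t}
    = \hat{h}' F_K' + \hat{h}' N_R'
    - \hat{h}' \, \nabla_h \cdot \widehat{\mathbf{u}h}'
```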
Abstract:
Systematic review (SR) is a rigorous, protocol-driven approach designed to minimise error and bias when summarising the body of research evidence relevant to a specific scientific question. Taking as a comparator the use of SR in synthesising research in healthcare, we argue that SR methods could also pave the way for a “step change” in the transparency, objectivity and communication of chemical risk assessments (CRA) in Europe and elsewhere. We suggest that current controversies around the safety of certain chemicals are partly due to limitations in current CRA procedures which have contributed to ambiguity about the health risks posed by these substances. We present an overview of how SR methods can be applied to the assessment of risks from chemicals, and indicate how challenges in adapting SR methods from healthcare research to the CRA context might be overcome. Regarding the latter, we report the outcomes from a workshop exploring how to increase uptake of SR methods, attended by experts representing a wide range of fields related to chemical toxicology, risk analysis and SR. Priorities which were identified include: the conduct of CRA-focused prototype SRs; the development of a recognised standard of reporting and conduct for SRs in toxicology and CRA; and establishing a network to facilitate research, communication and training in SR methods. We see this paper as a milestone in the creation of a research climate that fosters communication between experts in CRA and SR and facilitates wider uptake of SR methods into CRA.
Abstract:
A recent field campaign in southwest England used numerical modeling integrated with aircraft and radar observations to investigate the dynamic and microphysical interactions that can result in heavy convective precipitation. The COnvective Precipitation Experiment (COPE) was a joint UK-US field campaign held during the summer of 2013 in the southwest peninsula of England, designed to study convective clouds that produce heavy rain leading to flash floods. The clouds form along convergence lines that develop regularly due to the topography. Major flash floods have occurred in the past, most famously at Boscastle in 2004. It has been suggested that much of the rain was produced by warm rain processes, similar to some flash floods that have occurred in the US. The overarching goal of COPE is to improve quantitative convective precipitation forecasting by understanding the interactions of the cloud microphysics and dynamics and thereby to improve NWP model skill for forecasts of flash floods. Two research aircraft, the University of Wyoming King Air and the UK BAe 146, obtained detailed in situ and remote sensing measurements in, around, and below storms on several days. A new fast-scanning X-band dual-polarization Doppler radar made 360-deg volume scans over 10 elevation angles approximately every 5 minutes, and was augmented by two UK Met Office C-band radars and the Chilbolton S-band radar. Detailed aerosol measurements were made on the aircraft and on the ground. This paper: (i) provides an overview of the COPE field campaign and the resulting dataset; (ii) presents examples of heavy convective rainfall in clouds containing ice and also in relatively shallow clouds through the warm rain process alone; and (iii) explains how COPE data will be used to improve high-resolution NWP models for operational use.
Abstract:
Liquidity is a fundamentally important facet of investments, but there is no single measure that quantifies it perfectly. Instead, a range of measures are necessary to capture different dimensions of liquidity such as the breadth and depth of markets, the costs of transacting, the speed with which transactions can occur and the resilience of prices to trading activity. This article considers how different dimensions have been measured in financial markets and for various forms of real estate investment. The purpose of this exercise is to establish the range of liquidity measures that could be used for real estate investments before considering which measures and questions have been investigated so far. Most measures reviewed here are applicable to public real estate, but not all can be applied to private real estate assets or funds. Use of a broader range of liquidity measures could help real estate researchers tackle issues such as quantification of illiquidity premiums for the real estate asset class or different types of real estate, and how liquidity differences might be incorporated into portfolio allocation models.
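As an example of one such measure (not necessarily one examined in the article), the sketch below computes the Amihud (2002) illiquidity ratio, a common proxy for the price-impact dimension of liquidity, on synthetic data.

```python
# Example of one price-impact liquidity measure: the Amihud (2002)
# illiquidity ratio, the mean of |daily return| / traded volume.
# Higher values mean a given volume moves the price more, i.e. lower
# liquidity. Data below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(1)
prices = 100 * np.exp(np.cumsum(0.01 * rng.standard_normal(250)))
volume = rng.uniform(1e5, 1e6, size=250)       # currency traded per day

returns = np.diff(prices) / prices[:-1]
amihud = np.mean(np.abs(returns) / volume[1:])
print(f"Amihud illiquidity ratio: {amihud:.3e}")
```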
Abstract:
ISO19156 Observations and Measurements (O&M) provides a standardised framework for organising information about the collection of information about the environment. Here we describe the implementation of a specialisation of O&M for environmental data, the Metadata Objects for Linking Environmental Sciences (MOLES3). MOLES3 provides support for organising information about data, and for user navigation around data holdings. The implementation described here, "CEDA-MOLES", also supports data management functions for the Centre for Environmental Data Archival (CEDA). The previous iteration of MOLES (MOLES2) saw active use over five years, being replaced by CEDA-MOLES in late 2014. During that period important lessons were learnt both about the information needed and about how to design and maintain the necessary information systems. In this paper we review the problems encountered in MOLES2; how and why CEDA-MOLES was developed and engineered; the migration of information holdings from MOLES2 to CEDA-MOLES; and, finally, provide an early assessment of MOLES3 (as implemented in CEDA-MOLES) and its limitations. Key drivers for the MOLES3 development included the necessity for improved data provenance, for further structured information to support ISO19115 discovery metadata export (for EU INSPIRE compliance), and for appropriate fixed landing pages for Digital Object Identifiers (DOIs) in the presence of evolving datasets. Key lessons learned included the importance of minimising information structure in free text fields, and the necessity to support as much agility in the information infrastructure as possible without compromising maintainability, both for those using the systems internally and externally (e.g. citing into the information infrastructure) and for those responsible for the systems themselves. The migration itself needed to ensure continuity of service and traceability of archived assets.
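A hypothetical sketch of what an O&M-style record with a fixed DOI landing page for an evolving dataset might look like is given below; the field names are illustrative only and do not reflect the actual CEDA-MOLES schema.

```python
# Hypothetical sketch of an O&M-style observation record with a fixed DOI
# landing page for an evolving dataset. Field names are illustrative and
# do not reflect the actual CEDA-MOLES schema.
from dataclasses import dataclass, field

@dataclass
class Observation:
    identifier: str                 # stable ID, basis of the DOI landing page
    doi: str
    title: str
    procedure: str                  # how the data were acquired
    feature_of_interest: str        # what was observed
    result_versions: list = field(default_factory=list)  # evolving dataset

    def landing_page(self) -> str:
        # A fixed landing page keeps the DOI resolvable as versions accrue.
        return f"https://example.org/records/{self.identifier}"

obs = Observation("obs-0001", "10.0000/example-doi",
                  "Surface temperature series", "automatic weather station",
                  "UK surface air temperature")
obs.result_versions.append("v1.0")
print(obs.landing_page())
```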
Abstract:
BACKGROUND: Succinate dehydrogenase inhibitor (SDHI) fungicides are important in the management of Zymoseptoria tritici in wheat. New active ingredients from this group of fungicides have been introduced recently and are widely used. Because the fungicides act at a single enzyme site, resistance development in Z. tritici is classified as medium-to-high risk. RESULTS: Isolates from Irish experimental plots in 2015 were tested against the SDHI penthiopyrad during routine monitoring. The median of the population was approximately 2× less sensitive than the median of the baseline population. Two of the 93 isolates were much less sensitive to penthiopyrad than the least sensitive of the baseline isolates. These isolates were also insensitive to most of the commercially available SDHIs. Analysis of the succinate dehydrogenase coding genes confirmed the presence of the substitutions SdhC-H152R and SdhD-R47W in the very insensitive isolates. CONCLUSION: This is the first report showing that the SdhC-H152R mutation detected in laboratory mutagenesis studies also exists in the field. The function and relevance of this mutation, combined with SdhD-R47W, still need to be determined.
Abstract:
As part of an international intercomparison project, the weak temperature gradient (WTG) and damped gravity wave (DGW) methods are used to parameterize large-scale dynamics in a set of cloud-resolving models (CRMs) and single column models (SCMs). The WTG or DGW method is implemented using a configuration that couples a model to a reference state defined with profiles obtained from the same model in radiative-convective equilibrium. We investigate the sensitivity of each model to changes in SST, given a fixed reference state, and systematically compare the behavior of the models under the WTG and DGW methods. The sensitivity to SST depends on both the large-scale parameterization method and the choice of the cloud model. In general, SCMs display a wider range of behaviors than CRMs. All CRMs using either the WTG or DGW method show an increase of precipitation with SST, while SCMs show sensitivities which are not always monotonic. CRMs using either the WTG or DGW method show a similar relationship between mean precipitation rate and column relative humidity, while SCMs exhibit a much wider range of behaviors. DGW simulations produce large-scale velocity profiles which are smoother and less top-heavy than those produced by the WTG simulations. These large-scale parameterization methods provide a useful tool to identify the impact of parameterization differences on model behavior in the presence of two-way feedback between convection and the large-scale circulation.
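For orientation, commonly used forms of the two coupling closures are sketched below; sign conventions and notation vary between papers, so this is a schematic rather than necessarily the exact formulation used in the intercomparison. Here θ_ref and T_v,ref are the reference profiles, τ the WTG relaxation timescale, ε the momentum damping rate, and k the horizontal wavenumber.

```latex
% Schematic WTG closure: diagnose the large-scale vertical velocity so
% that adiabatic cooling relaxes potential temperature to the reference
% profile over a timescale tau:
w_{\mathrm{WTG}} \frac{\partial \theta}{\partial z}
    = \frac{\theta - \theta_{\mathrm{ref}}}{\tau}

% Schematic DGW closure: solve for the pressure velocity of a single
% damped gravity wave of horizontal wavenumber k with momentum damping
% epsilon, forced by virtual temperature anomalies:
\frac{\partial}{\partial p}\!\left(\epsilon \frac{\partial \omega}{\partial p}\right)
    = \frac{k^{2} R_{d}}{p}\left(T_{v} - T_{v,\mathrm{ref}}\right)
```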
Abstract:
The potential impact of the abrupt 8.2 ka cold event on human demography, settlement patterns and culture in Europe and the Near East has emerged as a key theme in current discussion and debate. We test whether this event had an impact on the Mesolithic population of western Scotland, a case study located within the North Atlantic region where the environmental impact of the 8.2 ka event is likely to have been the most severe. By undertaking a Bayesian analysis of the radiocarbon record and using the number of activity events as a proxy for the size of the human population, we find evidence for a dramatic reduction in the Mesolithic population synchronous with the 8.2 ka event. We interpret this as reflecting the demographic collapse of a low density population that lacked the capability to adapt to the rapid onset of new environmental conditions. This impact of the 8.2 ka event in the North Atlantic region lends credence to the possibility of a similar impact on populations in Continental Europe and the Near East.
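A crude illustration of the "dated activity events as a population proxy" idea is sketched below: the density of event ages through time stands in for relative population size. The ages are synthetic placeholders; the real analysis involves formal radiocarbon calibration and Bayesian modelling, which this sketch omits.

```python
# Crude illustration of using the density of dated "activity events" as a
# proxy for relative population size. Real analyses calibrate the 14C
# dates and model them formally; the ages below are synthetic.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)
# Synthetic calibrated ages (cal BP) with a decline after the 8.2 ka event.
ages = np.concatenate([rng.normal(8600, 150, 40), rng.normal(7600, 200, 15)])

grid = np.linspace(9000, 7000, 201)
density = gaussian_kde(ages)(grid)
print(f"density at 8600 cal BP: {density[np.argmin(np.abs(grid - 8600))]:.4f}")
print(f"density at 8200 cal BP: {density[np.argmin(np.abs(grid - 8200))]:.4f}")
```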
Abstract:
Background: Accurate dietary assessment is key to understanding nutrition-related outcomes and is essential for estimating dietary change in nutrition-based interventions. Objective: The objective of this study was to assess the pan-European reproducibility of the Food4Me food-frequency questionnaire (FFQ) in assessing the habitual diet of adults. Methods: Participants from the Food4Me study, a 6-mo, Internet-based, randomized controlled trial of personalized nutrition conducted in the United Kingdom, Ireland, Spain, Netherlands, Germany, Greece, and Poland were included. Screening and baseline data (both collected before commencement of the intervention) were used in the present analyses, and participants were included only if they completed FFQs at screening and at baseline within a 1-mo timeframe before the commencement of the intervention. Sociodemographic (e.g., sex and country) and lifestyle [e.g., body mass index (BMI, in kg/m²) and physical activity] characteristics were collected. Linear regression, correlation coefficients, concordance (percentage) in quartile classification, and Bland-Altman plots for daily intakes were used to assess reproducibility. Results: In total, 567 participants (59% female), with a mean ± SD age of 38.7 ± 13.4 y and BMI of 25.4 ± 4.8, completed both FFQs within 1 mo (mean ± SD: 19.2 ± 6.2 d). Exact plus adjacent classification of total energy intake in participants was highest in Ireland (94%) and lowest in Poland (81%). Spearman correlation coefficients (r) in total energy intake between FFQs ranged from 0.50 for obese participants to 0.68 and 0.60 in normal-weight and overweight participants, respectively. Bland-Altman plots showed a mean difference between FFQs of −10 kcal/d, with the agreement deteriorating as energy intakes increased. There was little variation in reproducibility of total energy intakes between sex and age groups. Conclusions: The online Food4Me FFQ was shown to be reproducible across 7 European countries when administered within a 1-mo period to a large number of participants. The results support the utility of the online Food4Me FFQ as a reproducible tool across multiple European populations. This trial was registered at clinicaltrials.gov as NCT01530139.
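The two agreement statistics named above (Spearman correlation, and Bland-Altman bias with 95% limits of agreement) can be computed as in the minimal sketch below; the intake values are synthetic placeholders, not Food4Me data.

```python
# Sketch of the two agreement statistics used for the repeated FFQs:
# Spearman correlation and Bland-Altman mean difference with 95% limits
# of agreement. Energy intakes below are synthetic placeholders.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
ffq1 = rng.normal(2000, 450, 200)            # kcal/d at screening
ffq2 = ffq1 + rng.normal(-10, 180, 200)      # kcal/d at baseline

rho, _ = spearmanr(ffq1, ffq2)
diff = ffq2 - ffq1
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)                # half-width of 95% LoA
print(f"Spearman r = {rho:.2f}")
print(f"Bland-Altman bias = {bias:.0f} kcal/d, 95% LoA = +/- {loa:.0f} kcal/d")
```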
Abstract:
The self-assembly in aqueous solution of three novel telechelic conjugates comprising a central hydrophilic polymer and short (trimeric or pentameric) tyrosine end-caps has been investigated. Two of the conjugates have a central poly(oxyethylene) (poly(ethylene oxide), PEO) block with different molar masses. The other conjugate has a central poly(l-alanine) (PAla) sequence in a purely amino-acid-based conjugate. All three conjugates self-assemble into β-sheet-based fibrillar structures, although the fibrillar morphology revealed by cryogenic TEM is distinct for the three polymers; in particular, Tyr5-PEO6k-Tyr5 forms a population of short, straight fibrils, in contrast to the more diffuse fibril aggregates observed for Tyr5-PEO2k-Tyr5 and Tyr3-PAla-Tyr3. Hydrogel formation was not observed for these samples (in contrast to prior work on related systems) up to quite high concentrations, showing that it is possible to prepare solutions of peptide-polymer-peptide conjugates with hydrophobic end-caps without the conformational constraints associated with hydrogelation. Tyr5-PEO6k-Tyr5 shows significant PEO crystallization upon drying, in contrast to the Tyr5-PEO2k-Tyr5 conjugate. Our findings point to the remarkable ability of short hydrophobic peptide end groups to modulate the self-assembly properties of polymers in solution in model peptide-capped "associative polymers". Retention of fluidity at high conjugate concentration may be valuable in potential future applications of these conjugates as bioresponsive or biocompatible materials, for example exploiting the enzyme-responsiveness of the tyrosine end-groups.
Abstract:
The vertical profile of aerosol is important for its radiative effects, but weakly constrained by observations on the global scale, and highly variable among different models. To investigate the controlling factors in one particular model, we investigate the effects of individual processes in HadGEM3–UKCA and compare the resulting diversity of aerosol vertical profiles with the inter-model diversity from the AeroCom Phase II control experiment. In this way we show that (in this model at least) the vertical profile is controlled by a relatively small number of processes, although these vary among aerosol components and particle sizes. We also show that sufficiently coarse variations in these processes can produce a similar diversity to that among different models in terms of the global-mean profile and, to a lesser extent, the zonal-mean vertical position. However, there are features of certain models' profiles that cannot be reproduced, suggesting the influence of further structural differences between models. In HadGEM3–UKCA, convective transport is found to be very important in controlling the vertical profile of all aerosol components by mass. In-cloud scavenging is very important for all except mineral dust. Growth by condensation is important for sulfate and carbonaceous aerosol (along with aqueous oxidation for the former and ageing by soluble material for the latter). The vertical extent of biomass-burning emissions into the free troposphere is also important for the profile of carbonaceous aerosol. Boundary-layer mixing plays a dominant role for sea salt and mineral dust, which are emitted only from the surface. Dry deposition and below-cloud scavenging are important for the profile of mineral dust only. In this model, the microphysical processes of nucleation, condensation and coagulation dominate the vertical profile of the smallest particles by number (e.g. total CN > 3 nm), while the profiles of larger particles (e.g. CN > 100 nm) are controlled by the same processes as the component mass profiles, plus the size distribution of primary emissions. We also show that the processes that affect the AOD-normalised radiative forcing in the model are predominantly those that affect the vertical mass distribution, in particular convective transport, in-cloud scavenging, aqueous oxidation, ageing and the vertical extent of biomass-burning emissions.
Abstract:
Sea-level rise (SLR) from global warming may have severe consequences for coastal cities, particularly when combined with predicted increases in the strength of tidal surges. Predicting the regional impact of SLR flooding is strongly dependent on the modelling approach and the accuracy of topographic data. Here, the areas at risk of sea water flooding in the London boroughs were quantified based on the projected SLR scenarios reported in the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report (AR5) and the UK Climate Projections 2009 (UKCP09), using a tidally-adjusted bathtub modelling approach. Medium- to very high-resolution digital elevation models (DEMs) are used to evaluate inundation extents as well as uncertainties. Depending on the SLR scenario and the DEMs used, it is estimated that 3%–8% of the area of Greater London could be inundated by 2100. The boroughs with the largest areas at risk of flooding are Newham, Southwark, and Greenwich. The differences in inundation areas estimated from a digital terrain model and a digital surface model are much greater than the root mean square error differences observed between the two data types, which may be attributed to processing levels. Flood models from SRTM data underestimate the inundation extent, so their results may not be reliable for constructing flood risk maps. This analysis provides a broad-scale estimate of the potential consequences of SLR and of the uncertainties in DEM-based bathtub-type flood inundation modelling for the London boroughs.
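A minimal sketch of a bathtub inundation calculation with a hydraulic connectivity rule is shown below, assuming a synthetic DEM and a single scalar water level; the study itself used high-resolution DEMs and a tidally-adjusted water surface.

```python
# Minimal sketch of a bathtub inundation model with a connectivity rule:
# a cell floods only if it lies below the projected water level AND is
# hydraulically connected to the sea. The DEM here is a synthetic array;
# a real analysis would load a high-resolution DEM and a tidally-adjusted
# water surface rather than a single scalar level.
import numpy as np
from scipy.ndimage import label

rng = np.random.default_rng(4)
dem = rng.uniform(0.0, 6.0, size=(100, 100))   # elevations in metres
dem[:, 0] = 0.0                                # left edge = open water
water_level = 1.5                              # SLR + tidal surge, metres

below = dem <= water_level
regions, _ = label(below)                      # default 4-connectivity
sea_labels = np.unique(regions[:, 0][below[:, 0]])
flooded = np.isin(regions, sea_labels) & below

print(f"inundated fraction: {flooded.mean():.1%}")
```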