Abstract:
The last decade has seen successful clinical application of polymer–protein conjugates (e.g. Oncaspar, Neulasta) and promising results in clinical trials with polymer–anticancer drug conjugates. This, together with the realisation that nanomedicines may play an important future role in cancer diagnosis and treatment, has increased interest in this emerging field. More than 10 anticancer conjugates have now entered clinical development. Phase I/II clinical trials involving N-(2-hydroxypropyl)methacrylamide (HPMA) copolymer–doxorubicin (PK1; FCE28068) showed a four- to fivefold reduction in anthracycline-related toxicity, and, despite cumulative doses up to 1680 mg/m² (doxorubicin equivalent), no cardiotoxicity was observed. Antitumour activity in chemotherapy-resistant/refractory patients (including breast cancer) was also seen at doxorubicin doses of 80–320 mg/m², consistent with tumour targeting by the enhanced permeability and retention (EPR) effect. Preclinical and clinical hints that polymer–anthracycline conjugation can bypass multidrug resistance (MDR) reinforce our hope that polymer drugs will prove useful in improving treatment of endocrine-related cancers. These promising early clinical results open the possibility of using water-soluble polymers as platforms for delivery of a cocktail of pendant drugs. In particular, we have recently described the first conjugates to combine endocrine therapy and chemotherapy. Their markedly enhanced in vitro activity encourages further development of such novel, polymer-based combination therapies. This review briefly describes the current status of polymer therapeutics as anticancer agents, and discusses the opportunities for design of second-generation, polymer-based combination therapy, including the cocktail of agents that will be needed to treat resistant metastatic cancer.
Abstract:
In recent years, researchers and policy makers have recognized that nontimber forest products (NTFPs) extracted from forests by rural people can make a significant contribution to their well-being and to the local economy. This study presents and discusses data that describe the contribution of NTFPs to cash income in the dry deciduous forests of Orissa and Jharkhand, India. In its focus on cash income, this study sheds light on how the sale of NTFPs, and of products that use NTFPs as inputs, contributes to the rural economy. From analysis of a unique data set that was collected over the course of a year, the study finds that the contribution of NTFPs to cash income varies across ecological settings, seasons, income level, and caste. Such variation should inform where and when to apply NTFP forest access and management policies.
Abstract:
In this paper, various types of fault detection methods for fuel cells are compared: for example, those that use a model-based approach, a data-driven approach, or a combination of the two. The potential advantages and drawbacks of each method are discussed and comparisons between methods are made. In particular, classification algorithms are investigated, which separate a data set into classes or clusters based on some prior knowledge or measure of similarity. The application of classification methods either to vectors of currents reconstructed by magnetic tomography or directly to vectors of magnetic field measurements is explored. Bases are simulated using the finite integration technique (FIT) and regularization techniques are employed to overcome ill-posedness. Fisher's linear discriminant is used to illustrate these concepts. Numerical experiments show that the ill-posedness of the magnetic tomography problem is a part of the classification problem on magnetic field measurements as well. This is independent of the particular working mode of the cell but influenced by the type of faulty behavior that is studied. The numerical results demonstrate the ill-posedness by the exponential decay behavior of the singular values for three examples of fault classes.
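As an illustration of the classification step, the following is a minimal sketch of Fisher's linear discriminant applied to vectors of magnetic field measurements, with simple Tikhonov-style regularization standing in for the regularization techniques mentioned above. All names and the regularization choice are illustrative, not the paper's implementation.

```python
import numpy as np

def fisher_lda_direction(X_healthy, X_faulty, reg=1e-6):
    """Fisher's linear discriminant between two classes of measurement
    vectors (rows = observations, columns = sensors)."""
    mu0, mu1 = X_healthy.mean(axis=0), X_faulty.mean(axis=0)
    # Within-class scatter (up to scaling), regularized because the
    # underlying magnetic tomography problem is ill-posed.
    Sw = np.cov(X_healthy, rowvar=False) + np.cov(X_faulty, rowvar=False)
    Sw += reg * np.eye(Sw.shape[0])
    w = np.linalg.solve(Sw, mu1 - mu0)   # discriminant direction
    threshold = 0.5 * w @ (mu0 + mu1)    # midpoint decision boundary
    return w, threshold

# Usage: classify a new measurement vector b as faulty if w @ b > threshold.
```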
Abstract:
The performance of real estate investment markets is difficult to monitor because the constituent assets are heterogeneous, are traded infrequently and do not trade through a central exchange in which prices can be observed. To address this, appraisal-based indices have been developed that use the records of owners for whom buildings are regularly re-valued. These indices provide a practical solution to the measurement problem, but have been criticised for understating volatility and not capturing market turning points in a timely manner. This paper evaluates alternative ‘Transaction Linked Indices’ that are estimated using an extension of the hedonic method for index construction and with Investment Property Databank data. The two types of indices are compared over Q4 2001 to Q4 2012 in order to examine whether movements in these indices are consistent. The Transaction Linked Indices show stronger growth and sharper declines than their appraisal-based counterparts over the course of the cycle in different European markets and they are typically two to four times more volatile. However, they have some limitations; for instance, only country-level indicators can be published in many cases owing to low trading volumes in the period studied.
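For readers unfamiliar with the technique, a time-dummy hedonic regression is one common way to build a transaction-based index; the sketch below is a generic illustration of that idea, not the Investment Property Databank methodology, and all variable names are hypothetical.

```python
import numpy as np

def hedonic_index(log_prices, X_chars, quarter_ids, n_quarters):
    """Time-dummy hedonic regression: log(price) = X*beta + d_t + error.
    Exponentiated quarter dummies give the transaction-linked index,
    normalised to 1 in the first quarter."""
    n = len(log_prices)
    D = np.zeros((n, n_quarters))
    D[np.arange(n), quarter_ids] = 1.0        # one dummy per quarter
    Z = np.hstack([X_chars, D])               # characteristics + time dummies
    coef, *_ = np.linalg.lstsq(Z, log_prices, rcond=None)
    d = coef[X_chars.shape[1]:]               # time-dummy coefficients
    return np.exp(d - d[0])                   # index level per quarter
```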
Abstract:
Background: Jargon aphasia is one of the most intractable forms of aphasia, with limited recommendations on ameliorating the associated naming difficulties and neologisms. The few naming therapy studies that exist in jargon aphasia have utilized either semantic or phonological approaches, but the results have been equivocal. Moreover, the effect of therapy on the characteristics of neologisms is less explored. Aims: This study investigates the effectiveness of a phonological naming therapy (i.e., phonological component analysis, PCA) on picture naming abilities and on quantitative and qualitative changes in neologisms for an individual with jargon aphasia (FF). Methods: FF showed evidence of jargon aphasia with severe naming difficulties and produced a very high proportion of neologisms. A single-subject multiple probe design across behaviors was employed to evaluate the effects of PCA therapy on the accuracy for three sets of words. In therapy, a phonological components analysis chart was used to identify five phonological components (i.e., rhymes, first sound, first sound associate, final sound, number of syllables) for each target word. Generalization effects (change in percent accuracy and error pattern) were examined by comparing pre- and post-therapy responses on the Philadelphia Naming Test, and these responses were analyzed to explore the characteristics of the neologisms. The quantitative change in neologisms was measured by the change in the proportion of neologisms from pre- to post-therapy, and the qualitative change was indexed by the phonological overlap between target and neologism. Results: As a consequence of PCA therapy, FF showed a significant improvement in his ability to name the treated items. His performance in the maintenance and follow-up phases remained comparable to his performance during the therapy phases. Generalization to other naming tasks did not show a change in accuracy, but distinct differences in error pattern (an increase in the proportion of real word responses and a decrease in the proportion of neologisms) were observed. Notably, the decrease in neologisms was accompanied by a trend towards increased phonological similarity between the neologisms and the targets. Conclusions: This study demonstrated the effectiveness of a phonological therapy for improving naming abilities and reducing the number of neologisms in an individual with severe jargon aphasia. The positive outcome of this research is encouraging, as it provides evidence for effective therapies for jargon aphasia and also emphasizes that the quality and quantity of errors may provide a sensitive outcome measure for determining therapy effectiveness, in particular for client groups who are difficult to treat.
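The abstract does not specify how phonological overlap was computed; one hypothetical way to quantify overlap between target and neologism phoneme sequences is a shared-phoneme proportion, sketched below purely for illustration (this is not the authors' metric).

```python
def phonological_overlap(target, neologism):
    """Proportion of target phonemes also present in the neologism,
    position-free; inputs are phoneme lists, e.g. ['k', 'ae', 't']."""
    shared = 0
    pool = list(neologism)          # consume matches so repeats count once
    for ph in target:
        if ph in pool:
            pool.remove(ph)
            shared += 1
    return shared / len(target) if target else 0.0

# e.g. phonological_overlap(['k','ae','t'], ['k','ae','p']) -> 0.67
```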
Abstract:
A number of urban land-surface models have been developed in recent years to satisfy the growing requirements for urban weather and climate interactions and prediction. These models vary considerably in their complexity and the processes that they represent. Although the models have been evaluated, the observational datasets have typically been of short duration and so are not suitable to assess performance over the seasonal cycle. The First International Urban Land-Surface Model Comparison used an observational dataset that spanned a period greater than a year, which enabled an analysis over the seasonal cycle, whilst the variety of models that took part allowed the analysis to include a full range of model complexity. The results show that, in general, urban models do capture the seasonal cycle for each of the surface fluxes, but have larger errors in the summer months than in the winter. The net all-wave radiation has the smallest errors at all times of the year, but with a negative bias. The latent heat flux and the net storage heat flux are also underestimated, whereas the sensible heat flux generally has a positive bias throughout the seasonal cycle. A representation of vegetation is a necessary, but not sufficient, condition for modelling the latent heat flux and associated sensible heat flux at all times of the year. Models that include a temporal variation in anthropogenic heat flux show some increased skill in the sensible heat flux at night during the winter, although their daytime values are consistently overestimated at all times of the year. Models that use the net all-wave radiation to determine the net storage heat flux have the best agreement with observed values of this flux during the daytime in summer, but perform worse during the winter months. The latter could result from a bias towards summer periods in the observational datasets used to derive the relations with net all-wave radiation. Apart from these models, all of the other model categories considered in the analysis result in a mean net storage heat flux that is close to zero throughout the seasonal cycle, which is not seen in the observations. Models with a simple treatment of the physical processes generally perform at least as well as models with greater complexity.
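Models in the category that determines storage heat flux from net all-wave radiation typically use an Objective-Hysteresis-Model-style parameterization; a sketch of that general form follows, with surface fractions and a-coefficients as placeholder inputs (coefficient values are surface-specific and are assumptions here, not values from the comparison).

```python
import numpy as np

def ohm_storage_flux(q_star, dt_s, fractions, coeffs):
    """Objective-Hysteresis-Model-style net storage heat flux:
    dQs = sum_i f_i * (a1_i * Q* + a2_i * dQ*/dt + a3_i),
    where Q* is the net all-wave radiation series (W m-2), dt_s the
    timestep in seconds, and a2 conventionally carries units of hours."""
    dq_dt = np.gradient(q_star, dt_s) * 3600.0   # W m-2 per hour
    dqs = np.zeros_like(q_star, dtype=float)
    for f, (a1, a2, a3) in zip(fractions, coeffs):
        dqs += f * (a1 * q_star + a2 * dq_dt + a3)
    return dqs
```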
Abstract:
Global wetlands are believed to be climate sensitive, and are the largest natural emitters of methane (CH4). Increased wetland CH4 emissions could act as a positive feedback to future warming. The Wetland and Wetland CH4 Inter-comparison of Models Project (WETCHIMP) investigated our present ability to simulate large-scale wetland characteristics and corresponding CH4 emissions. To ensure inter-comparability, we used a common experimental protocol, driving all models with the same climate and carbon dioxide (CO2) forcing datasets. The WETCHIMP experiments were conducted for model equilibrium states as well as transient simulations covering the last century. Sensitivity experiments investigated model response to changes in selected forcing inputs (precipitation, temperature, and atmospheric CO2 concentration). Ten models participated, covering the spectrum from simple to relatively complex, including models tailored either for regional or global simulations. The models also varied in their methods of calculating wetland size and location, with some models simulating wetland area prognostically, while other models relied on remotely sensed inundation datasets, or an approach intermediate between the two. Four major conclusions emerged from the project. First, the suite of models demonstrates extensive disagreement in their simulations of wetland areal extent and CH4 emissions, in both space and time. Simple metrics of wetland area, such as the latitudinal gradient, show large variability, principally between models that use inundation dataset information and those that independently determine wetland area. Agreement between the models improves for zonally summed CH4 emissions, but large variation between the models remains. For annual global CH4 emissions, the models vary by ±40% of the all-model mean (190 Tg CH4 yr−1), i.e., roughly 114–266 Tg CH4 yr−1. Second, all models show a strong positive response to increased atmospheric CO2 concentrations (857 ppm) in both CH4 emissions and wetland area. In response to increasing global temperatures (+3.4 °C globally, spatially uniform), on average, the models decreased wetland area and CH4 fluxes, primarily in the tropics, but the magnitude and sign of the response varied greatly. Models were least sensitive to increased global precipitation (+3.9% globally, spatially uniform), with a consistent small positive response in CH4 fluxes and wetland area. Results from the 20th century transient simulation show that interactions between climate forcings could have strong non-linear effects. Third, we presently lack wetland methane observation datasets adequate to evaluate model fluxes at a spatial scale comparable to model grid cells (commonly 0.5°). This limitation severely restricts our ability to model global wetland CH4 emissions with confidence. Our simulated wetland extents are also difficult to evaluate due to extensive disagreements between wetland mapping and remotely sensed inundation datasets. Fourth, the large range in predicted CH4 emission rates leads to the conclusion that there is both substantial parameter and structural uncertainty in large-scale CH4 emission models, even after uncertainties in wetland areas are accounted for.
Abstract:
Urbanization-related alterations to the surface energy balance impact urban warming (‘heat islands’), the growth of the boundary layer, and many other biophysical processes. Traditionally, in situ heat flux measurements have been used to quantify such processes, but these typically represent only a small local-scale area within the heterogeneous urban environment. For this reason, remote sensing approaches are very attractive for elucidating more spatially representative information. Here we use hyperspectral imagery from a new airborne sensor, the Operative Modular Imaging Spectrometer (OMIS), along with a survey map and meteorological data, to derive the land cover information and surface parameters required to map spatial variations in turbulent sensible heat flux (QH). The results from two spatially explicit flux retrieval methods, which use contrasting approaches and, to a large degree, different input data, are compared for a central urban area of Shanghai, China: (1) the Local-scale Urban Meteorological Parameterization Scheme (LUMPS) and (2) an Aerodynamic Resistance Method (ARM). Sensible heat fluxes are determined at the full 6 m spatial resolution of the OMIS sensor, and at lower resolutions via pixel aggregation and spatial averaging. At the 6 m spatial resolution, the sensible heat flux of rooftop-dominated pixels exceeds that of roads, water and vegetated areas, with values peaking at ∼350 W m−2, whilst the storage heat flux is greatest for road-dominated pixels (peaking at around 420 W m−2). We investigate the use of both OMIS-derived land surface temperatures, obtained using a Temperature–Emissivity Separation (TES) approach, and land surface temperatures estimated from air temperature measurements. Sensible heat flux differences between the two approaches over the entire 2 × 2 km study area are less than 30 W m−2, suggesting that methods employing either strategy may be practical when operated using low spatial resolution (e.g. 1 km) data. Due to the differing methodologies, direct comparisons between results obtained with the LUMPS and ARM methods are most sensibly made at reduced spatial scales. At 30 m spatial resolution, both approaches produce similar results, with the smallest difference being less than 15 W m−2 in mean QH averaged over the entire study area. This is encouraging given the differing architecture and data requirements of the LUMPS and ARM methods. Furthermore, in terms of mean study-area QH, the results obtained by averaging the original 6 m spatial resolution LUMPS-derived QH values to 30 and 90 m spatial resolution are within ∼5 W m−2 of those derived from averaging the original surface parameter maps prior to input into LUMPS, suggesting that use of much lower spatial resolution spaceborne imagery, for example from the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), is likely to be a practical solution for heat flux determination in urban areas.
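As a concrete example of the aerodynamic resistance approach, the core calculation is a single bulk-transfer expression; the sketch below is a generic illustration, not the ARM implementation used in the study, and the resistance value in the usage note is purely illustrative.

```python
def sensible_heat_flux(t_surface, t_air, r_ah, rho=1.2, cp=1004.0):
    """Aerodynamic-resistance estimate of sensible heat flux (W m-2):
    QH = rho * cp * (Ts - Ta) / r_ah,
    with temperatures in K, r_ah the aerodynamic resistance to heat
    transfer (s m-1), rho air density (kg m-3), cp specific heat."""
    return rho * cp * (t_surface - t_air) / r_ah

# e.g. a roof surface 8 K warmer than the air with r_ah = 60 s m-1:
# sensible_heat_flux(303.15, 295.15, 60.0) -> ~161 W m-2
```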
Abstract:
In the last two decades substantial advances have been made in the understanding of the scientific basis of urban climates. These are reviewed here with attention to sustainability of cities, applications that use climate information, and scientific understanding in relation to measurements and modelling. Consideration is given from street (micro) scale to neighbourhood (local) to city and region (meso) scale. Those areas where improvements are needed in the next decade to ensure more sustainable cities are identified. High-priority recommendations are made in the following six strategic areas: observations, data, understanding, modelling, tools and education. These include the need for more operational urban measurement stations and networks; for an international data archive to aid translation of research findings into design tools, along with guidelines for different climate zones and land uses; to develop methods to analyse atmospheric data measured above complex urban surfaces; to improve short-range, high-resolution numerical prediction of weather, air quality and chemical dispersion through improved modelling of the biogeophysical features of the urban land surface; to improve education about urban meteorology; and to encourage communication across scientific disciplines at a range of spatial and temporal scales.
Abstract:
Simultaneous scintillometer measurements at multiple wavelengths (pairing visible or infrared with millimetre or radio waves) have the potential to provide estimates of path-averaged surface fluxes of sensible and latent heat. Traditionally, the equations to deduce fluxes from measurements of the refractive index structure parameter at the two wavelengths have been formulated in terms of absolute humidity. Here, it is shown that formulation in terms of specific humidity has several advantages. Specific humidity satisfies the requirement for a conserved variable in similarity theory and inherently accounts for density effects misapportioned through the use of absolute humidity. The validity and interpretation of both formulations are assessed and the analogy with open-path infrared gas analyser density corrections is discussed. Original derivations using absolute humidity to represent the influence of water vapour are shown to misrepresent the latent heat flux. The errors in the flux, which depend on the Bowen ratio (larger for drier conditions), may be of the order of 10%. The sensible heat flux is shown to remain unchanged. It is also verified that use of a single scintillometer at optical wavelengths is essentially unaffected by these new formulations. Where it may not be possible to reprocess two-wavelength results, a density correction to the latent heat flux is proposed for scintillometry, which can be applied retrospectively to reduce the error.
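For context, even single-wavelength optical scintillometry carries a Bowen-ratio-dependent humidity term; the sketch below shows the classic Wesely-type correction for recovering the temperature structure parameter from an optical refractive index structure parameter. The constant A_T is indicative only, and this is not the paper's new specific-humidity formulation.

```python
def ct2_from_cn2(cn2, pressure_pa, temp_k, bowen):
    """Temperature structure parameter C_T^2 from an optical C_n^2 with
    the classic Bowen-ratio humidity correction:
    C_T^2 = C_n^2 * (T^2 / (A_T * P))^2 / (1 + 0.03/beta)^2.
    A_T ~ 7.8e-7 K Pa-1 is indicative for visible/near-infrared light."""
    A_T = 7.8e-7
    ct2 = cn2 * (temp_k**2 / (A_T * pressure_pa))**2
    # The humidity correction grows as the Bowen ratio falls (moist surfaces).
    return ct2 / (1.0 + 0.03 / bowen)**2
```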
Abstract:
Over the last decade, due to the Gravity Recovery And Climate Experiment (GRACE) mission and, more recently, the Gravity field and steady-state Ocean Circulation Explorer (GOCE) mission, our ability to measure the ocean’s mean dynamic topography (MDT) from space has improved dramatically. Here we use GOCE to measure surface current speeds in the North Atlantic and compare our results with a range of independent estimates that use drifter data to improve small scales. We find that, with filtering, GOCE can recover 70% of the Gulf Stream strength relative to the best drifter-based estimates. In the subpolar gyre the boundary currents obtained from GOCE are close to the drifter-based estimates. Crucial to this result is careful filtering, which is required to remove small-scale errors, or noise, in the computed surface. We show that our heuristic noise metric, used to determine the degree of filtering, compares well with the quadratic sum of mean sea surface and formal geoid errors obtained from the error variance–covariance matrix associated with the GOCE gravity model. At a resolution of 100 km the North Atlantic mean GOCE MDT error before filtering is 5 cm, with almost all of this coming from the GOCE gravity model.
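Schematically, the MDT is the difference between a mean sea surface and the geoid, smoothed to suppress small-scale geoid noise, and the combined error is the quadratic sum of the two input errors. A minimal sketch follows; the grid handling and the Gaussian filter choice are illustrative assumptions, not the paper's filtering scheme.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def filtered_mdt(mss, geoid, sigma_px):
    """Mean dynamic topography from a gridded mean sea surface and geoid,
    smoothed with a Gaussian of width sigma_px (in grid cells)."""
    return gaussian_filter(mss - geoid, sigma=sigma_px)

def combined_error(err_mss, err_geoid):
    """Quadratic sum of mean sea surface and formal geoid errors,
    the benchmark against which the heuristic noise metric is compared."""
    return np.hypot(err_mss, err_geoid)
```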
Abstract:
Recently, in order to accelerate drug development, trials that use adaptive seamless designs such as phase II/III clinical trials have been proposed. Phase II/III clinical trials combine traditional phases II and III into a single trial that is conducted in two stages. Using stage 1 data, an interim analysis is performed to answer phase II objectives and after collection of stage 2 data, a final confirmatory analysis is performed to answer phase III objectives. In this paper we consider phase II/III clinical trials in which, at stage 1, several experimental treatments are compared to a control and the apparently most effective experimental treatment is selected to continue to stage 2. Although these trials are attractive because the confirmatory analysis includes phase II data from stage 1, the inference methods used for trials that compare a single experimental treatment to a control and do not have an interim analysis are no longer appropriate. Several methods for analysing phase II/III clinical trials have been developed. These methods are recent and so there is little literature on extensive comparisons of their characteristics. In this paper we review and compare the various methods available for constructing confidence intervals after phase II/III clinical trials.
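The core inferential problem can be seen in a few lines of simulation: selecting the apparently most effective arm at the interim analysis biases its naive estimate upwards, which is why specialised confidence interval methods are needed. The sketch below only illustrates the bias; it is not one of the reviewed methods.

```python
import numpy as np

def selection_bias_demo(n_treatments=4, true_effect=0.0,
                        se_stage1=1.0, n_sim=100_000, seed=0):
    """Monte Carlo illustration of selection bias in select-the-best
    phase II/III designs: all arms share the same true effect, yet the
    chosen arm's stage-1 estimate is systematically too large."""
    rng = np.random.default_rng(seed)
    est = rng.normal(true_effect, se_stage1, size=(n_sim, n_treatments))
    selected = est.max(axis=1)        # estimate of the selected arm
    return selected.mean()            # exceeds true_effect

# With 4 equal arms of true effect 0, the selected arm's estimate averages
# about +1.03 standard errors, so naive confidence intervals centred on it
# under-cover the true effect.
```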
Abstract:
The first international urban land surface model comparison was designed to identify three aspects of the urban surface–atmosphere interactions: (1) the dominant physical processes, (2) the level of complexity required to model these, and (3) the parameter requirements for such a model. Offline simulations from 32 land surface schemes, with varying complexity, contributed to the comparison. Model results were analysed within a framework of physical classifications and over four stages. The results show that the following are important urban processes: (i) multiple reflections of shortwave radiation within street canyons, (ii) reduction in the amount of visible sky from within the canyon, which impacts on the net long-wave radiation, (iii) the contrast in surface temperatures between building roofs and street canyons, and (iv) evaporation from vegetation. Models that use an appropriate bulk albedo based on multiple solar reflections, represent building roof surfaces separately from street canyons, and include a representation of vegetation demonstrate more skill, but require parameter information on the albedo, the height of the buildings relative to the width of the streets (height-to-width ratio), the fraction of building roofs compared to street canyons in plan view (plan area fraction), and the fraction of the surface that is vegetated. These results, whilst based on a single site and less than 18 months of data, have implications for the future design of urban land surface models, the data that need to be measured in urban observational campaigns, and what needs to be included in initiatives for regional and global parameter databases.
Abstract:
Population modelling is increasingly recognised as a useful tool for pesticide risk assessment. For vertebrates that may ingest pesticides with their food, such as woodpigeon (Columba palumbus), population models that simulate foraging behaviour explicitly can help predicting both exposure and population-level impact. Optimal foraging theory is often assumed to explain the individual-level decisions driving distributions of individuals in the field, but it may not adequately predict spatial and temporal characteristics of woodpigeon foraging because of the woodpigeons’ excellent memory, ability to fly long distances, and distinctive flocking behaviour. Here we present an individual-based model (IBM) of the woodpigeon. We used the model to predict distributions of foraging woodpigeons that use one of six alternative foraging strategies: optimal foraging, memory-based foraging and random foraging, each with or without flocking mechanisms. We used pattern-oriented modelling to determine which of the foraging strategies is best able to reproduce observed data patterns. Data used for model evaluation were gathered during a long-term woodpigeon study conducted between 1961 and 2004 and a radiotracking study conducted in 2003 and 2004, both in the UK, and are summarised here as three complex patterns: the distributions of foraging birds between vegetation types during the year, the number of fields visited daily by individuals, and the proportion of fields revisited by them on subsequent days. The model with a memory-based foraging strategy and a flocking mechanism was the only one to reproduce these three data patterns, and the optimal foraging model produced poor matches to all of them. The random foraging strategy reproduced two of the three patterns but was not able to guarantee population persistence. We conclude that with the memory-based foraging strategy including a flocking mechanism our model is realistic enough to estimate the potential exposure of woodpigeons to pesticides. We discuss how exposure can be linked to our model, and how the model could be used for risk assessment of pesticides, for example predicting exposure and effects in heterogeneous landscapes planted seasonally with a variety of crops, while accounting for differences in land use between landscapes.
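A toy version of the winning strategy helps fix ideas: the agent prefers remembered profitable fields and otherwise joins fields already holding a flock. This is a deliberately simplified sketch under assumed data structures (field lists, occupancy and intake lookups), not the published IBM.

```python
import random

class Woodpigeon:
    """Toy agent combining memory-based foraging with a flocking rule."""

    def __init__(self, memory_size=5):
        self.memory_size = memory_size
        self.memory = []   # recently profitable fields, newest first

    def choose_field(self, fields, occupancy, intake):
        # 1. Revisit the best remembered field still available (memory).
        remembered = [f for f in self.memory if f in fields]
        if remembered:
            return max(remembered, key=lambda f: intake[f])
        # 2. Otherwise join the largest existing flock (flocking).
        flocked = [f for f in fields if occupancy[f] > 0]
        if flocked:
            return max(flocked, key=lambda f: occupancy[f])
        # 3. Otherwise explore at random.
        return random.choice(fields)

    def remember(self, field, profitable):
        """After foraging, keep profitable fields for revisits."""
        if profitable and field not in self.memory:
            self.memory = ([field] + self.memory)[:self.memory_size]
```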
Abstract:
The aerosol direct radiative effect (DRE) of African smoke was analyzed in cloud scenes over the southeast Atlantic Ocean, using Scanning Imaging Absorption Spectrometer for Atmospheric Chartography (SCIAMACHY) satellite observations and Hadley Centre Global Environmental Model version 2 (HadGEM2) climate model simulations. The observed mean DRE was about 30–35 W m−2 in August and September 2006–2009. In some years, short episodes of high aerosol DRE can be observed, due to high aerosol loadings, while in other years the loadings are lower but more prolonged. Climate models that use evenly distributed monthly averaged emission fields will not reproduce these high aerosol loadings. Furthermore, the simulated monthly mean aerosol DRE in HadGEM2 is only about 6 W m−2 in August. The difference from the SCIAMACHY mean observations can be partly explained by an underestimation of the aerosol absorption Ångström exponent in the ultraviolet. However, the resulting increase of about 20% in the simulated aerosol DRE is not enough to explain the observed discrepancy between simulations and observations.