43 results for pacs: equipment and software evaluation methods
at CentAUR: Central Archive, University of Reading - UK
Abstract:
Sea-ice concentrations in the Laptev Sea simulated by the coupled North Atlantic-Arctic Ocean-Sea-Ice Model and Finite Element Sea-Ice Ocean Model are evaluated using sea-ice concentrations from Advanced Microwave Scanning Radiometer-Earth Observing System (AMSR-E) satellite data and a polynya classification method for winter 2007/08. While developed to simulate large-scale sea-ice conditions, both models are analysed here in terms of polynya simulation. The main modification of both models in this study is the implementation of a landfast-ice mask. Simulated sea-ice fields from different model runs are compared, with emphasis placed on the impact of this prescribed landfast-ice mask. We demonstrate that sea-ice models are not able to simulate flaw polynyas realistically when used without a fast-ice description. Our investigations indicate that, without landfast ice and with coarse horizontal resolution, the models overestimate the fraction of open water in the polynya. This is not because a realistic polynya appears, but because of a larger-scale reduction of ice concentrations and smoothed ice-concentration fields. After implementation of a landfast-ice mask, the polynya location is simulated realistically, but the total open-water area is still overestimated in most cases. The study shows that the fast-ice parameterization is essential for model improvement. However, further improvements are necessary in order to progress from the simulation of large-scale features in the Arctic towards a more detailed simulation of smaller-scale features (here polynyas) in an Arctic shelf sea.
Abstract:
Using 6-benzo[1,3]dioxolefulvene (1a), a series of benzodioxole-substituted titanocenes was synthesized. The benzyl-substituted titanocene bis[(benzo[1,3]dioxole)-5-methylcyclopentadienyl] titanium(IV) dichloride (2a) was synthesized from the reaction of Super Hydride with 1a. An X-ray crystal structure was determined for 2a. The ansa-titanocene (1,2-di(cyclopentadienyl)-1,2-di-(benzo[1,3]dioxole)-ethanediyl) titanium(IV) dichloride (2b) was synthesized by reductive dimerisation of 1a with titanium dichloride. The diarylmethyl-substituted titanocene bis(di(benzo[1,3]dioxole)-5-methylcyclopentadienyl) titanium(IV) dichloride (2c) was synthesized by reacting 1a with the para-lithiated benzodioxole followed by transmetallation with titanium tetrachloride. When titanocenes 2a-c were tested against pig kidney (LLC-PK) cells, inhibitory concentrations (IC50) of 2.8 × 10^-4, 1.6 × 10^-4 and 7.6 × 10^-5 M, respectively, were observed. These values represent improved cytotoxicity against LLC-PK cells when compared with unsubstituted titanocene dichloride, but are not as impressive as values obtained for titanocenes previously synthesized using the above methods. Copyright (c) 2006 John Wiley & Sons, Ltd.
Abstract:
There are a number of challenges associated with managing knowledge and information in construction organizations delivering major capital assets. These include the ever-increasing volume of information, the loss of people to retirement or to competitors, the continuously changing nature of information, the lack of methods for eliciting useful knowledge, the development of new information technologies and changes in management and innovation practices. Existing tools and methodologies for valuing intangible assets in fields such as engineering, project management, finance and accounting do not fully address the issues associated with the valuation of information and knowledge. Information is rarely recorded in a way that allows a document to be valued, either when produced or when subsequently retrieved and re-used. In addition, there is a wealth of tacit personal knowledge which, if codified into documentary information, may prove very valuable to operators of the finished asset or to future designers. This paper addresses the problem of information overload and identifies the differences between data, information and knowledge. An exploratory study was conducted with a leading construction consultant, examining three perspectives (business, project management and document management) through structured interviews, focusing on how to value information in practical terms. Major challenges in information management are identified. A through-life Information Evaluation Methodology (IEM) is presented to reduce information overload and to make information more valuable in the future.
Abstract:
A precipitation downscaling method is presented using precipitation from a general circulation model (GCM) as predictor. The method extends a previous method from monthly to daily temporal resolution. The simplest form of the method corrects for biases in wet-day frequency and intensity. A more sophisticated variant also takes account of flow-dependent biases in the GCM. The method is flexible and simple to implement. It is proposed here as a correction of GCM output for applications where sophisticated methods are not available, or as a benchmark for the evaluation of other downscaling methods. Applied to output from reanalyses (ECMWF, NCEP) in the region of the European Alps, the method is capable of reducing large biases in the precipitation frequency distribution, even for high quantiles. The two variants exhibit similar performance, but the ideal choice of method can depend on the GCM/reanalysis, and it is recommended to test the methods in each case. Limitations of the method are found in small areas with unresolved topographic detail that influences higher-order statistics (e.g. high quantiles). When used as a benchmark for three regional climate models (RCMs), the corrected reanalysis and the RCMs perform similarly in many regions, but the added value of the latter is evident for high quantiles in some small regions.
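The simplest variant described above, matching wet-day frequency and mean wet-day intensity, can be sketched as a generic local scaling correction. This is consistent with the abstract's description but is not the authors' exact implementation; the function name and the 0.1 mm wet-day threshold are illustrative assumptions:

```python
import numpy as np

def correct_wet_day_bias(gcm_precip, obs_precip, wet_threshold=0.1):
    """Simple local scaling correction of daily GCM precipitation.

    Calibrates a threshold on the GCM series so that its wet-day frequency
    matches the observations, then rescales wet-day amounts so the mean
    wet-day intensity matches too.
    """
    obs_wet_frac = np.mean(obs_precip >= wet_threshold)
    # Choose a GCM threshold that reproduces the observed wet-day frequency.
    gcm_thresh = np.quantile(gcm_precip, 1.0 - obs_wet_frac)
    wet = gcm_precip >= gcm_thresh
    corrected = np.where(wet, gcm_precip, 0.0)
    # Rescale wet days so the mean wet-day intensity matches observations.
    obs_intensity = obs_precip[obs_precip >= wet_threshold].mean()
    gcm_intensity = corrected[wet].mean()
    corrected[wet] *= obs_intensity / gcm_intensity
    return corrected
```

The flow-dependent variant mentioned in the abstract would additionally stratify this correction by circulation regime, which is beyond this sketch.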
Abstract:
Objective To undertake a process evaluation of pharmacists' recommendations arising in the context of a complex IT-enabled pharmacist-delivered randomised controlled trial (PINCER trial) to reduce the risk of hazardous medicines management in general practices. Methods PINCER pharmacists manually recorded patients' demographics, details of interventions recommended, actions undertaken by practice staff and time taken to manage individual cases of hazardous medicines management. Data were coded and double entered into SPSS v15, and then summarised using percentages for categorical data (with 95% CI) and, as appropriate, means (SD) or medians (IQR) for continuous data. Key findings Pharmacists spent a median of 20 minutes (IQR 10, 30) reviewing medical records, recommending interventions and completing actions in each case of hazardous medicines management. Pharmacists judged 72% (95% CI 70, 74) (1463/2026) of cases of hazardous medicines management to be clinically relevant. Pharmacists recommended 2105 interventions in 74% (95% CI 73, 76) (1516/2038) of cases, and 1685 actions were taken in 61% (95% CI 59, 63) (1246/2038) of cases; 66% (95% CI 64, 68) (1383/2105) of interventions recommended by pharmacists were completed, and 5% (95% CI 4, 6) (104/2105) of recommendations were accepted by general practitioners (GPs) but not completed by the end of the pharmacists' placement; the remaining recommendations were rejected or considered not relevant by GPs. Conclusions The outcome measures were used to target pharmacist activity in general practice towards patients at risk from hazardous medicines management. Recommendations from trained PINCER pharmacists were found to be broadly acceptable to GPs and led to ameliorative action in the majority of cases. It seems likely that the approach used by the PINCER pharmacists could be employed by other practice pharmacists following appropriate training.
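The quoted summaries of the form "72% (95% CI 70, 74) (1463/2026)" are consistent with a standard normal-approximation confidence interval for a binomial proportion. A minimal sketch (the trial's exact SPSS procedure may have differed, e.g. an exact or Wilson interval):

```python
import math

def proportion_ci(successes, total, z=1.96):
    """Percentage with a normal-approximation 95% CI (z = 1.96)."""
    p = successes / total
    se = math.sqrt(p * (1 - p) / total)
    return 100 * p, 100 * (p - z * se), 100 * (p + z * se)

# e.g. 1463 of 2026 cases judged clinically relevant:
pct, low, high = proportion_ci(1463, 2026)  # ≈ 72.2 (70.3, 74.2)
```

Rounded to whole percentages this reproduces the 72% (95% CI 70, 74) quoted above.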
Abstract:
The functional food market is growing rapidly and membrane processing offers several advantages over conventional methods for separation, fractionation and recovery of bioactive components. The aim of the present study was to select a process that could be implemented easily on an industrial scale for the isolation of natural lactose-derived oligosaccharides (OS) from caprine whey, enabling the development of functional foods for clinical and infant nutrition. The most efficient process was the combination of a pre-treatment to eliminate proteins and fat, using an ultrafiltration (UF) membrane of 25 kDa molecular weight cut off (MWCO), followed by a tighter UF membrane with 1 kDa MWCO. Circa 90% of the carbohydrates recovered in the final retentate were OS. Capillary electrophoresis was used to evaluate the OS profile in this retentate. The combined membrane-processing system is thus a promising technique for obtaining natural concentrated OS from whey.
Abstract:
Purpose This research explored the use of developmental evaluation methods with community of practice programmes experiencing change or transition, to better understand how to target support resources. Design / methodology / approach The practical use of a number of developmental evaluation methods was explored in three organisations over a nine-month period using an action research design. The research was a collaborative process involving all the company participants and the academic (the author), with the intention of developing the practices of the participants as well as contributing to scholarship. Findings The developmental evaluation activities achieved the objectives of the knowledge managers concerned: they developed a better understanding of the contribution and performance of their communities of practice, allowing support resources to be better targeted. Three methods (fundamental evaluative thinking, the actual-ideal comparative method, and a focus on strengths and assets) were found to be useful. Cross-case analysis led to the proposition that developmental evaluation methods act as a structural mechanism that develops the discourse of the organisation in ways that enhance the climate for learning, potentially helping to develop a learning organization. Practical implications Developmental evaluation methods add to the options available to evaluate community of practice programmes, supplementing the commonly used activity indicators and impact story methods. Originality / value Developmental evaluation methods are often used in social change initiatives, informing public policy and funding decisions. The contribution here is to extend their use to organisational community of practice programmes.
Abstract:
The representation of the diurnal cycle in the Hadley Centre climate model is evaluated using simulations of the infrared radiances observed by Meteosat 7. In both the window and water vapour channels, the standard version of the model with 19 levels produces a good simulation of the geographical distributions of the mean radiances and of the amplitude of the diurnal cycle. Increasing the vertical resolution to 30 levels leads to further improvements in the mean fields. The timing of the maximum and minimum radiances reveals significant model errors, however, which are sensitive to the frequency with which the radiation scheme is called. In most regions, these errors are consistent with well documented errors in the timing of convective precipitation, which peaks before noon in the model, in contrast to the observed peak in the late afternoon or evening. When the radiation scheme is called every model time step (half an hour), as opposed to every three hours in the standard version, the timing of the minimum radiance is improved for convective regions over central Africa, due to the creation of upper-level layer-cloud by detrainment from the convection scheme, which persists well after the convection itself has dissipated. However, this produces a decoupling between the timing of the diurnal cycles of precipitation and window channel radiance. The possibility is raised that a similar decoupling may occur in reality and the implications of this for the retrieval of the diurnal cycle of precipitation from infrared radiances are discussed.
Abstract:
This article describes the development and evaluation of the U.K.'s new High-Resolution Global Environmental Model (HiGEM), which is based on the latest climate configuration of the Met Office Unified Model, known as the Hadley Centre Global Environmental Model, version 1 (HadGEM1). In HiGEM, the horizontal resolution has been increased to 0.83° latitude × 1.25° longitude for the atmosphere, and 1/3° × 1/3° globally for the ocean. Multidecadal integrations of HiGEM, and the lower-resolution HadGEM, are used to explore the impact of resolution on the fidelity of climate simulations. Generally, SST errors are reduced in HiGEM. Cold SST errors associated with the path of the North Atlantic Drift improve, and warm SST errors are reduced in upwelling stratocumulus regions, where the simulation of low-level cloud is better at higher resolution. The ocean model in HiGEM allows ocean eddies to be partially resolved, which dramatically improves the representation of sea surface height variability. In the Southern Ocean, most of the heat transport in HiGEM is achieved by resolved eddy motions, which replace the parameterized eddy heat transport of the lower-resolution model. HiGEM is also able to simulate more realistically small-scale features in the wind stress curl around islands and oceanic SST fronts, which may have implications for oceanic upwelling and ocean biology. Higher resolution in both the atmosphere and the ocean allows coupling to occur on small spatial scales. In particular, the small-scale interaction recently seen in satellite imagery between the atmosphere and tropical instability waves in the tropical Pacific Ocean is realistically captured in HiGEM. Tropical instability waves play a role in improving the simulation of the mean state of the tropical Pacific, which has important implications for climate variability. In particular, all aspects of the simulation of ENSO (spatial patterns, the time scales at which ENSO occurs, and global teleconnections) are much improved in HiGEM.
Abstract:
This paper analyses historic records of agricultural land use and management for England and Wales for 1931 and 1991, and uses export coefficient modelling to hindcast the impact of these practices on the rates of diffuse nitrogen (N) and phosphorus (P) export to water bodies for each of the major geo-climatic regions of England and Wales. Key trends indicate the importance of animal agriculture as a contributor to the total diffuse agricultural nutrient loading on waters, and the need to bring these sources under control if conditions suitable for sustaining 'Good Ecological Status' under the Water Framework Directive are to be generated. The analysis highlights the importance of measuring changes in nutrient loading in relation to the catchment-specific baseline state for different water bodies. The approach is also used to forecast the likely impact of broad regional-scale scenarios on nutrient export to waters. It highlights the need to take sensitive land out of production, to introduce ceilings on fertilizer use and stocking densities, and to impose controls on agricultural practice in higher-risk areas where intensive agriculture is combined with a low intrinsic nutrient retention capacity, although the uncertainties associated with modelling at this scale should be taken into account when interpreting model output. The paper advocates a two-tiered approach to nutrient management, combining broad regional policies with targeted management of high-risk areas at the catchment and farm scale.
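Export coefficient modelling, as used here, predicts the total diffuse load on a water body as a sum over sources of an export coefficient multiplied by the extent of that source. A minimal sketch with purely illustrative coefficients and extents (not values from the paper):

```python
def nutrient_export(sources):
    """Total diffuse load L = sum_i E_i * A_i: each source i (a land-use
    area, or a livestock count) exports nutrient at a fixed coefficient
    E_i per unit per year."""
    return sum(E * A for E, A in sources)

# Hypothetical catchment (coefficients and extents are illustrative only):
load_n = nutrient_export([
    (20.0, 150.0),  # arable: 20 kg N/ha/yr over 150 ha
    (5.0, 300.0),   # grassland: 5 kg N/ha/yr over 300 ha
    (10.0, 80.0),   # cattle: 10 kg N/head/yr over 80 head
])
# load_n = 3000 + 1500 + 800 = 5300 kg N/yr
```

Hindcasting then amounts to evaluating this sum with the land use and stocking figures recorded for each historic year, and scenario forecasting with projected figures.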
Abstract:
The Iowa gambling task (IGT) is one of the most influential behavioral paradigms in reward-related decision making and has been, most notably, associated with ventromedial prefrontal cortex function. However, performance in the IGT relies on a complex set of cognitive subprocesses, in particular integrating information about the outcome of choices into a continuously updated decision strategy under ambiguous conditions. The complexity of the task has made it difficult for neuroimaging studies to disentangle the underlying neurocognitive processes. In this study, we used functional magnetic resonance imaging in combination with a novel adaptation of the task, which allowed us to examine separately activation associated with the moment of decision or the evaluation of decision outcomes. Importantly, using whole-brain regression analyses with individual performance, in combination with the choice/outcome history of individual subjects, we aimed to identify the neural overlap between areas that are involved in the evaluation of outcomes and in the progressive discrimination of the relative value of available choice options, thus mapping the two fundamental cognitive processes that lead to adaptive decision making. We show that activation in right ventromedial and dorsolateral prefrontal cortex was predictive of adaptive performance, in both discriminating disadvantageous from advantageous decisions and confirming negative decision outcomes. We propose that these two prefrontal areas mediate shifting away from disadvantageous choices through their sensitivity to accumulating negative outcomes. These findings provide functional evidence of the underlying processes by which these prefrontal subregions drive adaptive choice in the task, namely through contingency-sensitive outcome evaluation.
Abstract:
In this article we use the no-response test idea, introduced in Luke and Potthast (2003) and Potthast (preprint) for the inverse obstacle problem, to identify the interface of discontinuity of the coefficient γ of the operator ∇ · γ(x)∇ + c(x), with piecewise regular γ and bounded function c(x). We use infinitely many Cauchy data as measurements and give a reconstructive method to localize the interface. We base this multiwave version of the no-response test on two different proofs. The first contains a pointwise estimate as used by the singular sources method. The second is built on an energy (or integral) estimate, which is the basis of the probe method. As a consequence, the probe and singular sources methods are equivalent with regard to their convergence, and the no-response test can be seen as a unified framework for both. As a further contribution, we provide a formula to reconstruct the values of the jump of γ(x), x ∈ ∂D, at the boundary. A second consequence of this formula is that the blow-up rate of the indicator functions of the probe and singular sources methods at the interface is given by the order of the singularity of the fundamental solution.
Abstract:
In this paper we consider the scattering of a plane acoustic or electromagnetic wave by a one-dimensional, periodic rough surface. We restrict the discussion to the case where the boundary is sound-soft in the acoustic case, or perfectly reflecting with TE polarization in the EM case, so that the total field vanishes on the boundary. We propose a uniquely solvable first-kind integral equation formulation of the problem, which amounts to a requirement that the normal derivative of Green's representation formula for the total field vanish on a horizontal line below the scattering surface. We then discuss the numerical solution by Galerkin's method of this (ill-posed) integral equation. We point out that, with two particular choices of the trial and test spaces, we recover the so-called SC (spectral-coordinate) and SS (spectral-spectral) numerical schemes of DeSanto et al. (Waves Random Media, 8, 315-414, 1998). We next propose a new Galerkin scheme, a modification of the SS method that we term the SS* method, which is an instance of the well-known dual least squares Galerkin method. We show that the SS* method is always well-defined and is optimally convergent as the size of the approximation space increases. Moreover, we make a connection with the classical least squares method, in which the coefficients in the Rayleigh expansion of the solution are determined by enforcing the boundary condition in a least squares sense, pointing out that the linear system to be solved in the SS* method is identical to that in the least squares method. Using this connection we show that (reflecting the ill-posed nature of the integral equation solved) the condition number of the linear system in the SS* and least squares methods approaches infinity as the approximation space increases in size. We also provide theoretical bounds on the condition number and on the errors induced in the numerical solution as a result of ill-conditioning.
Numerical results confirm the convergence of the SS* method and illustrate the ill-conditioning that arises.
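The ill-conditioning described above can be illustrated numerically: enforcing the sound-soft condition on a sinusoidal surface with a truncated Rayleigh expansion, the condition number of the collocation matrix grows rapidly with the truncation order once evanescent modes enter. This is a generic least-squares illustration, not the SS* scheme itself, and all parameters are arbitrary choices:

```python
import numpy as np

# Illustrative parameters (not from the paper).
k = 5.0                        # wavenumber
theta = np.pi / 6              # angle of incidence
alpha = k * np.sin(theta)      # quasi-periodicity parameter

def surface(x):
    return 0.3 * np.cos(x)     # 2*pi-periodic surface profile

def collocation_matrix(N, m=400):
    """Matrix enforcing the boundary condition for Rayleigh modes
    n = -N..N, sampled at m points on the surface y = surface(x)."""
    x = np.linspace(0.0, 2.0 * np.pi, m, endpoint=False)
    n = np.arange(-N, N + 1)
    alpha_n = alpha + n
    beta_n = np.sqrt(k**2 - alpha_n**2 + 0j)  # complex for evanescent modes
    # Column j: upgoing mode exp(i(alpha_n x + beta_n y)) on the surface.
    return np.exp(1j * (np.outer(x, alpha_n) + np.outer(surface(x), beta_n)))

conds = [np.linalg.cond(collocation_matrix(N)) for N in (2, 6, 10)]
# The condition number grows rapidly once evanescent modes (|alpha_n| > k)
# enter the expansion, mirroring the blow-up discussed in the abstract.
```

The exponentially growing and decaying evanescent columns are what drive the condition number towards infinity as the approximation space grows.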
Abstract:
A total of 133 samples (53 fermented unprocessed, 19 fermented processed, 62 urea-treated processed) of whole crop wheat (WCW) and 16 samples (five fermented unprocessed, six fermented processed, five urea-treated processed) of whole crop barley (WCB) were collected from commercial farms over two consecutive years (2003/2004 and 2004/2005). Disruption of the grains to increase starch availability was achieved at the point of harvest by processors fitted to the forage harvesters. All samples were subjected to laboratory analysis, whilst 50 of the samples (24 from Year 1, 26 from Year 2; all WCW except four WCB in Year 2) were subjected to in vivo digestibility and energy value measurements using mature wether sheep. Urea-treated WCW had higher (P<0.05) pH, dry matter (DM) and crude protein contents, and lower concentrations of fermentation products, than fermented WCW. Starch was generally lower in fermented, unprocessed WCW, and no effect of crop maturity at harvest (as indicated by DM content) on starch concentration was seen. Urea-treated WCW had higher (P<0.05) in vivo digestible organic matter content in the DM (DOMD) in Year 1, although this was not recorded in Year 2. There was a close relationship between the digestibility values of organic matter and gross energy, thus aiding the use of DOMD to predict metabolisable energy (ME) content. A wide range of ME values was observed (WCW 8.7-11.8 MJ/kg DM; WCB 7.9-11.2 MJ/kg DM), with the overall ME/DOMD ratio (ME = 0.0156 DOMD) in line with studies of other forages. There was no evidence that a separate ME/DOMD relationship was needed for WCB, which is helpful for practical application. This ratio and other parameters were affected by year of harvest (P<0.05), highlighting the influence of environmental and other undefined factors. The variability in the composition and nutritive value of WCW and WCB highlights the need for reliable and accurate evaluation methods to assess the value of these forages before they are included in diets for dairy cows. (C) 2008 Elsevier B.V. All rights reserved.
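The reported ME/DOMD ratio gives a one-line predictor of metabolisable energy from digestible organic matter content. A minimal sketch, assuming DOMD is expressed in g/kg DM (a unit assumption consistent with the reported ME range):

```python
def metabolisable_energy(domd, ratio=0.0156):
    """Predict ME (MJ/kg DM) from DOMD using the overall ratio reported
    in the study, ME = 0.0156 * DOMD. DOMD in g/kg DM is an assumption
    about units, not stated in the abstract."""
    return ratio * domd

me = metabolisable_energy(700.0)  # 10.92 MJ/kg DM, within the WCW range
```

A DOMD of 700 g/kg DM thus predicts an ME of about 10.9 MJ/kg DM, inside the 8.7-11.8 MJ/kg DM range observed for WCW.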