933 results for Visualization Of Interval Methods
Abstract:
The objective of this study was to investigate the effect of drying conditions on the phenolic constituents and colour of extracts of organically grown white willow and meadowsweet for incorporation into a functional beverage with potential anti-inflammatory properties. The herbs were freeze-dried, air-dried, or oven- or tray-dried at 30 or 70 °C. The drying kinetics of the herbs were first determined. Both drying temperature and drying method had a significant effect (p ≤ 0.05) on the drying rate: tray-dried samples dried faster than oven-dried ones. Results show that for meadowsweet and willow, freeze-drying and oven or tray drying at 30 °C had no significant effect on the phenolic constituents (e.g. total phenols, salicylates, quercetin) or the colour of the extracts in comparison with traditional air-drying. Although increasing the drying temperature to 70 °C increased the drying rate of both herbs, it also led to the loss of some phenolic compounds, and the extracts from both herbs dried at 70 °C were significantly (p ≤ 0.05) redder than those produced by the other drying methods. Therefore, tray drying these herbs at low temperatures may reduce drying time without significantly affecting the phenolic content and colour of the extracts.
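The drying-rate comparison above rests on fitted drying kinetics. As a hedged illustration only (the abstract does not state which kinetic model was used), a simple thin-layer Newton/Lewis model, MR = exp(-k·t), can be fitted to moisture-ratio data to compare rate constants between drying methods; the moisture-ratio values below are invented for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

# Thin-layer (Newton/Lewis) drying model: MR(t) = exp(-k * t),
# where MR is the moisture ratio and k the drying rate constant (1/min).
def newton_model(t, k):
    return np.exp(-k * t)

# Hypothetical moisture-ratio observations (time in minutes) -- illustrative only.
t = np.array([0.0, 30.0, 60.0, 120.0, 180.0, 240.0, 300.0])
mr_tray = np.array([1.00, 0.72, 0.52, 0.27, 0.14, 0.07, 0.04])
mr_oven = np.array([1.00, 0.80, 0.64, 0.41, 0.26, 0.17, 0.11])

(k_tray,), _ = curve_fit(newton_model, t, mr_tray, p0=[0.01])
(k_oven,), _ = curve_fit(newton_model, t, mr_oven, p0=[0.01])
print(f"drying rate constant, tray-dried: {k_tray:.4f} 1/min")
print(f"drying rate constant, oven-dried: {k_oven:.4f} 1/min")
```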
Abstract:
This paper presents practical approaches to the problem of sample size re-estimation in clinical trials with survival data when proportional hazards can be assumed. When data on a full range of survival experiences across the recruited patients are readily available at the time of the review, it is shown that, as expected, performing a blinded re-estimation procedure is straightforward and can help to maintain the trial's pre-specified error rates. Two alternative methods for dealing with the situation where only limited survival experience is available at the time of the sample size review are then presented and compared. In this instance, extrapolation is required in order to undertake the sample size re-estimation. Worked examples, together with results from a simulation study, are described. It is concluded that, as in the standard case, use of either extrapolation approach successfully protects the trial error rates. Copyright © 2012 John Wiley & Sons, Ltd.
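For orientation, the number of events needed under proportional hazards is often approximated with Schoenfeld's formula, and a blinded review can then rescale the recruitment target using the pooled (treatment-blind) event rate. The sketch below illustrates that general logic with assumed numbers; it is not the specific re-estimation or extrapolation procedure developed in the paper.

```python
import math
from scipy.stats import norm

def required_events(hazard_ratio, alpha=0.05, power=0.90, alloc=0.5):
    """Schoenfeld's approximation to the number of events needed to detect
    a given hazard ratio in a two-arm trial (two-sided alpha; 1:1 allocation
    when alloc = 0.5)."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return (z_a + z_b) ** 2 / (alloc * (1 - alloc) * math.log(hazard_ratio) ** 2)

# Design stage (assumed): target HR = 0.75, anticipated event probability 0.40.
events_needed = required_events(0.75)
n_planned = events_needed / 0.40

# Blinded review (assumed): pooled data suggest only 30% of patients will
# have an event by the planned analysis time, so the sample size is rescaled.
pooled_event_prob = 0.30
n_revised = events_needed / pooled_event_prob

print(f"events needed: {events_needed:.0f}")
print(f"planned n:     {n_planned:.0f}")
print(f"revised n:     {n_revised:.0f}")
```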
Abstract:
A nitric oxide synthase (NOS)-like activity has been demonstrated in human red blood cells (RBCs), but doubts about its functional significance, isoform identity and disease relevance remain. Using flow cytometry in combination with the NO-imaging probe DAF-FM, we find that all blood cells form NO intracellularly, with a rank order of monocytes > neutrophils > lymphocytes > RBCs > platelets. The observation of NO-related fluorescence within RBCs was unexpected given the abundance of the NO scavenger oxyhemoglobin. Constitutive normoxic NO formation was abolished by NOS inhibition and intracellular NO scavenging, confirmed by laser-scanning microscopy and unequivocally validated by detection of the DAF-FM reaction product with NO using HPLC and LC-MS/MS. Employing immunoprecipitation, ESI-MS/MS-based peptide sequencing and an enzymatic assay, we further demonstrate that human RBCs contain an endothelial NOS (eNOS) that converts L-[³H]arginine to L-[³H]citrulline in a Ca²⁺/calmodulin-dependent fashion. Moreover, in patients with coronary artery disease, red cell eNOS expression and activity are both lower than in age-matched healthy individuals and correlate with the degree of endothelial dysfunction. Thus, human RBCs constitutively produce NO under normoxic conditions via an active eNOS isoform, the activity of which is compromised in patients with coronary artery disease.
Abstract:
We describe ncWMS, an implementation of the Open Geospatial Consortium’s Web Map Service (WMS) specification for multidimensional gridded environmental data. ncWMS can read data in a large number of common scientific data formats – notably the NetCDF format with the Climate and Forecast conventions – then efficiently generate map imagery in thousands of different coordinate reference systems. It is designed to require minimal configuration from the system administrator and, when used in conjunction with a suitable client tool, provides end users with an interactive means for visualizing data without the need to download large files or interpret complex metadata. It is also used as a “bridging” tool providing interoperability between the environmental science community and users of geographic information systems. ncWMS implements a number of extensions to the WMS standard in order to fulfil some common scientific requirements, including the ability to generate plots representing timeseries and vertical sections. We discuss these extensions and their impact upon present and future interoperability. We discuss the conceptual mapping between the WMS data model and the data models used by gridded data formats, highlighting areas in which the mapping is incomplete or ambiguous. We discuss the architecture of the system and particular technical innovations of note, including the algorithms used for fast data reading and image generation. ncWMS has been widely adopted within the environmental data community and we discuss some of the ways in which the software is integrated within data infrastructures and portals.
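To illustrate how a client typically requests imagery from a WMS server such as ncWMS, the snippet below issues a standard WMS 1.3.0 GetMap request with the TIME and ELEVATION dimensions; the server URL, layer name and dimension values are placeholders, not a real deployment.

```python
import requests

# Hypothetical ncWMS endpoint and layer name -- placeholders only.
base_url = "https://example.org/ncWMS/wms"
params = {
    "SERVICE": "WMS",
    "VERSION": "1.3.0",
    "REQUEST": "GetMap",
    "LAYERS": "ocean/sea_water_temperature",  # dataset/variable as configured on the server
    "STYLES": "",                             # server default style
    "CRS": "EPSG:4326",                       # one of the many supported coordinate reference systems
    "BBOX": "-90,-180,90,180",                # WMS 1.3.0 uses lat,lon axis order for EPSG:4326
    "WIDTH": 1024,
    "HEIGHT": 512,
    "FORMAT": "image/png",
    "TIME": "2012-06-01T00:00:00Z",           # optional TIME dimension
    "ELEVATION": "0",                         # optional vertical dimension
}

response = requests.get(base_url, params=params, timeout=30)
response.raise_for_status()
with open("map.png", "wb") as f:
    f.write(response.content)
```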
Abstract:
The objective of this book is to present the quantitative techniques that are commonly employed in empirical finance research, together with real-world, state-of-the-art research examples. Each chapter is written by international experts in their fields. The unique approach is to describe a question or issue in finance and then to demonstrate the methodologies that may be used to solve it. All of the techniques described are used to address real problems rather than being presented for their own sake, and the areas of application have been carefully selected so that a broad range of methodological approaches can be covered. This book is aimed primarily at doctoral researchers and academics who are engaged in conducting original empirical research in finance. In addition, the book will be useful to researchers in the financial markets and to advanced Masters-level students who are writing dissertations.
Abstract:
Crystallization must occur in honey in order to produce set or creamed honey; however, the process must occur in a controlled manner in order to obtain an acceptable product. As a consequence, reliable methods are needed to measure the crystal content of honey (φ, expressed as kg crystal per kg honey), which can also be implemented with relative ease in industrial production facilities. Unfortunately, suitable methods do not currently exist. This article reports on the development of 2 independent offline methods to measure the crystal content in honey, based on differential scanning calorimetry and high-performance liquid chromatography. The 2 methods gave highly consistent results on the basis of a paired t-test involving 143 experimental points (P > 0.05, r² = 0.99). The crystal content also correlated with the relative viscosity, defined as the ratio of the viscosity of crystal-containing honey to that of the same honey when all crystals are dissolved, giving the following correlation: μr = 1 + 1398.8φ^2.318. This correlation can be used to estimate the crystal content of honey in industrial production facilities. The crystal growth rate at 14 °C (the normal crystallization temperature used in practice) was linear, and the growth rate also increased with the total glucose content of the honey.
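Because the reported correlation is monotonic in φ, it can be inverted to estimate the crystal content from a measured relative viscosity, which is how the correlation would be applied in a production setting. A minimal sketch of that calculation follows; the viscosity ratios are invented for illustration.

```python
# Invert the reported correlation  mu_r = 1 + 1398.8 * phi**2.318
# to estimate the crystal content phi (kg crystal / kg honey) from the
# measured relative viscosity mu_r (viscosity of the crystal-containing
# honey divided by the viscosity of the same honey with all crystals dissolved).

def crystal_content(mu_r, a=1398.8, b=2.318):
    if mu_r <= 1.0:
        return 0.0  # no measurable crystal phase
    return ((mu_r - 1.0) / a) ** (1.0 / b)

# Illustrative relative-viscosity measurements (not from the study):
for mu_r in (5.0, 50.0, 200.0):
    print(f"mu_r = {mu_r:6.1f}  ->  phi = {crystal_content(mu_r):.3f}")
```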
Abstract:
Three methods for intercalibrating humidity sounding channels are compared to assess their merits and demerits. The methods use the following: (1) natural targets (Antarctica and tropical oceans), (2) zonal average brightness temperatures, and (3) simultaneous nadir overpasses (SNOs). Advanced Microwave Sounding Unit-B instruments onboard the polar-orbiting NOAA 15 and NOAA 16 satellites are used as examples. Antarctica is shown to be useful for identifying some of the instrument problems but less promising for intercalibrating humidity sounders owing to the large diurnal variations there. Because of the smaller diurnal cycles over tropical oceans, these are found to be a good target for estimating intersatellite biases. Estimated biases are more resistant to diurnal differences when data from ascending and descending passes are combined. Biases estimated from zonal-average brightness temperatures show a large seasonal and latitudinal dependence, which could have resulted from diurnal cycle aliasing and scene-radiance dependence of the biases; this method may not be the best for channels with significant surface contributions. We have also tested the impact of clouds on the estimated biases and found that it is not significant, at least for tropical ocean estimates. Biases estimated from SNOs are the least influenced by diurnal cycle aliasing and cloud impacts; however, SNOs cover only a relatively small part of the dynamic range of observed brightness temperatures.
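The SNO approach in (3) reduces to collocating near-simultaneous nadir observations from the two instruments and averaging the brightness-temperature differences. A minimal sketch of that matching step follows; the time/distance thresholds and input arrays are assumptions, not the authors' exact collocation criteria.

```python
import numpy as np

def sno_bias(time_a, lat_a, lon_a, tb_a,
             time_b, lat_b, lon_b, tb_b,
             max_dt_s=60.0, max_dist_deg=0.5):
    """Estimate the inter-satellite brightness-temperature bias from
    simultaneous nadir overpasses: pair each nadir observation of
    satellite A with the nearest-in-time observation of satellite B,
    keep pairs that are close enough in time and space, and average
    Tb(A) - Tb(B)."""
    diffs = []
    for i in range(len(time_a)):
        j = int(np.argmin(np.abs(time_b - time_a[i])))      # nearest in time
        dt = abs(time_b[j] - time_a[i])
        dist = np.hypot(lat_b[j] - lat_a[i], lon_b[j] - lon_a[i])
        if dt <= max_dt_s and dist <= max_dist_deg:
            diffs.append(tb_a[i] - tb_b[j])
    diffs = np.asarray(diffs)
    return diffs.mean(), diffs.std(), diffs.size

# Usage: bias, spread, n_pairs = sno_bias(t1, lat1, lon1, tb1, t2, lat2, lon2, tb2)
```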
Abstract:
With the growing prevalence of smartphones and 4G LTE networks, the demand for faster, better and cheaper mobile services anytime and anywhere keeps rising. The Dynamic Network Optimization (DNO) concept emerged as a solution that optimally and continuously tunes network settings in response to varying network conditions and subscriber needs. Yet the realization of DNO is still in its infancy, largely hindered by the bottleneck of lengthy optimization runtimes. This paper presents the design and prototype of a novel cloud-based parallel solution that further enhances the scalability of our prior work on parallel solutions for accelerating network optimization algorithms. The solution aims to deliver the high performance required by DNO, initially on a sub-hourly basis. The paper then illustrates a design and a full cycle of a DNO system, and a set of potential solutions for large-network and real-time DNO is also proposed. Overall, this work represents a breakthrough towards the realization of DNO.
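The parallel acceleration the paper builds on comes down to farming out many independent evaluations of candidate network settings to separate workers. The sketch below shows that embarrassingly parallel pattern with a toy objective; it is an assumption-laden illustration, not the authors' cloud architecture or optimization algorithm.

```python
from concurrent.futures import ProcessPoolExecutor
from itertools import product

# Toy per-cell objective standing in for an expensive network-quality
# simulation; in a real DNO system this is the costly step worth parallelizing.
def evaluate(candidate):
    cell_id, tilt, power = candidate
    score = -(tilt - 4.0) ** 2 - (power - 20.0) ** 2   # dummy quality score
    return cell_id, (tilt, power), score

def optimize_in_parallel(cells, tilts, powers, workers=8):
    """Evaluate every candidate (cell, tilt, power) setting in parallel and
    keep the best-scoring setting per cell."""
    candidates = list(product(cells, tilts, powers))
    best = {}
    with ProcessPoolExecutor(max_workers=workers) as pool:
        for cell_id, setting, score in pool.map(evaluate, candidates):
            if cell_id not in best or score > best[cell_id][1]:
                best[cell_id] = (setting, score)
    return best

if __name__ == "__main__":
    print(optimize_in_parallel(range(3), tilts=[0, 2, 4, 6], powers=[18, 20, 22]))
```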
Abstract:
Of the many sources of urban greenhouse gas (GHG) emissions, solid waste is the only one for which management decisions are made primarily by municipal governments themselves, and it is hence often the largest component of cities' corporate inventories. It is essential that decision-makers select an appropriate quantification methodology and have an appreciation of its strengths and shortcomings. This work compares four waste emissions quantification methods: the Intergovernmental Panel on Climate Change (IPCC) 1996 guidelines, the IPCC 2006 guidelines, the U.S. Environmental Protection Agency (EPA) Waste Reduction Model (WARM), and the Federation of Canadian Municipalities-Partners for Climate Protection (FCM-PCP) quantification tool. Waste disposal data for the Greater Toronto Area (GTA) in 2005 are used for all methodologies; treatment options (including landfill, incineration, compost, and anaerobic digestion) are examined where available in the methodologies. Landfill was shown to be the greatest source of GHG emissions, contributing more than three-quarters of total emissions associated with waste management. Results from the different landfill gas (LFG) quantification approaches ranged from an emissions source of 557 kt carbon dioxide equivalents (CO2e) (FCM-PCP) to a carbon sink of −53 kt CO2e (EPA WARM). Similar values were obtained with the two IPCC approaches. The IPCC 2006 method was found to be more appropriate for inventorying applications because it uses a waste-in-place (WIP) approach rather than a methane commitment (MC) approach, despite the perceived onerous data requirements of WIP. MC approaches were found to be useful from a planning standpoint; however, the uncertainty associated with their projections of future parameter values limits their applicability for GHG inventorying. MC and WIP methods provided similar results in this case study; however, this agreement is case specific, owing to similar assumptions about present and future landfill parameters and to the relatively consistent quantities of waste deposited annually in recent years.
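The waste-in-place versus methane-commitment distinction can be seen in a small first-order decay calculation: WIP attributes to a given year only the methane generated that year by all waste already in place, whereas MC assigns a deposit's entire lifetime methane to the year of disposal. The sketch below uses simplified, assumed parameters and is not a faithful reimplementation of any of the four tools compared.

```python
import math

# Simplified first-order decay (waste-in-place) vs methane-commitment views.
# k and L0 are illustrative assumptions, not the defaults of any specific tool.
k = 0.06            # decay constant (1/yr)
L0 = 0.1            # lifetime CH4 yield (t CH4 per t waste landfilled)

annual_waste = {2000 + y: 1_000_000 for y in range(6)}   # t/yr, hypothetical

def fod_emissions(year):
    """CH4 generated in `year` by all waste already in place (WIP view)."""
    total = 0.0
    for dep_year, mass in annual_waste.items():
        age = year - dep_year
        if age >= 0:
            # fraction of the lifetime yield released during this one-year interval
            frac = math.exp(-k * age) - math.exp(-k * (age + 1))
            total += mass * L0 * frac
    return total

def mc_emissions(year):
    """Entire lifetime CH4 of the waste landfilled in `year` (MC view)."""
    return annual_waste.get(year, 0.0) * L0

print(f"2005 emissions, WIP/FOD view: {fod_emissions(2005):,.0f} t CH4")
print(f"2005 emissions, MC view:      {mc_emissions(2005):,.0f} t CH4")
```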
Abstract:
Capturing the sensory perception and preferences of older adults, whether healthy or with particular disease states, poses major methodological challenges for the sensory community. Although currently a vastly under-researched area, it is a vital one, as alterations in sensory perception can affect daily dietary food choices, intake, health and wellbeing. Tailored sensory methods are needed that take into account the challenges of working with such populations, including poor access leading to low patient numbers (study power), cognitive abilities, use of medications, clinical treatments and context (hospitals and care homes). The objective of this paper was to review current analytical and affective sensory methodologies used with different cohorts of healthy and frail older adults, with a focus on food preference and liking. We drew particular attention to studies concerning general ageing as well as to those considering age-related diseases with an emphasis on malnutrition and weight loss. The PubMed and Web of Science databases were searched up to 2014 for relevant articles in English. From this search, 75 papers concerning sensory acuity, 41 regarding perceived intensity and 73 relating to hedonic measures were reviewed. Simpler testing methods, such as directional forced-choice tests and paired preference tests, need to be further explored to determine whether they lead to more reliable results and better inter-cohort comparisons. Finally, sensory quality and the related quality of life of older adults suffering from dementia must be included, and not ignored, in our future actions.
Abstract:
This study analyzes the placement, services, and teaching methods of students who are deaf and have additional disabilities. Through this analysis, these students are compared to students with multiple disabilities that do not include deafness.
Abstract:
Purpose: The aim of this study was to evaluate the effect of three denture hygiene methods against different microbial biofilms formed on acrylic resin specimens. Materials and methods: The set (sterile stainless steel basket and specimens) was contaminated (37 °C for 48 hours) with a microbial inoculum of 10⁶ colony-forming units (CFU)/ml (standard strains: Staphylococcus aureus, Streptococcus mutans, Escherichia coli, Candida albicans, Pseudomonas aeruginosa, and Enterococcus faecalis; field strains: S. mutans, C. albicans, C. glabrata, and C. tropicalis). After inoculation, specimens were cleansed by the following methods: (1) chemical: immersion in an alkaline peroxide solution (Bonyplus tablets) for 5 minutes; (2) mechanical: brushing with a dentifrice for removable prostheses (Dentu Creme) for 20 seconds; and (3) a combination of the chemical and mechanical methods. Specimens were then applied to a Petri plate with the appropriate culture medium for 10 minutes. Afterward, the specimens were removed and the plates incubated at 37 °C for 48 hours. Results: The chemical, mechanical, and combination methods showed no significant difference in the reduction of CFU for S. aureus, S. mutans (ATCC and field strains), and P. aeruginosa. The mechanical and combination methods were similar and more effective than the chemical method for E. faecalis, C. albicans (ATCC and field strains), and C. glabrata. The combination method was better than the chemical method for E. coli and C. tropicalis, with the mechanical method showing intermediate results. Conclusion: The three denture hygiene methods showed different effects depending on the type of microbial biofilm formed on the acrylic resin specimens.
Abstract:
Amaranth has attracted a great deal of interest in recent decades due to its valuable nutritional, functional, and agricultural characteristics. Amaranth seeds can be cooked, popped, roasted, flaked, or extruded for consumption. This study compared the in vitro starch digestibility of processed amaranth seeds to that of white bread. Raw seeds yielded a rapidly digestible starch (RDS) content of 30.7% db and a predicted glycemic index (pGI) of 87.2, the lowest among the studied products. Cooked, extruded, and popped amaranth seeds had starch digestibility similar to that of white bread (pGI of 92.4, 91.2, and 101.3, respectively), while flaked and roasted seeds generated a slightly higher glycemic response (pGI of 106.0 and 105.8, respectively). Cooking and extrusion did not alter the RDS contents of the seeds. No significant differences were observed among the popped, flaked, and roasted RDS contents (38.0%, 46.3%, and 42.9%, respectively), which were all lower than the RDS content of bread (51.1%). Amaranth seed is a high-glycemic food most likely because of its small starch granule size, low resistant starch content (< 1%), and tendency to completely lose its crystalline and granular starch structure during these heat treatments.
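Predicted glycemic index values of this kind are usually obtained from in vitro hydrolysis curves: the hydrolysis index (HI) is the area under the sample's curve relative to that of white bread, and a widely used regression (Goñi et al., 1997) converts HI to pGI. The sketch below illustrates that calculation with invented hydrolysis data; it is not necessarily the exact protocol used in this study.

```python
import numpy as np

def predicted_gi(t_min, hydrolysis_sample, hydrolysis_bread):
    """Hydrolysis index HI = 100 * AUC(sample) / AUC(white bread);
    predicted glycemic index from the commonly used regression
    pGI = 39.71 + 0.549 * HI (Goni et al., 1997)."""
    auc_sample = np.trapz(hydrolysis_sample, t_min)
    auc_bread = np.trapz(hydrolysis_bread, t_min)
    hi = 100.0 * auc_sample / auc_bread
    return 39.71 + 0.549 * hi

# Invented percentages of starch hydrolyzed over 0-180 min (illustrative only).
t = np.array([0, 30, 60, 90, 120, 150, 180])
bread = np.array([0.0, 45.0, 62.0, 70.0, 74.0, 76.0, 77.0])
popped = np.array([0.0, 48.0, 66.0, 74.0, 78.0, 80.0, 81.0])
print(f"pGI (popped amaranth, illustrative) = {predicted_gi(t, popped, bread):.1f}")
```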