933 results for Visualization Of Interval Methods
Abstract:
Using generalized collocation techniques based on fitting functions that are trigonometric (rather than algebraic as in classical integrators), we develop a new class of multistage, one-step, variable-stepsize, variable-coefficient implicit Runge-Kutta methods to solve oscillatory ODE problems. The coefficients of the methods are functions of the frequency and the stepsize. We refer to this class as trigonometric implicit Runge-Kutta (TIRK) methods. They integrate an equation exactly if its solution is a trigonometric polynomial with a known frequency. We characterize the order and A-stability of the methods and establish results similar to those of classical algebraic collocation RK methods. (c) 2006 Elsevier B.V. All rights reserved.
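As a hedged illustration of the generalized collocation construction sketched in this abstract (the notation below is a generic assumption, not the paper's exact formulation): an s-stage collocation method determines an interpolant u satisfying

    u(t_n) = y_n,
    u'(t_n + c_i h) = f(t_n + c_i h, u(t_n + c_i h)),   i = 1, ..., s,
    y_{n+1} = u(t_n + h).

In classical collocation, u is an algebraic polynomial of degree s; in a trigonometric variant, u is instead drawn from a space such as span{1, cos(\omega t), sin(\omega t), ...}, so the resulting Runge-Kutta coefficients a_{ij} and b_i become functions of the product \omega h, and any solution lying in that trigonometric space with known frequency \omega is reproduced exactly.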
Abstract:
Background: Determination of the subcellular location of a protein is essential to understanding its biochemical function. This information can provide insight into the function of hypothetical or novel proteins. These data are difficult to obtain experimentally but have become especially important since many whole genome sequencing projects have been finished and many resulting protein sequences are still lacking detailed functional information. In order to address this paucity of data, many computational prediction methods have been developed. However, these methods have varying levels of accuracy and perform differently based on the sequences that are presented to the underlying algorithm. It is therefore useful to compare these methods and monitor their performance. Results: In order to perform a comprehensive survey of prediction methods, we selected only methods that accepted large batches of protein sequences, were publicly available, and were able to predict localization to at least nine of the major subcellular locations (nucleus, cytosol, mitochondrion, extracellular region, plasma membrane, Golgi apparatus, endoplasmic reticulum (ER), peroxisome, and lysosome). The selected methods were CELLO, MultiLoc, Proteome Analyst, pTarget and WoLF PSORT. These methods were evaluated using 3763 mouse proteins from SwissProt that represent the source of the training sets used in development of the individual methods. In addition, an independent evaluation set of 2145 mouse proteins from LOCATE with a bias towards subcellular localizations underrepresented in SwissProt was used. The sensitivity and specificity were calculated for each method and compared to a theoretical value based on what might be observed by random chance. Conclusion: No individual method had a sufficient level of sensitivity across both evaluation sets that would enable reliable application to hypothetical proteins. All methods showed lower performance on the LOCATE dataset, and performance varied across individual subcellular localizations. Proteins localized to the secretory pathway were the most difficult to predict, while nuclear and extracellular proteins were predicted with the highest sensitivity.
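As a hedged illustration of the kind of per-location evaluation described above, the following Python sketch computes one-vs-rest sensitivity and specificity for a single subcellular location; the function name, labels and toy data are assumptions for illustration, not material from the study.

    def sensitivity_specificity(true_locs, pred_locs, location):
        # One-vs-rest sensitivity and specificity for a single subcellular location.
        tp = fn = fp = tn = 0
        for truth, pred in zip(true_locs, pred_locs):
            if truth == location:
                if pred == location:
                    tp += 1
                else:
                    fn += 1
            else:
                if pred == location:
                    fp += 1
                else:
                    tn += 1
        sens = tp / (tp + fn) if (tp + fn) else float("nan")
        spec = tn / (tn + fp) if (tn + fp) else float("nan")
        return sens, spec

    # Example with a small, made-up prediction set.
    truth = ["nucleus", "cytosol", "nucleus", "lysosome", "cytosol"]
    preds = ["nucleus", "nucleus", "nucleus", "cytosol", "cytosol"]
    for loc in sorted(set(truth)):
        sens, spec = sensitivity_specificity(truth, preds, loc)
        print(loc, round(sens, 2), round(spec, 2))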
Abstract:
This article reviews the statistical methods that have been used to study the planar distribution, and especially clustering, of objects in histological sections of brain tissue. The objective of these studies is usually quantitative description, comparison between patients or correlation between histological features. Objects of interest such as neurones, glial cells, blood vessels or pathological features such as protein deposits appear as sectional profiles in a two-dimensional section. These objects may not be randomly distributed within the section but exhibit a spatial pattern, a departure from randomness either towards regularity or clustering. The methods described include simple tests of whether the planar distribution of a histological feature departs significantly from randomness using randomized points, lines or sample fields and more complex methods that employ grids or transects of contiguous fields, and which can detect the intensity of aggregation and the sizes, distribution and spacing of clusters. The usefulness of these methods in understanding the pathogenesis of neurodegenerative diseases such as Alzheimer's disease and Creutzfeldt-Jakob disease is discussed. © 2006 The Royal Microscopical Society.
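One of the simplest grid-based approaches of the kind surveyed above is the variance-to-mean ratio (index of dispersion) computed from counts in contiguous sample fields. The Python sketch below is a generic illustration under that assumption, not a reproduction of any specific method from the review.

    def index_of_dispersion(counts):
        # Variance-to-mean ratio (VMR) of counts per sample field:
        # ~1 for a random (Poisson) pattern, >1 suggests clustering, <1 regularity.
        n = len(counts)
        mean = sum(counts) / n
        variance = sum((c - mean) ** 2 for c in counts) / (n - 1)
        vmr = variance / mean
        # Under complete spatial randomness, (n - 1) * VMR is approximately
        # chi-square distributed with n - 1 degrees of freedom.
        return vmr, (n - 1) * vmr

    # Example: counts of sectional profiles in 10 contiguous sample fields (made up).
    counts = [0, 1, 0, 7, 9, 0, 1, 0, 8, 0]
    vmr, chi_sq = index_of_dispersion(counts)
    print(f"VMR = {vmr:.2f}, statistic = {chi_sq:.1f} on {len(counts) - 1} df")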
Abstract:
Purpose - The purpose of this paper is to examine consumer emotions and the social science and observation measures that can be utilised to capture the emotional experiences of consumers. The paper is not setting out to solve the theoretical debate surrounding emotion research, but rather to provide an assessment of methodological options available to researchers to aid their investigation into both the structure and content of the consumer emotional experience, acknowledging both the conscious and subconscious elements of that experience. Design/methodology/approach - A review of a wide range of prior research from the fields of marketing, consumer behaviour, psychology and neuroscience is examined to identify the different observation methods available to marketing researchers in the study of consumer emotion. This review also considers the self-report measures available to researchers and identifies the main theoretical debates concerning emotion to provide a comprehensive overview of the issues surrounding the capture of emotional responses in a marketing context and to highlight the benefits that observation methods offer this area of research. Findings - This paper evaluates three observation methods and four widely used self-report measures of emotion used in a marketing context. Whilst it is recognised that marketers have shown a preference for the use of self-report measures in prior research, mainly due to ease of implementation, it is posited that the benefits of observation methodology and the wealth of data that can be obtained using such methods can complement prior research. In addition, the use of observation methods can not only enhance our understanding of the consumer emotion experience but also enable us to collaborate with researchers from other fields in order to make progress in understanding emotion. Originality/value - This paper brings perspectives and methods together to provide an up-to-date consideration of emotion research for marketers. In order to generate valuable research in this area there is an identified need for discussion and implementation of the observation techniques available to marketing researchers working in this field. An evaluation of a variety of methods is undertaken as a starting point for discussion or consideration of different observation techniques and how they can be utilised.
Abstract:
Matrix application continues to be a critical step in sample preparation for matrix-assisted laser desorption/ionization (MALDI) mass spectrometry imaging (MSI). Imaging of small molecules such as drugs and metabolites is particularly problematic because the commonly used washing steps to remove salts are usually omitted as they may also remove the analyte, and analyte spreading is more likely with conventional wet matrix application methods. We have developed a method which uses the application of matrix as a dry, finely divided powder, here referred to as dry matrix application, for the imaging of drug compounds. This appears to offer a complementary method to wet matrix application for the MALDI-MSI of small molecules, with the alternative matrix application techniques producing different ion profiles, and allows the visualization of compounds not observed using wet matrix application methods. We demonstrate its value in imaging clozapine from rat kidney and 4-bromophenyl-1,4-diazabicyclo(3.2.2)nonane-4-carboxylic acid from rat brain. In addition, exposure of the dry matrix coated sample to a saturated moist atmosphere appears to enhance the visualization of a different set of molecules.
Abstract:
Liposomes have been imaged using a plethora of techniques. However, few of these methods offer the ability to study these systems in their natural hydrated state without the requirement of drying, staining, and fixation of the vesicles. However, the ability to image a liposome in its hydrated state is the ideal scenario for visualization of these dynamic lipid structures and environmental scanning electron microscopy (ESEM), with its ability to image wet systems without prior sample preparation, offers potential advantages to the above methods. In our studies, we have used ESEM to not only investigate the morphology of liposomes and niosomes but also to dynamically follow the changes in structure of lipid films and liposome suspensions as water condenses on to or evaporates from the sample. In particular, changes in liposome morphology were studied using ESEM in real time to investigate the resistance of liposomes to coalescence during dehydration thereby providing an alternative assay of liposome formulation and stability. Based on this protocol, we have also studied niosome-based systems and cationic liposome/DNA complexes. Copyright © Informa Healthcare.
Abstract:
Reproducible preparation of a number of modified clay and clay-like materials by both conventional and microwave-assisted chemistry, and their subsequent characterisation, has been achieved. These materials are designed as hydrocracking catalysts for the upgrading of liquids obtained by the processing of coal. Contact with both coal-derived liquids and heavy petroleum resids has demonstrated that these catalysts are superior to established proprietary catalysts in terms of both initial activity and deactivation resistance. Of particular activity were a chromium-pillared montmorillonite and a tin-intercalated laponite. Layered Double Hydroxides (LDHs) have exhibited encouraging thermal stability. Development of novel methods for hydrocracking coal-derived liquids, using a commercial microwave oven, modified reaction vessels and coal model compounds, has been attempted. Whilst safe and reliable operation of a high-pressure microwave "bomb" apparatus employing hydrogen has been achieved, no hydrotreatment reactions occurred.
Abstract:
This thesis examines the present provisions for pre-conception care and the views of the providers of services. Pre-conception care is seen by some clinicians and health educators as a means of making any necessary changes in lifestyle, corrections to imbalances in the nutritional status of the prospective mother (and father) and the assessment of any medical problems, thus maximizing the likelihood of the normal development of the baby. Pre-conception care may be described as a service to bridge the gap between the family planning clinic and the first ante-natal booking appointment. There were three separate foci for the empirical research - the Foresight organisation (a charity which has pioneered pre-conception care in Britain); the pre-conception care clinic at the West London Hospital, Hammersmith; and the West Midlands Regional Health Authority. The six main sources of data were: twenty-five clinicians operating Foresight pre-conception clinics, couples attending pre-conception clinics, committee members of the Foresight organisation, staff of the West London Hospital pre-conception clinic, Hammersmith, District Health Education Officers working in the West Midlands Regional Health Authority and the members of the Ante-Natal Care Action Group, a sub-group of the Regional Health Advisory Group on Health Promotion and Preventive Medicine. A range of research methods was adopted. These were as follows: questionnaires and report forms used in co-operation with the Foresight clinicians, interviews, participant observation, discussions and informal meetings and, finally, literature and official documentation. The research findings illustrated that pre-conception care services provided at the predominantly private Foresight clinics were of a rather 'ad hoc' nature. The type of provision varied considerably and clearly reflected the views held by its providers. The protocol which had been developed to assist in the standardization of results was not followed by the clinicians. The pre-conception service provided at the West London Hospital shared some similarities in its approach with the Foresight provision; a major difference was that it did not advocate the use of routine hair trace metal analysis. Interviews with District Health Education Officers and with members of the Ante-Natal Care Action Group revealed a tentative and cautious approach to pre-conception care generally and to the Foresight approach in particular. The thesis concludes with a consideration of the future of pre-conception care and the prospects for the establishment of a comprehensive pre-conception care service.
Abstract:
A combination of experimental methods was applied at a clogged, horizontal subsurface flow (HSSF) municipal wastewater tertiary treatment wetland (TW) in the UK, to quantify the extent of surface and subsurface clogging which had resulted in undesirable surface flow. The three-dimensional hydraulic conductivity profile was determined using a purpose-made device which recreates the constant-head permeameter test in situ. The hydrodynamic pathways were investigated by performing dye tracing tests with Rhodamine WT and a novel multi-channel, data-logging, flow-through fluorimeter which allows synchronous measurements to be taken from a matrix of sampling points. Hydraulic conductivity varied in all planes, with the lowest measurement of 0.1 m d⁻¹ corresponding to the surface layer at the inlet, and the maximum measurement of 1550 m d⁻¹ located at a 0.4 m depth at the outlet. According to dye tracing results, the region where the overland flow ceased received five times the average flow, which then vertically short-circuited below the rhizosphere. The tracer breakthrough curve obtained from the outlet showed that this preferential flow-path accounted for approximately 80% of the flow overall and arrived 8 h before a distinctly separate secondary flow-path. The overall volumetric efficiency of the clogged system was 71% and the hydrology was simulated using a dual-path, dead-zone storage model. It is concluded that uneven inlet distribution, continuous surface loading and high rhizosphere resistance are responsible for the clog formation observed in this system. The average inlet hydraulic conductivity was 2 m d⁻¹, suggesting that current European design guidelines, which predict that the system will reach an equilibrium hydraulic conductivity of 86 m d⁻¹, do not adequately describe the hydrology of mature systems.
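For context, the constant-head permeameter measurement recreated in situ reduces to Darcy's law. The short Python sketch below illustrates the calculation with invented numbers; it is not code or data from the study.

    def constant_head_conductivity(flow_m3_per_day, area_m2, head_loss_m, length_m):
        # Darcy's law for a constant-head test: Q = K * A * (dh / L),
        # so K = Q * L / (A * dh), returned here in metres per day.
        return flow_m3_per_day * length_m / (area_m2 * head_loss_m)

    # Example with invented values: 0.05 m3/day through a 0.01 m2 cross-section,
    # 0.25 m flow length, 0.10 m head difference.
    print(constant_head_conductivity(0.05, 0.01, 0.10, 0.25), "m/day")  # 12.5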
Abstract:
Purpose. We describe the profile and associations of anisometropia and aniso-astigmatism in a population-based sample of children. Methods. The Northern Ireland Childhood Errors of Refraction (NICER) study used a stratified random cluster design to recruit a representative sample of children from schools in Northern Ireland. Examinations included cycloplegic (1% cyclopentolate) autorefraction, and measures of axial length, anterior chamber depth, and corneal curvature. χ2 tests were used to assess variations in the prevalence of anisometropia and aniso-astigmatism by age group, with logistic regression used to compare odds of anisometropia and aniso-astigmatism with refractive status (myopia, emmetropia, hyperopia). The Mann-Whitney U test was used to examine interocular differences in ocular biometry. Results. Data from 661 white children aged 12 to 13 years (50.5% male) and 389 white children aged 6 to 7 years (49.6% male) are presented. The prevalence of anisometropia ≥1 diopter sphere (DS) did not differ statistically significantly between 6- to 7-year-old (8.5%; 95% confidence interval [CI], 3.9–13.1) and 12- to 13-year-old (9.4%; 95% CI, 5.9–12.9) children. The prevalence of aniso-astigmatism ≥1 diopter cylinder (DC) did not vary statistically significantly between 6- to 7-year-old (7.7%; 95% CI, 4.3–11.2) and 12- to 13-year-old (5.6%; 95% CI, 0.5–8.1) children. Anisometropia and aniso-astigmatism were more common in 12- to 13-year-old children with hyperopia ≥+2 DS. Anisometropic eyes had greater axial length asymmetry than nonanisometropic eyes. Aniso-astigmatic eyes were more asymmetric in axial length and corneal astigmatism than eyes without aniso-astigmatism. Conclusions. In this population, there is a high prevalence of axial anisometropia and corneal/axial aniso-astigmatism, associated with hyperopia, but whether these relations are causal is unclear. Further work is required to clarify the developmental mechanism behind these associations.
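The prevalence figures with 95% confidence intervals quoted above follow the familiar proportion-with-normal-approximation form. The Python sketch below illustrates that calculation with made-up counts; the study's stratified cluster design would in practice call for design-adjusted intervals.

    import math

    def prevalence_ci(cases, n, z=1.96):
        # Prevalence with a normal-approximation 95% confidence interval.
        p = cases / n
        se = math.sqrt(p * (1 - p) / n)
        return p, max(0.0, p - z * se), min(1.0, p + z * se)

    # Example with made-up counts: 37 of 389 children meeting a criterion.
    p, low, high = prevalence_ci(37, 389)
    print(f"prevalence = {100 * p:.1f}% (95% CI {100 * low:.1f} to {100 * high:.1f})")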
Abstract:
We present a parallel genetic algorithm for finding matrix multiplication algorithms. For 3 x 3 matrices our genetic algorithm successfully discovered algorithms requiring 23 multiplications, which are equivalent to the currently best known human-developed algorithms. We also studied the cases with fewer multiplications and evaluated the suitability of the methods discovered. Although our evolutionary method did not reach the theoretical lower bound, it led to an approximate solution for 22 multiplications.
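The central step in evaluating any candidate in such a search is checking that a proposed set of bilinear products actually reconstructs the matrix product. The Python sketch below is a generic verifier of that property (illustrated with the trivial 27-multiplication algorithm), not the genetic algorithm described in the abstract.

    import numpy as np

    def check_bilinear_algorithm(U, V, W, n=3, trials=20, tol=1e-8):
        # Check whether the rank-1 terms (U, V, W) compute the n x n matrix product:
        # vec(C) = W @ ((U @ vec(A)) * (V @ vec(B))) for random A, B.
        # U, V have shape (r, n*n); W has shape (n*n, r), where r is the number
        # of scalar multiplications used by the candidate algorithm.
        rng = np.random.default_rng(0)
        for _ in range(trials):
            A = rng.standard_normal((n, n))
            B = rng.standard_normal((n, n))
            products = (U @ A.ravel()) * (V @ B.ravel())  # the r scalar multiplications
            C = (W @ products).reshape(n, n)
            if not np.allclose(C, A @ B, atol=tol):
                return False
        return True

    # Sanity check with the trivial 27-multiplication algorithm for 3 x 3 matrices:
    # multiplication k, indexed by (i, j, l), computes A[i, l] * B[l, j] and
    # contributes it to C[i, j].
    n, r = 3, 27
    U = np.zeros((r, n * n))
    V = np.zeros((r, n * n))
    W = np.zeros((n * n, r))
    k = 0
    for i in range(n):
        for j in range(n):
            for l in range(n):
                U[k, i * n + l] = 1.0
                V[k, l * n + j] = 1.0
                W[i * n + j, k] = 1.0
                k += 1
    print(check_bilinear_algorithm(U, V, W))  # True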
Abstract:
Although crisp data are fundamentally indispensable for determining the profit Malmquist productivity index (MPI), the observed values in real-world problems are often imprecise or vague. These imprecise or vague data can be suitably characterized with fuzzy and interval methods. In this paper, we reformulate the conventional profit MPI problem as an imprecise data envelopment analysis (DEA) problem, and propose two novel methods for measuring the overall profit MPI when the inputs, outputs, and price vectors are fuzzy or vary in intervals. We develop a fuzzy version of the conventional MPI model by using a ranking method, and solve the model with a commercial off-the-shelf DEA software package. In addition, we define an interval for the overall profit MPI of each decision-making unit (DMU) and divide the DMUs into six groups according to the intervals obtained for their overall profit efficiency and MPIs. We also present two numerical examples to demonstrate the applicability of the two proposed models and exhibit the efficacy of the procedures and algorithms. © 2011 Elsevier Ltd.
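As a hedged illustration of the interval arithmetic underlying an interval-valued index, the Python sketch below forms a ratio of positive intervals and gives a coarse three-way reading; the ratio form and the classification are simplifications for illustration only, not the paper's overall profit MPI model or its six-group scheme.

    def interval_ratio(numerator, denominator):
        # Ratio of two positive intervals: [a, b] / [c, d] = [a / d, b / c].
        (a, b), (c, d) = numerator, denominator
        assert 0 < a <= b and 0 < c <= d
        return (a / d, b / c)

    def classify(index_interval):
        # Coarse three-way reading of an interval index (illustrative only).
        low, high = index_interval
        if low > 1:
            return "productivity progress"
        if high < 1:
            return "productivity regress"
        return "indeterminate (interval straddles 1)"

    # Example: interval profit efficiency of one DMU in two periods (made-up data).
    eff_t, eff_t1 = (0.62, 0.70), (0.81, 0.90)
    mpi = interval_ratio(eff_t1, eff_t)
    print(mpi, "->", classify(mpi))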
Abstract:
Recently, we have developed the hierarchical Generative Topographic Mapping (HGTM), an interactive method for visualization of large high-dimensional real-valued data sets. In this paper, we propose a more general visualization system by extending HGTM in three ways, which allows the user to visualize a wider range of data sets and better support the model development process. 1) We integrate HGTM with noise models from the exponential family of distributions. The basic building block is the Latent Trait Model (LTM). This enables us to visualize data of inherently discrete nature, e.g., collections of documents, in a hierarchical manner. 2) We give the user a choice of initializing the child plots of the current plot in either interactive, or automatic mode. In the interactive mode, the user selects "regions of interest," whereas in the automatic mode, an unsupervised minimum message length (MML)-inspired construction of a mixture of LTMs is employed. The unsupervised construction is particularly useful when high-level plots are covered with dense clusters of highly overlapping data projections, making it difficult to use the interactive mode. Such a situation often arises when visualizing large data sets. 3) We derive general formulas for magnification factors in latent trait models. Magnification factors are a useful tool to improve our understanding of the visualization plots, since they can highlight the boundaries between data clusters. We illustrate our approach on a toy example and evaluate it on three more complex real data sets. © 2005 IEEE.
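Magnification factors of the kind derived in the paper quantify how the mapping from latent space to data space stretches local area. The Python sketch below approximates the generic quantity sqrt(det(J^T J)) with a finite-difference Jacobian on a toy mapping; it is not the closed-form latent trait model expressions derived in the paper.

    import numpy as np

    def magnification_factor(mapping, x, eps=1e-5):
        # Approximate the local magnification factor sqrt(det(J^T J)) of a smooth
        # mapping f: R^L -> R^D at latent point x, via a finite-difference Jacobian.
        x = np.asarray(x, dtype=float)
        f0 = np.asarray(mapping(x), dtype=float)
        J = np.empty((f0.size, x.size))
        for i in range(x.size):
            dx = np.zeros_like(x)
            dx[i] = eps
            J[:, i] = (np.asarray(mapping(x + dx)) - f0) / eps
        return np.sqrt(np.linalg.det(J.T @ J))

    # Toy mapping from a 2-D latent space into 3-D data space.
    f = lambda x: np.array([x[0], x[1], x[0] ** 2 + x[1] ** 2])
    print(magnification_factor(f, [0.0, 0.0]))  # ~1: the surface is nearly flat here
    print(magnification_factor(f, [1.0, 1.0]))  # ~3: local area is stretched here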
Abstract:
Oxidation of proteins has received much attention in recent decades because oxidized proteins have been shown to accumulate and to be implicated in the progression and pathophysiology of several diseases, such as Alzheimer's disease and coronary heart disease. As a result, research scientists have become increasingly keen to measure accurately the level of oxidized protein in biological materials and to determine the precise site of oxidative attack on the protein, in order to gain insight into the molecular mechanisms involved in disease progression. Several methods for measuring protein carbonylation have been implemented in different laboratories around the world. However, to date no method prevails as the most accurate, reliable and robust. The present paper aims to give an overview of the common methods used to determine protein carbonylation in biological material and to highlight their limitations and potential. The ultimate goal is to provide quick tips for rapid decision making when a method has to be selected, taking into consideration the advantages and drawbacks of each method.
Abstract:
* The work is partially supported by Grant no. NIP917 of the Ministry of Science and Education – Republic of Bulgaria.