926 results for METHOD OF ANALYSIS
Abstract:
Differential protein expression analysis based on modification of selected amino acids with labelling reagents has become the major method of choice for quantitative proteomics. One such methodology, two-dimensional difference gel electrophoresis (2-D DIGE), uses a matched set of fluorescent N-hydroxysuccinimidyl (NHS) ester cyanine dyes to label lysine residues in different samples, which can be run simultaneously on the same gels. Here we report the use of iodoacetylated cyanine (ICy) dyes for the labelling of cysteine thiols in 2-D DIGE-based redox proteomics. Characterisation of ICy dye labelling in relation to its stoichiometry, sensitivity and specificity is described, as well as comparison of ICy dye with NHS-Cy dye labelling and several protein staining methods. We have optimised conditions for labelling of nonreduced, denatured samples and report increased sensitivity for a subset of thiol-containing proteins, allowing accurate monitoring of redox-dependent thiol modifications and expression changes. Cysteine labelling was then combined with lysine labelling in a multiplex 2-D DIGE proteomic study of redox-dependent and ErbB2-dependent changes in epithelial cells exposed to oxidative stress. This study identifies differentially modified proteins involved in cellular redox regulation, protein folding, proliferative suppression, glycolysis and cytoskeletal organisation, revealing the complexity of the response to oxidative stress and the impact that overexpression of ErbB2 has on this response.
Abstract:
A first step in interpreting the wide variation in trace gas concentrations measured over time at a given site is to classify the data according to the prevailing weather conditions. In order to classify measurements made during two intensive field campaigns at Mace Head, on the west coast of Ireland, an objective method of assigning data to different weather types has been developed. Air-mass back trajectories calculated using winds from ECMWF analyses, arriving at the site in 1995–1997, were allocated to clusters based on a statistical analysis of the latitude, longitude and pressure of the trajectory at 12 h intervals over 5 days. The robustness of the analysis was assessed by using an ensemble of back trajectories calculated for four points around Mace Head. Separate analyses were made for each of the 3 years, and for four 3-month periods. The use of these clusters in classifying ground-based ozone measurements at Mace Head is described, including the need to exclude data which have been influenced by local perturbations to the regional flow pattern, for example, by sea breezes. Even with a limited data set, based on 2 months of intensive field measurements in 1996 and 1997, there are statistically significant differences in ozone concentrations in air from the different clusters. The limitations of this type of analysis for classification and interpretation of ground-based chemistry measurements are discussed.
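A minimal sketch of the trajectory-allocation step described above, assuming k-means on feature vectors built from each trajectory's latitude, longitude and pressure at 12 h intervals over 5 days; the synthetic trajectories, the variable scaling and the cluster count are all assumptions, as the abstract does not specify the clustering algorithm used.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic stand-in for 5-day back trajectories arriving at Mace Head:
# 11 points per trajectory (12 h intervals over 5 days), each described
# by latitude, longitude and pressure.
n_traj, n_steps = 500, 11
lat = 53.3 + np.cumsum(rng.normal(0, 1.0, (n_traj, n_steps)), axis=1)
lon = -9.9 + np.cumsum(rng.normal(0, 2.0, (n_traj, n_steps)), axis=1)
prs = 950.0 + np.cumsum(rng.normal(0, 15.0, (n_traj, n_steps)), axis=1)

# Flatten each trajectory into one feature vector; scale pressure so the
# three variables contribute comparably to the Euclidean distance.
X = np.hstack([lat, lon, prs / 10.0])

# Allocate trajectories to clusters (the cluster count is an assumption).
labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(X)

# Ground-based ozone measurements can then be classified by the cluster
# of the coincident arrival trajectory, e.g. mean ozone per cluster.
```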
Abstract:
In this paper we investigate the commonly used autoregressive filter method of adjusting appraisal-based real estate returns to correct for the perceived biases induced in the appraisal process. Since the early work by Geltner (1989), many papers have been written on this topic, but remarkably few have considered the relationship between smoothing at the individual property level and the amount of persistence in the aggregate appraisal-based index. To investigate this issue in more detail we analyse a sample of individual property level appraisal data from the Investment Property Databank (IPD). We find that commonly used unsmoothing estimates overstate the extent of smoothing that takes place at the individual property level. There is also strong support for an ARFIMA representation of appraisal returns.
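For context on the unsmoothing approach examined in this and the following abstract, here is a minimal sketch of the first-order reverse filter often attributed to Geltner (1989); the parameterization, the smoothing parameter and the example returns are illustrative assumptions, not estimates from these studies.

```python
import numpy as np

def unsmooth(appraised, alpha):
    """Reverse-filter an appraisal-based return series.

    Assumes the standard first-order smoothing model:
        r_star[t] = alpha * r[t] + (1 - alpha) * r_star[t - 1],
    so the underlying return is recovered as
        r[t] = (r_star[t] - (1 - alpha) * r_star[t - 1]) / alpha.
    """
    r_star = np.asarray(appraised, dtype=float)
    return (r_star[1:] - (1.0 - alpha) * r_star[:-1]) / alpha

# Example: unsmoothing with an assumed smoothing parameter of 0.4.
smoothed = [0.010, 0.012, 0.008, 0.015, 0.011]
print(unsmooth(smoothed, alpha=0.4))
```

The papers' finding is that a smoothing parameter inferred from index-level persistence implies heavier smoothing than is actually observed in the property-level appraisal data.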
Abstract:
In this article, we investigate the commonly used autoregressive filter method of adjusting appraisal-based real estate returns to correct for the perceived biases induced in the appraisal process. Many articles have been written on appraisal smoothing but remarkably few have considered the relationship between smoothing at the individual property level and the amount of persistence in the aggregate appraisal-based index. To investigate this issue we analyze a large sample of appraisal data at the individual property level from the Investment Property Databank. We find that commonly used unsmoothing estimates at the index level overstate the extent of smoothing that takes place at the individual property level. There is also strong support for an ARFIMA representation of appraisal returns at the index level and an ARMA model at the individual property level.
Abstract:
The adaptive thermal comfort theory considers people as active rather than passive recipients in response to ambient physical thermal stimuli, in contrast with conventional, heat-balance-based thermal comfort theory. Occupants actively interact with the environments they occupy, drawing on physiological, behavioural and psychological adaptations to achieve ‘real world’ thermal comfort. This paper introduces a method of quantifying the physiological, behavioural and psychological portions of the adaptation process using the analytic hierarchy process (AHP), based on case studies conducted in the UK and China. In addition to the three categories of adaptation, which are viewed as criteria, six possible alternatives are considered: physiological indices/health status, the indoor environment, the outdoor environment, personal physical factors, environmental control and thermal expectation. With the AHP technique, all the above-mentioned criteria, factors and corresponding elements are arranged in a hierarchy tree and quantified using a series of pair-wise judgements. A sensitivity analysis is carried out to improve the quality of these results. The proposed quantitative weighting method provides researchers with opportunities to better understand the adaptive mechanisms and reveal the significance of each category for the achievement of adaptive thermal comfort.
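A minimal sketch of the AHP weighting step, assuming the standard eigenvector method on a pairwise judgement matrix over the three adaptation categories; the judgement values below are invented for illustration and are not those elicited in the study.

```python
import numpy as np

# Illustrative AHP weighting of the three adaptation categories
# (physiological, behavioural, psychological). The pairwise judgement
# values are invented for demonstration.
A = np.array([
    [1.0, 1/2, 3.0],   # physiological vs. (physio, behav, psych)
    [2.0, 1.0, 4.0],   # behavioural
    [1/3, 1/4, 1.0],   # psychological
])

# Priority weights: principal eigenvector of the judgement matrix.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = eigvecs[:, k].real
w = w / w.sum()

# Consistency check: CI = (lambda_max - n) / (n - 1), compared with the
# random index RI (0.58 for n = 3); CR < 0.1 is conventionally acceptable.
n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)
cr = ci / 0.58
print("weights:", w.round(3), "consistency ratio:", round(cr, 3))
```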
Abstract:
In this paper, we propose a new velocity constraint type for redundant drive wire mechanisms. The purpose of this paper is to demonstrate that the proposed velocity constraint module can fix the orientation of the movable part, and to use the kinematic analysis method to obtain the moving direction of the movable part. First, we discuss the necessity of this velocity constraint type and the possible applications of the proposed mechanism. Second, we derive the basic equations of a wire mechanism with this constraint type. Next, we present a method of motion analysis on active and passive constraint spaces, which is used to find the moving direction of a movable part. Finally, we apply the above analysis method to a wire mechanism with a velocity constraint module and to a wire mechanism with four double actuator modules. By evaluating the results, we demonstrate the validity of the proposed constraint type.
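As a generic illustration of finding a movable part's moving direction from velocity constraints (not the paper's specific derivation), the sketch below takes the feasible directions as the null space of an assumed constraint Jacobian, with one row standing in for an orientation-fixing velocity constraint.

```python
import numpy as np
from scipy.linalg import null_space

# Toy sketch: each row of J is one velocity constraint on the movable
# part's planar twist v = (vx, vy, omega), with J @ v = 0 for ideal
# constraints. The Jacobian below is invented for illustration; the
# paper derives its own constraint equations for the redundant drive
# wire mechanism.
J = np.array([
    [1.0, 0.0, 0.5],   # constraint on motion along x (with a moment arm)
    [0.0, 0.0, 1.0],   # velocity constraint fixing the orientation rate
])

# Feasible moving directions span the null space of the constraint Jacobian.
directions = null_space(J)
print(directions)   # here: one remaining direction, pure translation in y
```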
Abstract:
In this paper, we develop a method, termed the Interaction Distribution (ID) method, for the analysis of quantitative ecological network data. In many cases, quantitative network data sets are under-sampled, i.e. many interactions are poorly sampled or remain unobserved. Hence, the output of statistical analyses may fail to differentiate between patterns that are statistical artefacts and those which are real characteristics of ecological networks. The ID method can support assessment and inference of under-sampled ecological network data. In the current paper, we illustrate and discuss the ID method based on the properties of plant-animal pollination data sets of flower visitation frequencies. However, the ID method may be applied to other types of ecological networks. The method can supplement existing network analyses based on two definitions of the underlying probabilities for each combination of pollinator and plant species: (1) p_ij, the probability that a visit made by the i-th pollinator species takes place on the j-th plant species; and (2) q_ij, the probability that a visit received by the j-th plant species is made by the i-th pollinator. The method applies the Dirichlet distribution to estimate these two probabilities from a given empirical data set. The estimated mean values for p_ij and q_ij reflect the relative differences between recorded numbers of visits for different pollinator and plant species, and the estimated uncertainty of p_ij and q_ij decreases with higher numbers of recorded visits.
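A minimal sketch of the Dirichlet-based estimation underlying the ID method; the abstract specifies the distributional choice but not the prior, so the uniform prior and the visitation matrix below are assumptions for illustration.

```python
import numpy as np

# Illustrative Dirichlet-based estimate of the row probabilities p[i, j]
# (probability that a visit by pollinator i lands on plant j).
visits = np.array([
    [30, 5, 0],    # pollinator 1's recorded visits to plants 1..3
    [2,  1, 1],    # pollinator 2 (poorly sampled: wide uncertainty)
])
alpha = 1.0        # assumed uniform Dirichlet prior

counts = visits + alpha
totals = counts.sum(axis=1, keepdims=True)

p_mean = counts / totals                      # posterior mean of p[i, j]
p_var = p_mean * (1 - p_mean) / (totals + 1)  # Dirichlet marginal variance

print(p_mean.round(3))
print(np.sqrt(p_var).round(3))  # uncertainty shrinks with more visits

# q[i, j] (the visits received by plant j, attributed to pollinator i)
# is estimated the same way, normalizing over columns instead of rows.
```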
Abstract:
Mean field models (MFMs) of cortical tissue incorporate salient, average features of neural masses in order to model activity at the population level, thereby linking microscopic physiology to macroscopic observations, e.g., with the electroencephalogram (EEG). One of the common aspects of MFM descriptions is the presence of a high-dimensional parameter space capturing neurobiological attributes deemed relevant to the brain dynamics of interest. We study the physiological parameter space of a MFM of electrocortical activity and discover robust correlations between physiological attributes of the model cortex and its dynamical features. These correlations are revealed by the study of bifurcation plots, which show that the model responses to changes in inhibition belong to two archetypal categories or “families”. After investigating and characterizing them in depth, we discuss their essential differences in terms of four important aspects: power responses with respect to the modeled action of anesthetics, reaction to exogenous stimuli such as thalamic input, and distributions of model parameters and oscillatory repertoires when inhibition is enhanced. Furthermore, while the complexity of sustained periodic orbits differs significantly between families, we are able to show how metamorphoses between the families can be brought about by exogenous stimuli. We here unveil links between measurable physiological attributes of the brain and dynamical patterns that are not accessible by linear methods. They instead emerge when the nonlinear structure of parameter space is partitioned according to bifurcation responses. We call this general method “metabifurcation analysis”. The partitioning cannot be achieved by the investigation of only a small number of parameter sets and is instead the result of an automated bifurcation analysis of a representative sample of 73,454 physiologically admissible parameter sets. Our approach generalizes straightforwardly and is well suited to probing the dynamics of other models with large and complex parameter spaces.
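As a toy illustration of partitioning a parameter space by bifurcation responses rather than by raw parameter values, the sketch below sweeps an inhibition-like parameter across many sampled parameter sets of a Hopf normal form and clusters the resulting response curves into families; the model and every number here are assumptions standing in for the paper's cortical mean field model and its 73,454 parameter sets.

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy "metabifurcation" sketch: for many sampled parameter sets, compute
# a bifurcation response curve against an inhibition-like parameter,
# then cluster the curves into families.
rng = np.random.default_rng(1)
inhibition = np.linspace(0.0, 2.0, 40)

n_sets = 2000
a = rng.uniform(0.2, 1.8, n_sets)    # sampled "physiological" parameters
b = rng.choice([-1.0, 1.0], n_sets)  # sign flips the response direction

# Oscillation amplitude of the Hopf normal form: sqrt(mu) for mu > 0,
# with mu depending on inhibition differently across parameter sets.
mu = b[:, None] * (inhibition[None, :] - a[:, None])
curves = np.sqrt(np.clip(mu, 0.0, None))

# Partition parameter space by bifurcation response, not raw parameters.
families = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(curves)
print(np.bincount(families))  # two archetypal response families
```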
Abstract:
Lipid cubic phases are complex nanostructures that form naturally in a variety of biological systems, with applications including drug delivery and nanotemplating. Most X-ray scattering studies on lipid cubic phases have used unoriented polydomain samples as either bulk gels or suspensions of micrometer-sized cubosomes. We present a method of investigating cubic phases in a new form, as supported thin films that can be analyzed using grazing incidence small-angle X-ray scattering (GISAXS). We present GISAXS data on three lipid systems: phytantriol and two grades of monoolein (research and industrial). The use of thin films brings a number of advantages. First, the samples exhibit a high degree of uniaxial orientation about the substrate normal. Second, the new morphology allows precise control of the mesophase geometry and lattice parameter using a controlled temperature and humidity environment, and we demonstrate the controllable formation of oriented diamond and gyroid inverse bicontinuous cubic phases along with lamellar phases. Finally, the thin film morphology allows the induction of reversible phase transitions between these mesophase structures by changes in humidity on subminute time scales, and we present time-resolved GISAXS data monitoring these transformations.
Abstract:
This paper presents the development of a rapid method using ultra-performance liquid chromatography–tandem mass spectrometry (UPLC-MS/MS) for the qualitative and quantitative analyses of plant proanthocyanidins directly from crude plant extracts. The method utilizes a range of cone voltages to achieve the in-source depolymerization of both smaller oligomers and larger polymers. The depolymerization products formed are further fragmented in the collision cell to enable their selective detection. This UPLC-MS/MS method is able to separately quantitate the terminal and extension units of the most common proanthocyanidin subclasses, that is, procyanidins and prodelphinidins. The resulting data enable (1) quantitation of the total proanthocyanidin content, (2) quantitation of total procyanidins and prodelphinidins, including the procyanidin/prodelphinidin ratio, (3) estimation of the mean degree of polymerization for the oligomers and polymers, and (4) estimation of how the different procyanidin and prodelphinidin types are distributed along the chromatographic hump typically produced by large proanthocyanidins. All of this is achieved within a 10 min analysis, which makes the presented method a significant addition to the chemistry tools currently available for the qualitative and quantitative analyses of complex proanthocyanidin mixtures from plant extracts.
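To make point (3) concrete, here is the conventional arithmetic for the mean degree of polymerization from quantified terminal and extension units; the values are invented examples, not data from the study.

```python
# Conventional mean degree of polymerization (mDP) for proanthocyanidins:
#   mDP = (terminal + extension) / terminal
# The concentrations below are invented example values (e.g. mg/g).
terminal_units = 2.0    # quantified terminal flavan-3-ol units
extension_units = 14.0  # quantified extension units

mdp = (terminal_units + extension_units) / terminal_units
print(f"mean degree of polymerization: {mdp:.1f}")  # -> 8.0

# The procyanidin/prodelphinidin ratio follows similarly from the
# separately quantified PC and PD units.
pc, pd = 10.0, 6.0
print(f"PC/PD ratio: {pc / pd:.2f}")
```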
Abstract:
Predictions of twenty-first century sea level change show strong regional variation. Regional sea level change observed by satellite altimetry since 1993 is also not spatially homogeneous. By comparison with historical and pre-industrial control simulations using the atmosphere–ocean general circulation models (AOGCMs) of the CMIP5 project, we conclude that the observed pattern is generally dominated by unforced (internally generated) variability, although some regions, especially in the Southern Ocean, may already show an externally forced response. Simulated unforced variability cannot explain the observed trends in the tropical Pacific, but we suggest that this is due to inadequate simulation of variability by CMIP5 AOGCMs, rather than evidence of anthropogenic change. We apply the method of pattern scaling to projections of sea level change and show that it gives accurate estimates of future local sea level change in response to anthropogenic forcing as simulated by the AOGCMs under RCP scenarios, implying that the pattern will remain stable in future decades. We note, however, that use of a single integration to evaluate the performance of the pattern-scaling method tends to exaggerate its accuracy. We find that ocean volume mean temperature is generally a better predictor than global mean surface temperature of the magnitude of sea level change, and that the pattern is very similar under the different RCPs for a given model. We determine that the forced signal will be detectable above the noise of unforced internal variability within the next decade globally and may already be detectable in the tropical Atlantic.
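A minimal sketch of pattern scaling as applied here: local sea level change is approximated as a fixed spatial pattern multiplied by a global predictor time series, with the pattern estimated by per-grid-point regression; all arrays below are synthetic stand-ins for AOGCM output.

```python
import numpy as np

# Pattern scaling: local_change(x, t) ~ pattern(x) * predictor(t).
# The paper finds ocean volume mean temperature to be a better predictor
# than global mean surface temperature; here the predictor is synthetic.
rng = np.random.default_rng(0)
n_grid, n_years = 1000, 90

true_pattern = rng.normal(1.0, 0.3, n_grid)       # local change per unit predictor
predictor = np.linspace(0.0, 1.5, n_years)        # e.g. ocean mean warming (K)
noise = rng.normal(0.0, 0.5, (n_grid, n_years))   # unforced internal variability
local_slc = true_pattern[:, None] * predictor[None, :] + noise

# Least-squares estimate of the pattern at each grid point (fit through origin).
est_pattern = (local_slc @ predictor) / (predictor @ predictor)

# Project local change under a scenario by rescaling with the predictor.
projection_2100 = est_pattern * 2.0  # assumed predictor value at 2100
print(np.corrcoef(est_pattern, true_pattern)[0, 1])  # pattern recovery
```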
Abstract:
The large pine weevil, Hylobius abietis, is a serious pest of reforestation in northern Europe. However, weevils developing in stumps of felled trees can be killed by entomopathogenic nematodes applied to the soil around the stumps, and this method of control has been used at an operational level in the UK and Ireland. We investigated the factors affecting the efficacy of entomopathogenic nematodes in the control of the large pine weevil across 10 years of field experiments, by means of a meta-analysis of published studies and previously unpublished data. We investigated two species with different foraging strategies: the ‘ambusher’ Steinernema carpocapsae, the species most often used at an operational level, and the ‘cruiser’ Heterorhabditis downesi. Efficacy was measured both by the percentage reduction in the number of adults emerging relative to untreated controls and by the percentage parasitism of developing weevils in the stump. Both measures were significantly higher with H. downesi than with S. carpocapsae. General linear models were constructed for each nematode species separately, using substrate type (peat versus mineral soil) and tree species (pine versus spruce) as fixed factors, weevil abundance (estimated from the mean of untreated stumps) as a covariate, and percentage reduction or percentage parasitism as the response variable. For both nematode species, the most significant and parsimonious models showed that substrate type was generally, though not always, the most significant variable, whether replicates were at the site or stump level, and that peaty soils significantly promote the efficacy of both species. Efficacy, in terms of percentage parasitism, was not density dependent.
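A sketch of the kind of general linear model described above, with substrate and tree species as fixed factors and weevil abundance as a covariate; the data frame is an invented miniature, and the study fits separate models per nematode species to its field data.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic miniature of the modelling setup: percentage reduction
# modelled with substrate and tree species as fixed factors and weevil
# abundance as a covariate.
df = pd.DataFrame({
    "reduction": [72, 65, 48, 55, 80, 61, 43, 58],
    "substrate": ["peat", "peat", "mineral", "mineral"] * 2,
    "tree":      ["pine", "spruce"] * 4,
    "abundance": [12, 18, 25, 9, 14, 20, 30, 11],
})

model = smf.ols("reduction ~ C(substrate) + C(tree) + abundance", data=df).fit()
print(model.summary())
# A significant positive peat-vs-mineral contrast would correspond to the
# paper's finding that peaty soils promote efficacy.
```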
Abstract:
Dietary assessment in older adults can be challenging. The Novel Assessment of Nutrition and Ageing (NANA) method is a touch-screen computer-based food record that enables older adults to record their dietary intakes. The objective of the present study was to assess the relative validity of the NANA method for dietary assessment in older adults. For this purpose, three studies were conducted in which a total of ninety-four older adults (aged 65–89 years) used the NANA method of dietary assessment. On a separate occasion, participants completed a 4 d estimated food diary. Blood and 24 h urine samples were also collected from seventy-six of the volunteers for the analysis of biomarkers of nutrient intake. The results from all three studies were combined, and nutrient intake data collected using the NANA method were compared against the 4 d estimated food diary and biomarkers of nutrient intake. Bland–Altman analysis showed a reasonable agreement between the dietary assessment methods for energy and macronutrient intake; however, there were small, but significant, differences for energy and protein intake, reflecting the tendency for the NANA method to record marginally lower energy intakes. Significant positive correlations were observed between urinary urea and dietary protein intake using both the NANA and the 4 d estimated food diary methods, and between plasma ascorbic acid and dietary vitamin C intake using the NANA method. The results demonstrate the feasibility of computer-based dietary assessment in older adults, and suggest that the NANA method is comparable to the 4 d estimated food diary, and could be used as an alternative to the food diary for the short-term assessment of an individual’s dietary intake.
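For reference, a minimal implementation of the Bland–Altman agreement statistics used above (bias and 95% limits of agreement); the example intakes are invented, not study data.

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman agreement statistics between two methods.

    Returns the mean difference (bias) and the 95% limits of agreement
    (bias +/- 1.96 SD of the paired differences).
    """
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Invented example: energy intake (kcal/d) from the NANA record vs. a
# 4 d estimated food diary for five participants.
nana  = [1850, 2100, 1700, 1950, 2200]
diary = [1900, 2150, 1680, 2050, 2260]
bias, loa = bland_altman(nana, diary)
print(f"bias = {bias:.0f} kcal/d, 95% limits of agreement = {loa}")
```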
Abstract:
Background: The method of porosity analysis by water absorption has been carried out by storing specimens in pure water, but this does not exclude the potential plasticising effect of the water, generating unrealistic porosity values. Objective: The present study evaluated the reliability of this method of porosity analysis in polymethylmethacrylate denture base resins by determining the most satisfactory solution for storage (S), in which the plasticising effect was excluded. Materials and methods: Two specimen shapes (rectangular and maxillary denture base) and two denture base resins, water bath-polymerised (Classico) and microwave-polymerised (Acron MC), were used. Saturated anhydrous calcium chloride solutions (25%, 50%, 75%) and distilled water were used for specimen storage. Sorption isotherms were used to determine S. The porosity factor (PF) and diffusion coefficient (D) were calculated for S and for the groups stored in distilled water. ANOVA and Tukey tests were performed to identify significant differences in PF results, and the Kruskal-Wallis test and Dunn multiple comparison post hoc test were used for D results (alpha = 0.05). Results: For the Acron MC denture base shape, PF results were 0.24% (S 50%) and 1.37% (distilled water); for the rectangular shape, PF was 0.35% (S 75%) and 0.19% (distilled water). For the Classico denture base shape, PF results were 0.54% (S 75%) and 1.21% (distilled water); for the rectangular shape, PF was 0.7% (S 50%) and 1.32% (distilled water). PF results were similar in S and distilled water only for the Acron MC rectangular shape (p > 0.05). D results in distilled water were statistically higher than in S for all groups. Conclusions: The results of the study suggest that an adequate storage solution, chosen to exclude the plasticising effect, must be used when measuring porosity by water absorption.
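As a hedged illustration of how a diffusion coefficient can be estimated from water-sorption data, the sketch below assumes early-stage Fickian uptake in a thin sheet; the study's exact formulas for D and the porosity factor are not given in the abstract, so the model and all values here are assumptions.

```python
import numpy as np

# Assumed early-stage Fickian uptake in a thin sheet of thickness L:
#   M_t / M_inf = (4 / L) * sqrt(D * t / pi)
# so D = pi * (slope * L / 4)**2, where slope is fitted from the linear
# part of M_t/M_inf versus sqrt(t). All values below are invented.
L = 2.0e-3                                   # specimen thickness (m)
t = np.array([3600, 14400, 32400, 57600])    # time (s)
uptake = np.array([0.10, 0.20, 0.30, 0.40])  # M_t / M_inf (early stage)

slope = np.polyfit(np.sqrt(t), uptake, 1)[0]
D = np.pi * (slope * L / 4.0) ** 2
print(f"D ~ {D:.2e} m^2/s")
```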
Abstract:
Prior to 1970, state and federal agencies held exclusive enforcement responsibility over violations of pollution control standards. However, recognizing that the government had neither the time nor the resources to provide full enforcement, Congress created citizen suits. Citizen suits, first added by amendment to the Clean Air Act in 1970, authorize citizens to act as private attorneys general and to sue polluters for violating the terms of their operating permits. Since that time, Congress has included citizen suits in 13 other federal statutes. The citizen suit phenomenon is sufficiently new that little is known about it. However, we do know that citizen suits have increased rapidly since the early 1980s. Between 1982 and 1986 the number of citizen suits jumped from 41 to 266. Clearly, they are becoming a widely used method of enforcing the environmental statutes. This paper will provide a detailed description, analysis and evaluation of citizen suits. It will begin with an introduction and will then move on to provide some historical and descriptive background on such issues as how citizen suit powers are delegated, what limitations are placed on citizens, what parties are on each side of the suit, what citizens can enforce against, and the types of remedies available. The following section of the paper will provide an economic analysis of citizen suits. It will begin with a discussion of non-profit organizations, especially non-profit environmental organizations, detailing the economic factors which instigate their creation and activities. Three models will be developed to investigate the evolution and effects of citizen suits. The first model will provide an analysis of the demand for citizen suits from the point of view of a potential litigator, showing how varying remedies, limitations and reimbursement procedures can affect both the level and types of activities undertaken. The second model shows how firm behavior could be expected to respond to citizen suits. Finally, a third model will look specifically at the issue of efficiency to determine whether the introduction of citizen enforcement leads to greater or lesser economic efficiency in pollution control. The database on which the analysis rests consists of 1205 cases compiled by the author. For the purposes of this project, this list of citizen suit cases and their attributes was computerized and used to test a series of hypotheses derived from the three original economic models. The database includes information regarding plaintiffs, defendants, the date the notice and/or complaint was filed, and the statutes involved in the claim. The analysis focuses on six federal environmental statutes (the Clean Water Act, Resource Conservation and Recovery Act, Comprehensive Environmental Response, Compensation and Liability Act, Clean Air Act, Toxic Substances Control Act, and Safe Drinking Water Act) because the majority of citizen suits have occurred under these statutes.