884 results for Textual analysis Content analysis
Abstract:
Population size estimation with discrete or nonparametric mixture models is considered, and reliable ways of constructing the nonparametric mixture model estimator are reviewed and put into perspective. The maximum likelihood estimator of the mixing distribution is constructed for any number of components, up to the global nonparametric maximum likelihood bound, using the EM algorithm. In addition, the estimators of Chao and Zelterman are considered, with some generalisations of Zelterman's estimator. All computations are done with CAMCR, software developed specifically for population size estimation with mixture models. Several examples and data sets are discussed and the estimators illustrated. Problems with the mixture model-based estimators are highlighted.
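As a hedged illustration of two of the simpler estimators mentioned above (this is not the CAMCR software, and the frequency counts are invented), the following Python sketch computes Chao's lower-bound estimator and a basic form of Zelterman's estimator from the capture frequency counts f1 and f2:

```python
# Illustrative sketch only: Chao's lower-bound and Zelterman's estimators of
# population size from capture frequency counts.
# f1 = number of units observed exactly once, f2 = number observed exactly twice.
import math

def chao_estimate(f1, f2, n_observed):
    """Chao's lower bound: observed count plus f1^2 / (2 * f2)."""
    return n_observed + f1 ** 2 / (2 * f2)

def zelterman_estimate(f1, f2, n_observed):
    """Zelterman's estimator based on a truncated-Poisson rate lambda = 2*f2/f1."""
    lam = 2 * f2 / f1
    return n_observed / (1 - math.exp(-lam))

# Hypothetical frequency data: 50 units seen once, 20 seen twice, 10 seen three times.
f1, f2, n_observed = 50, 20, 50 + 20 + 10
print(chao_estimate(f1, f2, n_observed))       # ~142.5
print(zelterman_estimate(f1, f2, n_observed))  # ~145.3
```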
Abstract:
The meltabilities of 14 process cheese samples were determined at 2 and 4 weeks after manufacture using sensory analysis, a computer vision method, and the Olson and Price test. Sensory analysis meltability correlated with both computer vision meltability (R-2 = 0.71, P < 0.001) and Olson and Price meltability (R-2 = 0.69, P < 0.001). There was a marked lack of correlation between the computer vision method and the Olson and Price test. This study showed that the Olson and Price test gave greater repeatability than the computer vision method. Results showed that process cheese meltability decreased with increasing inorganic salt content and with lower moisture/fat ratios. There was very little evidence in this study that process cheese meltability changed between 2 and 4 weeks after manufacture.
Abstract:
Consumer studies of meat have tended to use quantitative methodologies, providing a wealth of statistically malleable information but little in-depth insight into consumer perceptions of meat. The aim of the present study was, therefore, to understand the factors perceived as important in the selection of chicken meat, using a qualitative methodology. Focus group discussions were tape recorded, transcribed verbatim and content analysed for major themes. The themes that arose implied that “appearance” and “convenience” were the most important determinants of choice of chicken meat, and these factors appeared to be associated with perceptions of freshness, healthiness, product versatility and concepts of value. A descriptive model has been developed to illustrate the interrelationships between factors affecting chicken meat choice. This study indicates that those involved in the production and retailing of chicken products should concentrate on product appearance and convenience as market drivers for their products.
Abstract:
The butanol-HCl spectrophotometric assay is widely used for quantifying extractable and insoluble condensed tannins (CT, syn. proanthocyanidins) in foods, feeds, and foliage of herbaceous and woody plants, but the method underestimates total CT content when applied directly to plant material. To improve CT quantitation, we tested various cosolvents with butanol-HCl and found that acetone increased anthocyanidin yields from two forage Lotus species having contrasting procyanidin and prodelphinidin compositions. A butanol-HCl-iron assay run with 50% (v/v) acetone gave linear responses with Lotus CT standards and increased estimates of total CT in Lotus herbage and leaves by up to 3.2-fold over the conventional method run without acetone. The use of thiolysis to determine the purity of CT standards further improved quantitation. Gel-state 13C and 1H–13C HSQC NMR spectra of insoluble residues collected after butanol-HCl assays revealed that acetone increased anthocyanidin yields by facilitating complete solubilization of CT from tissue.
Abstract:
Global NDVI data are routinely derived from the AVHRR, SPOT-VGT, and MODIS/Terra earth observation records for a range of applications, from terrestrial vegetation monitoring to climate change modeling. This has led to a substantial interest in the harmonization of multisensor records. Most evaluations of the internal consistency and continuity of global multisensor NDVI products have focused on time-series harmonization in the spectral domain, often neglecting the spatial domain. We fill this void by applying variogram modeling (a) to evaluate the differences in spatial variability between 8-km AVHRR, 1-km SPOT-VGT, and 1-km, 500-m, and 250-m MODIS NDVI products over eight EOS (Earth Observing System) validation sites, and (b) to characterize the decay of spatial variability as a function of pixel size (i.e. data regularization) for spatially aggregated Landsat ETM+ NDVI products and a real multisensor dataset. First, we demonstrate that the conjunctive analysis of two variogram properties, the sill and the mean length scale metric, provides a robust assessment of the differences in spatial variability between multiscale NDVI products that are due to spatial (nominal pixel size, point spread function, and view angle) and non-spatial (sensor calibration, cloud clearing, atmospheric corrections, and length of multi-day compositing period) factors. Next, we show that as the nominal pixel size increases, the decay of spatial information content follows a logarithmic relationship, with a stronger fit for the spatially aggregated NDVI products (R2 = 0.9321) than for the native-resolution AVHRR, SPOT-VGT, and MODIS NDVI products (R2 = 0.5064). This relationship serves as a reference for evaluating differences in spatial variability and length scales in multiscale datasets at native or aggregated spatial resolutions. The outcomes of this study suggest that multisensor NDVI records cannot be integrated into a long-term data record without proper consideration of all factors affecting their spatial consistency. Hence, we propose an approach for selecting the spatial resolution at which differences in spatial variability between NDVI products from multiple sensors are minimized. This approach provides practical guidance for the harmonization of long-term multisensor datasets.
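The abstract does not give implementation details, but the two variogram properties it relies on can be illustrated with a minimal sketch. The Python below computes an empirical semivariogram for a synthetic 1-D NDVI transect and reads off a sill and a rough length scale; the synthetic data, the smoothing kernel, and the 95%-of-sill rule are all assumptions made for illustration:

```python
# Minimal empirical semivariogram sketch for a 1-D NDVI transect (illustrative only).
import numpy as np

def empirical_semivariogram(values, max_lag):
    """gamma(h) = 0.5 * mean squared difference between points separated by lag h."""
    lags = np.arange(1, max_lag + 1)
    gamma = np.array([0.5 * np.mean((values[h:] - values[:-h]) ** 2) for h in lags])
    return lags, gamma

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, 600)
kernel = np.exp(-0.5 * (np.arange(-30, 31) / 10.0) ** 2)
field = np.convolve(noise, kernel / kernel.sum(), mode="valid")   # spatially correlated signal
ndvi = 0.5 + 0.1 * (field - field.mean()) / field.std()           # rescale to plausible NDVI values

lags, gamma = empirical_semivariogram(ndvi, max_lag=50)
sill = gamma[-10:].mean()                               # plateau value of the variogram
length_scale = lags[np.argmax(gamma >= 0.95 * sill)]    # first lag where gamma nears the sill
print(f"sill = {sill:.4f}, length scale = {length_scale} pixels")
```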
Abstract:
Potassium (K) fertilizers are used in intensive and extensive agricultural systems to maximize production. However, there are both financial and environmental costs to K-fertilization. It is therefore important to optimize the efficiency with which K-fertilizers are used. Cultivating crops that acquire and/or utilize K more effectively can reduce the use of K-fertilizers. The aim of the present study was to determine the genetic factors affecting K utilization efficiency (KUtE), defined as the reciprocal of shoot K concentration (1/K(shoot)), and K acquisition efficiency (KUpE), defined as shoot K content, in Brassica oleracea. Genetic variation in K(shoot) was estimated using a structured diversity foundation set (DFS) of 376 accessions and in 74 commercial genotypes grown in glasshouse and field experiments that included phosphorus (P) supply as a treatment factor. Chromosomal quantitative trait loci (QTL) associated with K(shoot) and KUpE were identified using a genetic mapping population grown in the glasshouse and field. Putative QTL were tested using recurrent backcross substitution lines in the glasshouse. More than two-fold variation in K(shoot) was observed among DFS accessions grown in the glasshouse, a significant proportion of which could be attributed to genetic factors. Several QTL associated with K(shoot) were identified, which, despite a significant correlation in K(shoot) among genotypes grown in the glasshouse and field, differed between these two environments. A QTL associated with K(shoot) in glasshouse-grown plants (chromosome C7 at 62.2 cM) was confirmed using substitution lines. This QTL corresponds to a segment of Arabidopsis chromosome 4 containing genes encoding the K+ transporters AtKUP9, AtAKT2, AtKAT2 and AtTPK3. There is sufficient genetic variation in B. oleracea to breed for both KUtE and KUpE. However, as QTL associated with these traits differ between glasshouse and field environments, marker-assisted breeding programmes must carefully consider the conditions under which the crop will be grown.
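Restating the two efficiency definitions from the abstract in formula form (the notation, and the expansion of "shoot K content" as concentration multiplied by shoot dry biomass, are my additions):

```latex
% KUtE and KUpE as defined in words in the abstract; M_shoot denotes shoot dry biomass.
\mathrm{KUtE} = \frac{1}{[\mathrm{K}]_{\text{shoot}}},
\qquad
\mathrm{KUpE} = [\mathrm{K}]_{\text{shoot}} \times M_{\text{shoot}}
```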
Abstract:
In this paper we consider the structure of dynamically evolving networks modelling information and activity moving across a large set of vertices. We adopt the communicability concept, which generalizes the notion of centrality defined for static networks. We define the primary network structure within the whole as comprising the most influential vertices (both as senders and receivers of dynamically sequenced activity). We present a methodology based on successive vertex knockouts, up to a very small fraction of the whole primary network, that can characterize the nature of the primary network as being either relatively robust and lattice-like (with redundancies built in) or relatively fragile and tree-like (with sensitivities and few redundancies). We apply these ideas to the analysis of evolving networks derived from fMRI scans of resting human brains. We show that the estimation of performance parameters via the structure tests of the corresponding primary networks is subject to less variability than that observed across a very large population of such scans. Hence the differences within the population are significant.
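The abstract does not specify the exact communicability formulation used; one common construction for a time-ordered sequence of adjacency matrices is sketched below in Python. The network data and the downweighting parameter a are illustrative assumptions, and the row and column sums serve as broadcast and receive scores for ranking influential vertices:

```python
# Hedged sketch of dynamic communicability for an evolving network: the running
# product Q = (I - a*A1)^-1 (I - a*A2)^-1 ... accumulates walks that respect the
# time ordering of the network slices.
import numpy as np

def dynamic_communicability(adjacency_sequence, a=0.1):
    """Return broadcast (row-sum) and receive (column-sum) scores for each vertex."""
    n = adjacency_sequence[0].shape[0]
    Q = np.eye(n)
    for A in adjacency_sequence:
        Q = Q @ np.linalg.inv(np.eye(n) - a * A)  # requires a * spectral_radius(A) < 1
    return Q.sum(axis=1), Q.sum(axis=0)

# Three random time slices of a 20-vertex network (illustrative data only).
rng = np.random.default_rng(1)
slices = [(rng.random((20, 20)) < 0.1).astype(float) for _ in range(3)]
broadcast, receive = dynamic_communicability(slices, a=0.05)
top_senders = np.argsort(broadcast)[::-1][:5]   # candidate "primary network" vertices
print(top_senders)
```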
Abstract:
The purpose of this paper is to explore how companies that hold carbon trading accounts under the European Union Emissions Trading Scheme (EU ETS) respond to climate change by using disclosures on carbon emissions as a means to generate legitimacy, compared with companies that do not. The study is based on disclosures made in annual reports and stand-alone sustainability reports of UK listed companies from 2001 to 2012. The study uses content analysis to capture both the quality and volume of the carbon disclosures. The results show that there is a significant increase in both the quality and volume of the carbon disclosures after the launch of the EU ETS. Companies with carbon trading accounts provide more detailed disclosures than those without an account. We also find that company size is positively correlated with the disclosures, while the association with industry is inconclusive.
Abstract:
Social tagging has become very popular around the Internet as well as in research. The main idea behind tagging is to allow users to provide metadata for web content from their perspective, to facilitate categorization and retrieval. Many factors influence users' tag choice, and many studies have been conducted to reveal these factors by analysing tagging data. This paper uses two theories to identify these factors, namely semiotics theory and activity theory. The former treats tags as signs and the latter treats tagging as an activity. The paper uses both theories to analyse tagging behaviour by explaining all aspects of a tagging system, including tags, tagging system components and the tagging activity. The theoretical analysis produced a framework that was used to identify a number of factors. These factors can be considered as categories that can be consulted to redirect users' tag choices in order to support particular tagging behaviour, such as cross-lingual tagging.
Abstract:
Tagging provides support for the retrieval and categorization of online content, depending on users' tag choice. A number of models of tagging behaviour have been proposed to identify factors that are considered to affect taggers, such as users' tagging history. In this paper, we use semiotics analysis and activity theory to study the effect the system designer has on tagging behaviour. The framework we use shows the components that comprise the tagging system and how they interact to direct tagging behaviour. We analysed two collaborative tagging systems, CiteULike and Delicious, by applying our framework to their components. Using datasets from both systems, we found that 35% of CiteULike users did not provide tags, compared to only 0.1% of Delicious users. This was directly linked to the type of tools used by the system designer to support tagging.
Abstract:
The present study aims to contribute to an understanding of the complexity of lobbying activities within the accounting standard-setting process in the UK. The paper reports detailed content analysis of submission letters to four related exposure drafts. These preceded two accounting standards that set out the concept of control used to determine the scope of consolidation in the UK, except for reporting under international standards. Regulation on the concept of control provides rich patterns of lobbying behaviour due to its controversial nature and its significance to financial reporting. Our examination is conducted by dividing lobbyists into two categories, corporate and non-corporate, which are hypothesised (and demonstrated) to lobby differently. In order to test the significance of these differences we apply ANOVA techniques and univariate regression analysis. Corporate respondents are found to devote more attention to issues of specific applicability of the concept of control, whereas non-corporate respondents tend to devote more attention to issues of general applicability of this concept. A strong association between the issues raised by corporate respondents and their line of business is revealed. Both categories of lobbyists are found to advance conceptually-based arguments more often than economic consequences-based or combined arguments. However, when economic consequences-based arguments are used, they come exclusively from the corporate category of respondents.
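As a hedged sketch of the kind of group comparison this abstract describes (not the authors' code; the attention scores are invented), a one-way ANOVA comparing corporate and non-corporate respondents could look like this in Python:

```python
# Illustrative one-way ANOVA on hypothetical "attention to specific applicability"
# scores for corporate versus non-corporate respondents.
from scipy import stats

corporate = [0.62, 0.55, 0.71, 0.48, 0.66]      # invented proportion-of-letter scores
non_corporate = [0.31, 0.44, 0.28, 0.39, 0.35]

f_stat, p_value = stats.f_oneway(corporate, non_corporate)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")   # a small p suggests the groups differ
```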
Abstract:
This paper presents the development of a rapid method with ultraperformance liquid chromatography–tandem mass spectrometry (UPLC-MS/MS) for the qualitative and quantitative analyses of plant proanthocyanidins directly from crude plant extracts. The method uses a range of cone voltages to achieve the in-source depolymerization of both smaller oligomers and larger polymers. The depolymerization products are then further fragmented in the collision cell to enable their selective detection. This UPLC-MS/MS method is able to separately quantitate the terminal and extension units of the most common proanthocyanidin subclasses, that is, procyanidins and prodelphinidins. The resulting data enable (1) quantitation of the total proanthocyanidin content, (2) quantitation of total procyanidins and prodelphinidins, including the procyanidin/prodelphinidin ratio, (3) estimation of the mean degree of polymerization for the oligomers and polymers, and (4) estimation of how the different procyanidin and prodelphinidin types are distributed along the chromatographic hump typically produced by large proanthocyanidins. All of this is achieved within a 10 min analysis, which makes the presented method a significant addition to the chemistry tools currently available for the qualitative and quantitative analysis of complex proanthocyanidin mixtures from plant extracts.
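The derived quantities listed in points (1) to (3) follow directly from the terminal and extension subunit amounts. The Python sketch below uses invented concentrations to show the arithmetic; the mean degree of polymerization (mDP) formula used here, total subunits divided by terminal subunits, is the standard one and not necessarily the exact form used in the paper:

```python
# Illustrative arithmetic for totals, procyanidin/prodelphinidin (PC/PD) ratio,
# and mean degree of polymerization (mDP) from subunit amounts (assumed values).
pc_terminal, pc_extension = 2.1, 14.3   # mg/g, procyanidin subunits (hypothetical)
pd_terminal, pd_extension = 1.4, 12.9   # mg/g, prodelphinidin subunits (hypothetical)

total_pc = pc_terminal + pc_extension
total_pd = pd_terminal + pd_extension
total_pa = total_pc + total_pd                       # total proanthocyanidin content
pc_pd_ratio = total_pc / total_pd
mdp = (total_pc + total_pd) / (pc_terminal + pd_terminal)  # total subunits / terminal subunits

print(f"total PA = {total_pa:.1f} mg/g, PC/PD = {pc_pd_ratio:.2f}, mDP = {mdp:.1f}")
```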
Abstract:
This paper presents the PETS2009 outdoor crowd image analysis surveillance dataset and the performance evaluation of the people counting, detection and tracking results submitted to five IEEE Performance Evaluation of Tracking and Surveillance (PETS) workshops using the dataset. The evaluation was carried out using well-established metrics developed in the Video Analysis and Content Extraction (VACE) programme and by the CLassification of Events, Activities, and Relationships (CLEAR) consortium. The comparative evaluation highlights the detection and tracking performance of the authors’ systems in areas such as precision, accuracy and robustness, and includes a brief analysis of the metrics themselves to give further insight into the performance of the authors’ systems.
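The abstract does not list the individual metrics, but one widely used CLEAR metric for tracking accuracy is MOTA (Multiple Object Tracking Accuracy). The sketch below shows its computation on made-up per-frame counts, purely as an illustration of the metric family referred to above:

```python
# Illustrative MOTA computation: 1 minus the total error rate over all frames.
def mota(misses, false_positives, id_switches, ground_truth_counts):
    """MOTA = 1 - (misses + false positives + ID switches) / total ground-truth objects."""
    errors = sum(misses) + sum(false_positives) + sum(id_switches)
    return 1.0 - errors / sum(ground_truth_counts)

# Per-frame error counts over five frames (hypothetical).
misses          = [1, 0, 2, 1, 0]
false_positives = [0, 1, 0, 0, 1]
id_switches     = [0, 0, 1, 0, 0]
ground_truth    = [10, 10, 11, 12, 12]

print(f"MOTA = {mota(misses, false_positives, id_switches, ground_truth):.3f}")  # ~0.873
```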
Abstract:
Understanding the nature of air parcels that exhibit ice-supersaturation is important because they are the regions where both cirrus and aircraft contrails, which affect the radiation balance, can potentially form. Ice-supersaturated air parcels in the upper troposphere and lower stratosphere over the North Atlantic are investigated using Lagrangian trajectories. The trajectory calculations use ERA-Interim data for three winter and three summer seasons, resulting in approximately 200,000 trajectories with ice-supersaturation for each season. For both summer and winter, the median duration of ice-supersaturation along a trajectory is less than 6 hours. Of the air that becomes ice-supersaturated, 5% in the troposphere and 23% in the stratosphere remains ice-supersaturated for at least 24 hours. Weighting the ice-supersaturation duration by the observed frequency indicates the likely overall importance of the longer-duration ice-supersaturated trajectories. Ice-supersaturated air parcels typically experience a decrease in moisture content while ice-supersaturated, suggesting that cirrus clouds eventually form in the majority of such air. A comparison is made between short-lived (less than 24 h) and long-lived (greater than 24 h) ice-supersaturated air flows. For both air flows, ice-supersaturation occurs around the northernmost part of the trajectory. Short-lived ice-supersaturated air flows show no significant differences in speed or direction of movement from subsaturated air parcels. However, long-lived ice-supersaturated air occurs in slower-moving air flows, which implies that it is not associated with the fastest-moving air through a jet stream.