950 results for STRUCTURAL QUALITY
Abstract:
Customer satisfaction and retention are key issues for organizations in today's competitive marketplace. As such, much research and revenue have been invested in developing accurate ways of assessing consumer satisfaction at both the macro (national) and micro (organizational) level, facilitating comparisons in performance both within and between industries. Since the instigation of the national customer satisfaction indices (CSI), partial least squares (PLS) has been used to estimate the CSI models in preference to structural equation models (SEM) because it does not rely on strict assumptions about the data. However, this choice was based upon some misconceptions about the use of SEMs and does not take into consideration more recent advances in SEM, including estimation methods that are robust to non-normality and missing data. In this paper, both SEM and PLS approaches were compared by evaluating perceptions of the Isle of Man Post Office products and customer service using a CSI format. The new robust SEM procedures were found to be advantageous over PLS. Product quality was found to be the only driver of customer satisfaction, while image and satisfaction were the only predictors of loyalty, thus arguing for the specificity of postal services.
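For readers who want to experiment with the SEM side of such a comparison, a minimal sketch using the semopy Python package follows. The latent variables and indicator names (q1, img1, sat1, ...) and the input file are illustrative assumptions, not the paper's actual survey items.

```python
import pandas as pd
import semopy

# Hypothetical CSI-style model: Quality and Image drive Satisfaction,
# which (with Image) drives Loyalty. Names are illustrative only.
desc = """
Quality =~ q1 + q2 + q3
Image =~ img1 + img2 + img3
Satisfaction =~ sat1 + sat2 + sat3
Loyalty =~ loy1 + loy2
Satisfaction ~ Quality + Image
Loyalty ~ Satisfaction + Image
"""

data = pd.read_csv("csi_survey.csv")  # hypothetical file of item responses
model = semopy.Model(desc)
model.fit(data)            # default ML estimation; robust variants exist in SEM software
print(model.inspect())     # path coefficients and measurement loadings
```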
Abstract:
This paper develops a structural model for estimating the impact of regulatory decisions aimed at setting download-speed standards on market structure and performance. We characterize a setting under which quality standards improve both service quality and availability. Regarding quality, we evaluate the impact of quality standards on local demand using a detailed database of broadband internet subscribers, discriminated by the main attributes of an internet subscription contract, such as location, supplier, monthly fee, and download and upload speeds. From these results, we identify the effect of quality regulation on the behavior of internet providers within a differentiated-products framework. As a consequence, we are able to assert that internet service providers respond to quality regulation with more intense product differentiation, which contributes to demand expansion and therefore to improved broadband penetration indicators.
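A common starting point for this kind of differentiated-products demand estimation is the Berry (1994) logit inversion, in which the log odds of a plan's market share against the outside option are regressed on its attributes. The sketch below shows the idea with statsmodels; the CSV file and column names are assumptions, and the paper's actual model is likely richer.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# One row per plan-market; columns here are assumptions, not the paper's data.
df = pd.read_csv("broadband_plans.csv")  # hypothetical: share, outside_share, fee, down_speed, up_speed

# Berry (1994): ln(s_j) - ln(s_0) = x_j * beta - alpha * p_j + xi_j
df["log_odds"] = np.log(df["share"]) - np.log(df["outside_share"])
ols = smf.ols("log_odds ~ fee + down_speed + up_speed", data=df).fit()
print(ols.summary())  # negative fee coefficient and positive speed coefficients expected
```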
Abstract:
The Integrated Catchment Model of Nitrogen (INCA-N) was applied to the River Lambourn, a Chalk river-system in southern England. The model's abilities to simulate the long-term trend and seasonal patterns in observed stream water nitrate concentrations from 1920 to 2003 were tested. This is the first time a semi-distributed, daily time-step model has been applied to simulate such a long time period and then used to calculate detailed catchment nutrient budgets which span the conversion of pasture to arable during the late 1930s and 1940s. Thus, this work goes beyond source apportionment and looks to demonstrate how such simulations can be used to assess the state of the catchment and develop an understanding of system behaviour. The mass-balance results from 1921, 1922, 1991, 2001 and 2002 are presented, and those for 1991 are compared to other modelled and literature values of loads associated with nitrogen soil processes and export. The variations highlighted the problem of comparing modelled fluxes with point measurements but proved useful for identifying the most poorly understood inputs and processes, thereby providing an assessment of input data and model structural uncertainty. The modelled terrestrial and instream mass-balances also highlight the importance of the hydrological conditions in pollutant transport. Between 1922 and 2002, increased inputs of nitrogen from fertiliser, livestock and deposition altered the nitrogen balance, with a shift from a possible reduction in soil fertility but little environmental impact in 1922, to a situation of nitrogen accumulation in the soil, groundwater and instream biota in 2002. In 1922 and 2002 it was estimated that approximately 2 and 18 kg N ha⁻¹ yr⁻¹, respectively, were exported from the land to the stream. The utility of the approach and further considerations for the best use of models are discussed.
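The budget logic behind those figures can be illustrated with a toy annual nitrogen mass balance. All numbers below are placeholders except the roughly 18 kg N ha⁻¹ yr⁻¹ stream export quoted for 2002; none are INCA-N output.

```python
# Toy catchment nitrogen budget (kg N per hectare per year).
inputs = {"fertiliser": 80.0, "livestock": 25.0, "deposition": 15.0}
outputs = {"crop_offtake": 90.0, "export_to_stream": 18.0, "denitrification": 5.0}

# A positive balance implies accumulation in soil, groundwater and biota,
# as the abstract describes for 2002.
balance = sum(inputs.values()) - sum(outputs.values())
print(f"Net N accumulation: {balance:.1f} kg N/ha/yr")
```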
Abstract:
Background: Selecting the highest quality 3D model of a protein structure from a number of alternatives remains an important challenge in the field of structural bioinformatics. Many Model Quality Assessment Programs (MQAPs) have been developed which adopt various strategies in order to tackle this problem, ranging from the so-called "true" MQAPs capable of producing a single energy score based on a single model, to methods which rely on structural comparisons of multiple models or additional information from meta-servers. However, it is clear that no current method can consistently separate the highest accuracy models from the lowest. In this paper, a number of the top performing MQAP methods are benchmarked in the context of the potential value that they add to protein fold recognition. Two novel methods are also described: ModSSEA, which is based on the alignment of predicted secondary structure elements, and ModFOLD, which combines several true MQAP methods using an artificial neural network. Results: The ModSSEA method is found to be an effective model quality assessment program for ranking multiple models from many servers; however, further accuracy can be gained by using the consensus approach of ModFOLD. The ModFOLD method is shown to significantly outperform the true MQAPs tested and is competitive with methods which make use of clustering or additional information from multiple servers. Several of the true MQAPs are also shown to add value to most individual fold recognition servers by improving model selection, when applied as a post filter in order to re-rank models. Conclusion: MQAPs should be benchmarked appropriately for the practical context in which they are intended to be used. Clustering based methods are the top performing MQAPs where many models are available from many servers; however, they often do not add value to individual fold recognition servers when limited models are available. Conversely, the true MQAP methods tested can often be used as effective post filters for re-ranking few models from individual fold recognition servers, and further improvements can be achieved using a consensus of these methods.
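The consensus idea behind a ModFOLD-style combiner can be sketched as a small regression network over individual MQAP scores. The sketch below uses scikit-learn with random placeholder data; the number of input scores and the training target (e.g., the observed quality of each model) are assumptions, not the published setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Placeholder training set: 4 MQAP scores per model, and an observed
# quality value per model (e.g., similarity to the native structure).
rng = np.random.default_rng(0)
X_train = rng.random((500, 4))
y_train = rng.random(500)

# Small neural network that maps the individual scores to one consensus score.
net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
net.fit(X_train, y_train)

consensus_score = net.predict(rng.random((1, 4)))
print(consensus_score)
```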
Abstract:
Blumeria graminis is an economically important obligate plant-pathogenic fungus whose entire genome was recently sequenced and manually annotated using ab initio in silico predictions [7]. Employing large-scale proteogenomic analysis, we are now able to independently verify the existence of proteins predicted by 24% of the open reading frame models. We compared the haustoria and sporulating hyphae proteomes and identified 71 proteins exclusively in haustoria, the feeding and effector-delivery organs of the pathogen. These proteins are significantly smaller than the rest of the protein pool and are predicted to be secreted. Most do not share any similarities with Swiss-Prot or TrEMBL entries, nor do they possess any identifiable Pfam domains. We used a novel automated prediction pipeline to model the 3D structures of the proteins, identify putative ligand-binding sites and predict regions of intrinsic disorder. This revealed that the protein set found exclusively in haustoria is significantly less disordered than the rest of the identified Blumeria proteins or random (and representative) protein sets generated from the yeast proteome. For most of the haustorial proteins with unknown functions, no good templates could be found from which to generate high-quality models. Thus, these unknown proteins potentially represent new protein folds that may be specific to the interaction of the pathogen with its host.
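The "significantly smaller" claim amounts to a two-sample test on protein lengths. A hedged sketch with SciPy follows, using synthetic lengths rather than the Blumeria data; the distributions are invented for illustration.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Synthetic amino-acid lengths standing in for the two protein sets.
rng = np.random.default_rng(0)
haustoria_len = rng.normal(150, 40, 71)   # 71 haustoria-exclusive proteins
other_len = rng.normal(420, 150, 500)     # rest of the identified proteome

# One-sided test: are haustorial proteins shorter?
stat, p = mannwhitneyu(haustoria_len, other_len, alternative="less")
print(f"U={stat:.0f}, p={p:.2e}")  # a small p-value supports 'significantly smaller'
```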
Abstract:
School effectiveness is a microtechnology of change. It is a relay device, which transfers macro policy into everyday processes and priorities in schools. It is part of the growing apparatus of performance evaluation. Change is brought about by a focus on the school as a site-based system to be managed. There has been corporate restructuring in response to the changing political economy of education. There are now new work regimes and radical changes in organizational cultures. Education, like other public services, is now characterized by a range of structural realignments, new relationships between purchasers and providers and new coalitions between management and politics. In this article, we will argue that the school effectiveness movement is an example of new managerialism in education. It is part of an ideological and technological process to industrialize educational productivity. That is to say, the emphasis on standards and standardization is evocative of production regimes drawn from industry. There is a belief that education, like other public services, can be managed to ensure optimal outputs and zero defects in the educational product.
Abstract:
Motivation: The ability of a simple method (MODCHECK) to determine the sequence–structure compatibility of a set of structural models generated by fold recognition is tested in a thorough benchmark analysis. Four Model Quality Assessment Programs (MQAPs) were tested on 188 targets from the latest LiveBench-9 automated structure evaluation experiment. We systematically test and evaluate whether the MQAP methods can successfully detect native-like models. Results: We show that, compared with the other three methods tested, MODCHECK is the most reliable method for consistently performing the best top model selection and for ranking the models. In addition, we show that the choice of model similarity score used to assess a model's similarity to the experimental structure can influence the overall performance of these tools. Although these MQAP methods fail to improve the model selection performance for methods that already incorporate three-dimensional (3D) protein structural information, an improvement is observed for methods that are purely sequence-based, including the best profile–profile methods. This suggests that even the best sequence-based fold recognition methods can still be improved by taking 3D structural information into account.
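The "top model selection" benchmark reduces to asking, per target, whether the model the MQAP ranks first is also the model most similar to the experimental structure. A minimal sketch, with stand-in scores and similarity values:

```python
# Per-target top-1 success test: does the MQAP's first-ranked model coincide
# with the model closest to the experimental structure (e.g., by TM-score)?
def top1_success(mqap_scores, similarity_to_native):
    best_by_mqap = max(range(len(mqap_scores)), key=mqap_scores.__getitem__)
    best_by_sim = max(range(len(similarity_to_native)), key=similarity_to_native.__getitem__)
    return best_by_mqap == best_by_sim

# Stand-in values: here the MQAP picks model 2, but model 3 is actually best.
print(top1_success([0.4, 0.9, 0.7], [0.35, 0.80, 0.82]))  # False
```

Averaging this indicator over all targets, and repeating it with different similarity scores, gives exactly the kind of sensitivity to the chosen model similarity score that the abstract reports.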
Abstract:
The accurate prediction of the biochemical function of a protein is becoming increasingly important, given the unprecedented growth of both structural and sequence databanks. Consequently, computational methods are required to analyse such data in an automated manner to ensure genomes are annotated accurately. Protein structure prediction methods, for example, are capable of generating approximate structural models on a genome-wide scale. However, the detection of functionally important regions in such crude models, as well as structural genomics targets, remains an extremely important problem. The method described in the current study, MetSite, represents a fully automatic approach for the detection of metal-binding residue clusters applicable to protein models of moderate quality. The method involves using sequence profile information in combination with approximate structural data. Several neural network classifiers are shown to be able to distinguish metal sites from non-sites with a mean accuracy of 94.5%. The method was demonstrated to identify metal-binding sites correctly in LiveBench targets where no obvious metal-binding sequence motifs were detectable using InterPro. Accurate detection of metal sites was shown to be feasible for low-resolution predicted structures generated using mGenTHREADER where no side-chain information was available. High-scoring predictions were observed for a recently solved hypothetical protein from Haemophilus influenzae, indicating a putative metal-binding site.
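As a rough illustration of the classification setup (not the published MetSite pipeline), the sketch below trains a scikit-learn neural network on placeholder profile-plus-geometry features; the feature dimensions and labels are invented, and only the reported ~94.5% accuracy comes from the abstract.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

# Placeholder features per residue cluster: sequence-profile columns plus
# approximate structural descriptors. Real inputs would come from PSI-BLAST
# profiles and predicted model geometry.
rng = np.random.default_rng(0)
X = rng.random((1000, 20))
y = rng.integers(0, 2, 1000)  # 1 = metal-binding site, 0 = non-site

clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())  # ~0.5 on random data; ~0.945 reported on real data
```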
Abstract:
Dhaka cheese is a semihard artisanal variety made mainly from bovine milk, using very simple pressing methods. Experimental cheeses were pressed at gauge pressures up to 31 kPa for 12 h at 24 °C and 70% RH. These cheeses were subsequently examined for their compositional, textural and rheological properties, and their microstructures were investigated by confocal laser microscopy. The cheese pressed at 15.6 kPa was found to have the best compositional and structural properties.
Abstract:
The present study compares the impact of thermal and high-pressure high-temperature (HPHT) processing on the volatile profile (via non-targeted headspace fingerprinting) and on structural and nutritional quality parameters (via targeted approaches) of orange and yellow carrot purees. The effect of oil enrichment was also considered. Since oil enrichment affects the volatility of compounds, the effect of oil was not studied when comparing the volatile fraction. For the targeted part, as yellow carrot purees were shown to contain a very low amount of carotenoids, focus was given to orange carrot purees. The results of the non-targeted approach demonstrated that HPHT processing exerts a distinct effect on the volatile fractions compared to thermal processing. In addition, differently colored carrot varieties are characterized by distinct headspace fingerprints. From a structural point of view, limited or no difference could be observed between orange carrot purees treated with HPHT or thermal processes, both for samples without and with oil. From a nutritional point of view, only in samples with oil did significant isomerisation of all-trans-β-carotene occur due to both processes. Overall, for this type of product and for the selected conditions, HPHT processing seems to have a different impact on the volatile profile but a rather similar impact on the structural and nutritional attributes compared to thermal processing.
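Non-targeted headspace fingerprints of this kind are commonly explored with unsupervised projection such as PCA, where separation of treatment groups in the score plot indicates distinct volatile fractions. A sketch with placeholder data follows; the sample and signal counts are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

# Placeholder headspace fingerprint matrix: rows are samples (varieties x
# treatments), columns are peak areas or m/z signals.
rng = np.random.default_rng(0)
peaks = rng.random((24, 300))

# Project onto the first two principal components; clustering of HPHT vs
# thermally treated samples in this score plot would indicate distinct
# volatile fractions, as the abstract reports.
scores = PCA(n_components=2).fit_transform(peaks)
print(scores[:5])
```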
Abstract:
The effects of lactic acid, SO2, temperature, and their interactions were assessed on the dynamic steeping of a Brazilian dent corn (hybrid XL 606) to determine the ideal relationship among these variables and improve the wet-milling process for the production of starch and corn by-products. A 2×2×3 factorial experimental design was used, with SO2 levels of 0.05 and 0.1% (w/v), lactic acid levels of 0 and 0.5% (v/v), and temperatures of 52, 60, and 68 °C. Starch yield was used as the deciding factor in choosing the best treatment. Lactic acid added to the steep solution improved the starch yield by an average of 5.6 percentage points. SO2 was more readily available to break down the structural protein network at the 0.1% level than at the 0.05% level. Starch-gluten separation was difficult at 68 °C. The lactic acid concentration, SO2 concentration, and steeping temperature giving the best starch recovery were 0.5% (v/v), 0.1% (w/v), and 52 °C, respectively. The Intermittent Milling and Dynamic Steeping (IMDS) process produced, on average, 1.4% more starch than the conventional 36-hr steeping process. Protein in starch, oil content in germ, and germ damage were used as quality factors. Total steep time can be reduced from 36 hr for conventional wet-milling to 8 hr for the IMDS process.
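The treatment comparison corresponds to a 2×2×3 factorial ANOVA on starch yield. A sketch with statsmodels follows; the input file and column names are hypothetical stand-ins for the experimental records.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical table: one row per steeping run, with factor levels and the
# measured starch yield (columns: so2, lactic, temp, starch_yield).
df = pd.read_csv("steep_runs.csv")

# Full factorial model with all main effects and interactions.
model = smf.ols("starch_yield ~ C(so2) * C(lactic) * C(temp)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # which factors and interactions matter
```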
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)