890 results for Literature and experience
Abstract:
Aside from cracks, other surface defects, such as air pockets and discoloration, can be detrimental to the quality of concrete in terms of strength, appearance, and durability. For this reason, local and national codes provide standards for quantifying the quality impact of these concrete surface defects, and owners plan regular visual inspections to monitor surface conditions. However, manual visual inspection of concrete surfaces is a qualitative (and subjective) process with often unreliable results, because it relies on inspectors' own criteria and experience; it is also labor-intensive and time-consuming. This paper presents a novel, automated concrete surface defect detection and assessment approach that addresses these issues by automatically quantifying the extent of surface deterioration. Under this approach, images of the surface shot from a certain angle/distance can be used to automatically detect the number and size of surface air pockets and the degree of surface discoloration. The proposed method uses histogram equalization and filtering to extract such defects and identify their properties (e.g., size, shape, location). These properties are used to quantify the degree of impact on concrete surface quality and provide a numerical tool to help inspectors accurately evaluate concrete surfaces. The method has been implemented in C++, and results that validate its performance are presented.
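The abstract names histogram equalization and filtering followed by property extraction. The sketch below shows that general style of pipeline with OpenCV; the file name, blur kernel, threshold choice, and blob-size band are illustrative assumptions, not the paper's actual parameters (which also cover shape and discoloration measures).

```python
# Minimal sketch of a histogram-equalization + filtering pipeline for
# detecting dark surface blemishes (e.g., air pockets) as blobs.
# All parameters below are illustrative assumptions, not the paper's values.
import cv2

img = cv2.imread("surface.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input

eq = cv2.equalizeHist(img)        # normalize illumination and contrast
smooth = cv2.medianBlur(eq, 5)    # suppress fine texture noise

# Dark blemishes become foreground via an inverted Otsu threshold.
_, mask = cv2.threshold(smooth, 0, 255,
                        cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
pockets = []
for c in contours:
    area = cv2.contourArea(c)
    if 10 < area < 5000:          # assumed air-pocket size band (pixels)
        x, y, w, h = cv2.boundingRect(c)
        pockets.append({"area_px": area, "bbox": (x, y, w, h)})

# A crude discoloration proxy: gray-level spread after equalization.
print(f"{len(pockets)} candidate air pockets; "
      f"intensity std = {smooth.std():.1f}")
```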
Abstract:
People are alarmingly susceptible to manipulations that change both their expectations and experience of the value of goods. Recent studies in behavioral economics suggest such variability reflects more than mere caprice. People commonly judge options and prices in relative terms, rather than absolutely, and display strong sensitivity to exemplar and price anchors. We propose that these findings elucidate important principles about reward processing in the brain. In particular, relative valuation may be a natural consequence of adaptive coding of neuronal firing to optimise sensitivity across large ranges of value. Furthermore, the initial apparent arbitrariness of value may reflect the brain's attempts to optimally integrate diverse sources of value-relevant information in the face of perceived uncertainty. Recent findings in neuroscience support both accounts, and implicate regions in the orbitofrontal cortex, striatum, and ventromedial prefrontal cortex in the construction of value.
Abstract:
Confronted with high-variety, low-volume market demands, many companies, especially Japanese electronics manufacturers, have reconfigured their conveyor assembly lines and adopted seru production systems. A seru production system is a new type of work-cell-based manufacturing system. Extensive successful practice and experience show that a seru production system can combine the considerable flexibility of a job shop with the high efficiency of a conveyor assembly line. In implementing seru production, multi-skilled workers are the most important precondition, and several issues concerning them are central. In this paper, we investigate the training and assignment problem of workers when a conveyor assembly line is entirely reconfigured into several serus. We formulate a bi-objective mathematical model that aims to minimize the total training cost and to balance the total processing times among the multi-skilled workers in each seru. To obtain a satisfactory task-to-worker training plan and worker-to-seru assignment plan, a three-stage heuristic algorithm with nine steps is developed to solve this model. Several computational cases are then solved in MATLAB. The computational and analysis results validate the performance of the proposed mathematical model and heuristic algorithm. © 2013 Springer-Verlag London.
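The abstract describes the bi-objective model without reproducing it. As a rough sketch only, in hypothetical notation that is not the authors', such a model might pair a training-cost objective with a workload-balance objective:

```latex
% Hypothetical notation (not from the paper):
% x_{wt} = 1 if worker w is trained on task t, c_{wt} its training cost;
% T_k(x) = total processing time assigned to multi-skilled worker k.
\min \; f_1 = \sum_{w}\sum_{t} c_{wt}\, x_{wt}
\qquad
\min \; f_2 = \max_{k} T_k(x) \;-\; \min_{k} T_k(x)
```

Here "balance" is expressed as minimizing the spread of worker workloads; the paper's actual balance measure may differ.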
Abstract:
We point out the use of an incorrect definition of conversion efficiency in the literature and analyze the effects of waveguide length and pump power on conversion efficiency according to the correct definition. The existence of a locally optimal waveguide length and pump power is demonstrated theoretically and experimentally. Further analysis shows that the extremum of conversion efficiency can be achieved by jointly optimizing the waveguide length and pump power, and that it is limited only by the linear propagation loss and the effective carrier lifetime. (C) 2009 Optical Society of America
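The abstract does not reproduce either definition. For context only, a commonly used small-signal expression for four-wave-mixing conversion efficiency in a lossy waveguide, which already exhibits a locally optimal length, is

```latex
\eta \;\propto\; \left(\gamma P_p L_{\mathrm{eff}}\right)^{2} e^{-\alpha L},
\qquad
L_{\mathrm{eff}} = \frac{1 - e^{-\alpha L}}{\alpha},
```

where gamma is the nonlinear parameter, P_p the pump power, and alpha the linear loss; maximizing eta over L trades nonlinear interaction length against propagation loss. The paper's exact formulation, including carrier-lifetime effects, may differ.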
Abstract:
Radiation crosslinking of polymers depends mainly on the structure of the polymer chain. The flexibility and mobility of the chain directly influence the probability that reactive radicals recombine. Flexible-chain polymers are easier to crosslink than rigid-chain polymers; the latter must be crosslinked at high temperature, as most polymers can only crosslink above their melting point. Chain structure also influences the mechanism of radiation crosslinking. From results in the literature and in our laboratory, we find that flexible-chain polymers crosslink mainly in the H type, whereas rigid-chain polymers crosslink mainly in the Y type. (C) 2001 Published by Elsevier Science Ltd.
Abstract:
To avoid the hygroscopicity of LiCl specimens, the method of directly chlorinating Li_2CO_3 with NH_4Cl was successfully introduced into the thermal analysis of systems containing LiCl. The three fusibility diagrams of LiCl-KCl, LiCl-NaCl, and LiCl-LiF were determined using this method. The results are in agreement with the values reported in the literature, and the phase diagram of the LiCl-KCl-LiF ternary system was constructed based on these results. The temperature of the ternary eutectic, composed of 57.3mol%...
Abstract:
Timing data are infrequently reported in the aphasiological literature, and time taken is only a minor factor, where it is considered at all, in existing aphasia assessments. This is not surprising, because reaction times are difficult to obtain manually, but it is a pity, because speed data should be indispensable in assessing the severity of language processing disorders and in evaluating the effects of treatment. This paper argues that reporting accuracy data without discussing speed of performance gives an incomplete and potentially misleading picture of any cognitive function. Moreover, in deciding how to treat, when to continue treatment, and when to cease therapy, clinicians should have regard to both parameters: speed and accuracy of performance. Crerar, Ellis and Dean (1996) reported a study in which the written sentence comprehension of 14 long-term agrammatic subjects was assessed and treated using a computer-based microworld. Some statistically significant and durable treatment effects were obtained after a short period of focused therapy. Only accuracy data were reported in that (already long) paper, and interestingly, although it has been a widely read study, neither referees nor subsequent readers seemed to miss "the other side of the coin": how these participants compared with controls for speed of processing, and what effect treatment had on speed. This paper considers both aspects of the data and presents a tentative way of combining treatment effects on accuracy and speed of performance in a single indicator. Looking at rehabilitation this way gives a rather different perspective on which individuals benefited most from the intervention. It also demonstrates that while some subjects are capable of using metalinguistic skills to achieve normal accuracy scores even many years post-stroke, there is little prospect of reducing the time taken to within the normal range. Without considering speed of processing, the extent of this residual functional impairment can be overlooked.
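The abstract does not spell out the tentative combined indicator. Purely as illustration, one standard combination from the wider reaction-time literature (not necessarily the paper's indicator) is the inverse efficiency score,

```latex
\mathrm{IES} = \frac{\overline{RT}_{\mathrm{correct}}}{PC},
```

the mean correct response time divided by the proportion correct, so that both slowing and errors push the score upward.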
Abstract:
This paper reports on the use of benchmarking to improve the links between business and operations strategies. The use of benchmarking as a tool to facilitate improvements in these crucial links is examined. The existing literature on process benchmarking is used to form a structured questionnaire applied to six case studies of major manufacturing companies. Four of these case studies are presented in this paper to highlight the use of benchmarking in this application. Initial research results are presented, drawing upon the critical success factors identified both in the literature and in the case results. Recommendations for further work are outlined.
Abstract:
Marggraf Turley, R. (2002). The Politics of Language in Romantic Literature. Basingstoke: Palgrave Macmillan. RAE2008
Abstract:
Shears, J. (2006). Approaching the Unapproached Light: Milton and the Romantic Visionary. In G. Hopps and J. Stabler (Eds.), Romanticism and Religion from William Cowper to Wallace Stevens (pp.25-40). The Nineteenth Century Series. Aldershot: Ashgate. RAE2008
Abstract:
Grovier, Kelly, 'Keats and the Holocaust: Notes towards a post-temporalism', Literature and Theology (2003) 17 (4) pp.361-373 RAE2008
Abstract:
Williams, Gruffydd. 'The literary tradition to c. 1560', In: History of Merioneth, Vol. II: The Middle Ages (University of Wales Press, 2001), pp.507-628 RAE2008
Abstract:
In the first part of this paper we reviewed the fingerprint classification literature from two perspectives: feature extraction and classifier learning. Aiming to answer the question of which of the reviewed methods would perform best in a real implementation, we ended up in a discussion that showed the difficulty of answering it: no previous comparison exists in the literature, and comparisons among papers are made with different experimental frameworks. Moreover, implementing published methods is difficult, owing to the lack of detail in their descriptions and parameters and to the fact that no source code is shared. For this reason, in this paper we carry out a deep experimental study following the proposed double perspective. To do so, we have carefully implemented some of the most relevant feature extraction methods according to the explanations found in the corresponding papers, and we have tested their performance with different classifiers, including the specific proposals made by the original authors. Our aim is to develop an objective experimental study in a common framework, which has not been done before and which can serve as a baseline for future work on the topic. This way, we test not only the quality of the methods but also their reusability by other researchers, and we are able to indicate which proposals could be considered for future developments. Furthermore, we show that combining different feature extraction models in an ensemble can lead to superior performance, significantly improving on the results obtained by the individual models.
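As a rough illustration of that ensemble idea only (not the paper's implementation), each feature-extraction model can be wrapped in its own pipeline and the pipelines combined by soft voting. The extractors and dataset below are generic stand-ins; real fingerprint features (e.g., orientation maps) would take their place.

```python
# Sketch: ensemble over different feature-extraction pipelines.
# Stand-in data and extractors; not the fingerprint features of the paper.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)  # placeholder for fingerprint data

# Each pipeline = one "feature extraction model" plus its classifier.
pca_svm = Pipeline([("scale", StandardScaler()),
                    ("pca", PCA(n_components=30)),
                    ("svm", SVC(probability=True))])
raw_knn = Pipeline([("scale", StandardScaler()),
                    ("knn", KNeighborsClassifier())])

# Soft voting averages the class probabilities of the two pipelines.
ensemble = VotingClassifier([("pca_svm", pca_svm), ("raw_knn", raw_knn)],
                            voting="soft")
print("CV accuracy:", cross_val_score(ensemble, X, y, cv=5).mean())
```

The same pattern extends to any number of extractor/classifier pairs, which is the sense in which combining feature extraction models can outperform each model individually.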