956 results for ATTRIBUTE WEIGHTING
Abstract:
Existing widely known environmental assessment models, primarily those for Life Cycle Assessment of manufactured products and buildings, were reviewed to grasp their characteristics, since recent years have seen a significant increase in interest and research activity in the development of building environmental assessment methods. Each method or tool was assessed under the headings of description, data requirements, end-use, assessment criteria (scale of assessment and scoring/weighting system) and present status.
Abstract:
This document reviews international and national practices in investment decision-support tools for road asset management. Efforts were concentrated on identifying the analytic frameworks, evaluation methodologies and criteria adopted by current tools, with emphasis on how current approaches support Triple Bottom Line decision-making. Benefit Cost Analysis and Multiple Criteria Analysis are the principal methodologies supporting decision-making in road asset management, and the complexity of their applications shows significant differences in international practice. There is continuing discussion amongst practitioners and researchers regarding which is more appropriate for supporting decision-making; it is suggested that the two approaches should be regarded as complementary rather than competing. Multiple Criteria Analysis may be particularly helpful in early stages of project development, such as strategic planning, while Benefit Cost Analysis is used most widely for project prioritisation and for selecting the final project from amongst a set of alternatives. Benefit Cost Analysis is a useful tool for investment decision-making from an economic perspective, and an extension of the approach that includes social and environmental externalities is currently used to support Triple Bottom Line decision-making in the road sector. However, several issues in its application deserve attention. First, there is a need to reach a degree of commonality in considering social and environmental externalities, which may be achieved by aggregating best practices. At different decision-making levels, the externalities should be considered in different levels of detail; it is intended to develop a generic framework to coordinate the range of existing practices. A standard framework would also help reduce the double counting that appears in some current practices. Caution should also be exercised regarding the methods used to value social and environmental externalities. A number of methods, such as market price, resource costs and Willingness to Pay, were found in the review; the use of unreasonable monetisation methods in some cases has discredited Benefit Cost Analysis in the eyes of decision-makers and the public. Some social externalities, such as employment and regional economic impacts, are generally omitted in current practices due to the lack of information and credible models; it may be appropriate to consider these externalities in qualitative form within a Multiple Criteria Analysis. Consensus has been reached internationally on considering noise and air pollution, yet Australian practices generally omit these externalities. Equity is an important consideration in road asset management, whether between regions or between social groups (by income, age, gender, disability, etc.). In current practice there is no well-developed quantitative measure of equity, and more research is needed on this issue. Although Multiple Criteria Analysis has been used for decades, there is no generally accepted framework for choosing modelling methods and treating the various externalities; the result is that different analysts are unlikely to reach consistent conclusions about a policy measure. In current practice, some favour methods that can prioritise alternatives, such as Goal Programming, the Goal Achievement Matrix and the Analytic Hierarchy Process.
Others simply present the various impacts to decision-makers to characterise the projects. Weighting and scoring systems are critical in most Multiple Criteria Analyses, yet the processes of assessing weights and scores have been criticised as highly arbitrary and subjective; it is essential that the process be as transparent as possible. Obtaining weights and scores by consulting local communities is common practice, but is likely to result in bias towards local interests. Interactive approaches have the advantage of helping decision-makers elaborate their preferences, but the computational burden may cause decision-makers to lose interest during the solution process for a large-scale problem, such as a large state road network. Current practices tend to use cardinal or ordinal scales to measure non-monetised externalities, and distorted valuations can occur where variables measured in physical units are converted to such scales. For example, if decibels of noise are converted to a scale of -4 to +4 by a linear transformation, the difference between 3 and 4 represents a far greater increase in discomfort to people than the increase from 0 to 1; it is therefore suggested that different weights be assigned to individual scores. Due to overlapping goals, the problem of double counting also appears in some Multiple Criteria Analyses; the situation can be improved by carefully selecting and defining investment goals and criteria. Other issues, such as the treatment of time effects and the incorporation of risk and uncertainty, have received scant attention in current practices. This report suggests establishing a common analytic framework to deal with these issues.
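To illustrate the decibel example, the following minimal Python sketch shows how equal steps on a linearly rescaled score can hide very unequal increments in perceived loudness. The 40-80 dB range and the rule of thumb that perceived loudness roughly doubles per +10 dB are illustrative assumptions, not figures from the report:

```python
# A minimal sketch of the scale-distortion point: decibels mapped
# linearly onto a -4..+4 score, compared with perceived loudness,
# which (by a common rule of thumb) roughly doubles every +10 dB.
# The 40-80 dB range and the loudness rule are assumptions.

def linear_score(db: float, lo: float = 40.0, hi: float = 80.0) -> float:
    """Linearly rescale a decibel level onto the -4..+4 score."""
    return -4.0 + 8.0 * (db - lo) / (hi - lo)

def loudness(db: float, ref: float = 40.0) -> float:
    """Perceived loudness in arbitrary units, doubling per +10 dB."""
    return 2.0 ** ((db - ref) / 10.0)

# One score point equals 5 dB everywhere on the scale ...
for a, b in ((60.0, 65.0), (75.0, 80.0)):   # score 0 -> 1 and score 3 -> 4
    print(f"score {linear_score(a):+.0f} -> {linear_score(b):+.0f}: "
          f"loudness rises by {loudness(b) - loudness(a):.1f} units")
# ... but the 3 -> 4 step adds roughly three times the perceived
# loudness of the 0 -> 1 step, which is the distortion warned about.
```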
Abstract:
Principal Topic: High-technology consumer products such as notebooks, digital cameras and DVD players are not introduced into a vacuum. Consumer experience with related earlier-generation technologies, such as PCs, film cameras and VCRs, and the installed base of these products strongly impact the market diffusion of the new-generation products. Yet technology substitution has received only sparse attention in the diffusion-of-innovation literature. Research on consumer durables has been dominated by studies of (first-purchase) adoption (cf. Bass 1969), which do not explicitly consider the presence of an existing product/technology. More recently, considerable attention has also been given to replacement purchases (cf. Kamakura and Balasubramanian 1987). Only a handful of papers explicitly deal with the diffusion of technology/product substitutes (e.g. Norton and Bass, 1987; Bass and Bass, 2004). They propose diffusion-type aggregate-level sales models that are used to forecast the overall sales for successive generations. Lacking household data, these aggregate models are unable to give insights into the decisions of individual households: whether to adopt generation II, and if so, when and why. This paper makes two contributions. It is the first large-scale empirical study that collects household data for successive generations of technologies in an effort to understand the drivers of adoption. Second, in comparison to traditional analysis that treats technology substitution as an "adoption of innovation" type process, we propose that from a consumer's perspective technology substitution combines elements of both adoption (adopting the new-generation technology) and replacement (replacing the generation I product with generation II). Based on this proposition, we develop and test a number of hypotheses. Methodology/Key Propositions: In some cases, successive generations are clear "substitutes" for the earlier generation, in that they have almost identical functionality; for example, successive generations of PCs (Pentium I to II to III), or flat-screen TVs substituting for colour TVs. More commonly, however, the new technology (generation II) is a "partial substitute" for the existing technology (generation I). For example, digital cameras substitute for film-based cameras in the sense that they perform the same core function of taking photographs, and they add the attributes of easier copying and sharing of images; however, the attribute of image quality is inferior. In cases of partial substitution, some consumers will purchase generation II products as substitutes for their generation I product, while other consumers will purchase generation II products as additional products to be used alongside their generation I product. We propose that substitute generation II purchases combine elements of both adoption and replacement, whereas additional generation II purchases are a solely adoption-driven process. Extensive research on innovation adoption has consistently shown that consumer innovativeness is the most important consumer characteristic driving adoption timing (Goldsmith et al. 1995; Gielens and Steenkamp 2007). Hence, we expect consumer innovativeness to influence both additional and substitute generation II purchases. Hypothesis 1a) More innovative households will make additional generation II purchases earlier. 1b) More innovative households will make substitute generation II purchases earlier.
1c) Consumer innovativeness will have a stronger impact on additional generation II purchases than on substitute generation II purchases. As outlined above, substitute generation II purchases act in part like a replacement purchase for the generation I product. Prior research (Bayus 1991; Grewal et al. 2004) identified product age as the most dominant factor influencing replacements. Hence, we hypothesise: Hypothesis 2: Households with older generation I products will make substitute generation II purchases earlier. Our survey of 8,077 households investigates their adoption of two new-generation products: notebooks as a technology change from PCs, and DVD players as a technology shift from VCRs. We employ Cox hazard modelling to study the factors influencing the timing of a household's adoption of generation II products, determining whether a purchase is additional or substitute by asking whether the generation I product is still used, and conducting a separate hazard model for additional and substitute purchases. Consumer innovativeness is measured as domain innovativeness, adapted from the scales of Goldsmith and Hofacker (1991) and Flynn et al. (1996). The age of the generation I product is calculated from the most recent household purchase of that product. Control variables include age, size and income of the household, and age and education of the primary decision-maker. Results and Implications: Our preliminary results confirm our hypotheses. Consumer innovativeness has a strong influence on both additional purchases (exp = 1.11) and substitute purchases (exp = 1.09), where exp is interpreted as the increased probability of purchase for an increase of 1.0 on a 7-point innovativeness scale. Also consistent with our hypotheses, the age of the generation I product has a dramatic influence on substitute purchases of VCRs/DVD players (exp = 2.92) and a strong influence for PCs/notebooks (exp = 1.30), where exp is interpreted as the increased probability of purchase for an increase of 10 years in the age of the generation I product. Yet, as hypothesised, product age had no influence on additional purchases. The results lead to two key implications. First, there is a clear distinction between additional and substitute purchases of generation II products, each with different drivers; treating these as a single process will mask the true drivers of adoption. Second, for substitute purchases product age is a key driver, so marketers of high-technology products can utilise data on generation I product age (e.g. from warranty or loyalty programs) to target customers who are more likely to make a purchase.
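As a sketch of how such Cox proportional-hazards models could be set up, the snippet below uses the lifelines library; the file and column names are hypothetical placeholders, not the authors' data:

```python
# A minimal sketch (assumed setup, not the authors' code) of the Cox
# hazard analysis described above, using the lifelines library.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("household_survey.csv")  # hypothetical data file

# Fit separate models for additional vs. substitute generation II purchases.
for purchase_type in ("additional", "substitute"):
    sub = df[df["purchase_type"] == purchase_type]
    cph = CoxPHFitter()
    cph.fit(
        sub[["months_to_adoption", "adopted", "innovativeness",
             "gen1_age_years", "household_income", "household_size"]],
        duration_col="months_to_adoption",  # time until generation II purchase
        event_col="adopted",                # 1 if purchased, 0 if censored
    )
    # exp(coef) is the hazard ratio, e.g. ~1.11 per innovativeness scale
    # point for additional purchases in the reported results.
    print(purchase_type, cph.hazard_ratios_, sep="\n")
```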
Abstract:
In two experiments, we show how a consumer’s susceptibility to normative influence (SNI) offers useful insights into the effectiveness of two types of testimonials: a typical-person endorsement (Study 1) and a celebrity endorsement (Study 2). Specifically, we suggest that two variables moderate testimonial effects: SNI and product attribute information. Results show that in forming their evaluations, high-SNI consumers place greater emphasis on the testimonial than on the attribute information, whereas low-SNI consumers are more influenced by attribute information. A mediation analysis shows that advertising attitudes for high-SNI consumers are mediated by testimonial thoughts, whereas the attitudes of low-SNI consumers are mediated by their attribute thoughts. Theoretical and managerial implications are presented.
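A bootstrap test of the indirect effect is one standard way to run such a mediation analysis. The sketch below is illustrative only, not the authors' procedure, with hypothetical arrays x (testimonial manipulation), m (testimonial thoughts) and y (ad attitude):

```python
# A minimal sketch (illustrative, not the authors' analysis) of a
# percentile-bootstrap mediation test: does the effect of x on y run
# through the mediator m? Variable names are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def indirect_effect(x, m, y):
    """a*b indirect effect from two OLS fits: m ~ x, then y ~ x + m."""
    a = np.polyfit(x, m, 1)[0]                   # slope of mediator on x
    X = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(X, y, rcond=None)[0][2]  # slope of y on mediator
    return a * b

def bootstrap_ci(x, m, y, n_boot=5000, alpha=0.05):
    """Percentile bootstrap confidence interval for the indirect effect."""
    n, stats = len(x), []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)              # resample cases with replacement
        stats.append(indirect_effect(x[idx], m[idx], y[idx]))
    lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi  # mediation is supported if the interval excludes zero
```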
Abstract:
In this article we examine how a consumer's susceptibility to informative influence (SII) affects the effectiveness of consumer testimonials in print advertising. More specifically, we show that consumers who are high in SII, and who seek consumption-relevant information from other people, are more influenced by the strength of the testimonial information than by the strength of the attribute information. Conversely, consumers low in SII place greater emphasis on the strength of the attribute information when forming their evaluations. Our results show that consumer psychological traits can have an important impact on the acceptance of testimonial advertising. Theoretical and managerial implications of our findings are discussed.
Abstract:
In this article we examine how consumer knowledge and two aspects of email ad design (copy type and testimonial type) influence attitudes and purchase intentions. Results from a field experiment reveal differences between experts and novices in their responses to email advertising. Specifically, experts report more favorable evaluations of email advertising than novices. Experts also demonstrate a preference for expert testimonials when exposed to attribute copy, yet when benefits-only ad copy is used, experts are most influenced by novice testimonials. In contrast, novice consumers show no copy-testimonial preference. Expert testimonials are also more effective than novice testimonials for both expert and novice consumers. We discuss the results with respect to theoretical contributions and managerial implications.
Abstract:
Neurodegenerative disorders are heterogeneous in nature and include a range of ataxias with oculomotor apraxia, which are characterised by a wide variety of neurological and ophthalmological features. This family includes recessive and dominant disorders. A subfamily of autosomal recessive cerebellar ataxias is characterised by defects in the cellular response to DNA damage. These include the well-characterised disorders Ataxia-Telangiectasia (A-T) and Ataxia-Telangiectasia-Like Disorder (A-TLD), the recently identified diseases Spinocerebellar Ataxia with Axonal Neuropathy Type 1 (SCAN1) and Ataxia with Oculomotor Apraxia Type 2 (AOA2), as well as the subject of this thesis, Ataxia with Oculomotor Apraxia Type 1 (AOA1). AOA1 is caused by mutations in the APTX gene, located at chromosomal locus 9p13, which codes for the 342-amino-acid protein Aprataxin. Mutations in APTX destabilise Aprataxin, so AOA1 is the result of Aprataxin deficiency. Aprataxin has three functional domains: an N-terminal Forkhead-Associated (FHA) phosphoprotein-interaction domain, a central Histidine Triad (HIT) nucleotide hydrolase domain and a C-terminal C2H2 zinc finger. Aprataxin's FHA domain has homology to the FHA domain of the DNA repair protein 5’ polynucleotide kinase 3’ phosphatase (PNKP). PNKP interacts with a range of DNA repair proteins via its FHA domain and plays a critical role in processing damaged DNA termini. The presence of this domain together with a nucleotide hydrolase domain and a DNA-binding motif suggested that Aprataxin may be involved in DNA repair and that AOA1 may be caused by a DNA repair deficit. This was substantiated by the interaction of Aprataxin with proteins involved in the repair of both single- and double-strand DNA breaks (X-Ray Cross-Complementing 1 (XRCC1), XRCC4 and Poly-ADP Ribose Polymerase-1 (PARP-1)) and by the hypersensitivity of AOA1 patient cell lines to agents that induce single- and double-strand breaks. At the commencement of this study little was known about the in vitro and in vivo properties of Aprataxin. Initially this study focused on generating recombinant Aprataxin proteins to facilitate examination of Aprataxin's in vitro properties. Using recombinant Aprataxin proteins I found that Aprataxin binds to double-stranded DNA; consistent with a role for Aprataxin as a DNA repair enzyme, this binding is not sequence-specific. I also report that the HIT domain of Aprataxin hydrolyses adenosine derivatives and, interestingly, that this activity is competitively inhibited by DNA. This provided initial evidence that DNA binds to the HIT domain of Aprataxin, and the interaction of DNA with the nucleotide hydrolase domain suggested that Aprataxin may be a DNA-processing factor. Following these studies, Aprataxin was found to hydrolyse 5’-adenylated DNA, which can be generated by unscheduled ligation at DNA breaks with non-standard termini. I found that cell extracts from AOA1 patients have no DNA-adenylate hydrolase activity, indicating that Aprataxin is the only DNA-adenylate hydrolase in mammalian cells. I further characterised this activity by examining the contributions of the zinc finger and FHA domains to DNA-adenylate hydrolysis by the HIT domain, and found that deletion of the zinc finger ablated the activity of the HIT domain against adenylated DNA, indicating that the zinc finger may be required for the formation of a stable enzyme-substrate complex.
Deletion of the FHA domain stimulated DNA-adenylate hydrolysis, indicating that the activity of the HIT domain may be regulated by the FHA domain. Given that the FHA domain is involved in protein-protein interactions, I propose that the activity of Aprataxin's HIT domain may be regulated by proteins that interact with its FHA domain. I examined this possibility by measuring the DNA-adenylate hydrolase activity of extracts from cells deficient in the Aprataxin-interacting DNA repair proteins XRCC1 and PARP-1. XRCC1 deficiency did not affect Aprataxin activity, but I found that Aprataxin is destabilised in the absence of PARP-1, resulting in a deficiency of DNA-adenylate hydrolase activity in PARP-1 knockout cells. This implies a critical role for PARP-1 in the stabilisation of Aprataxin. Conversely, I found that PARP-1 is destabilised in the absence of Aprataxin. PARP-1 is a central player in a number of DNA repair mechanisms, implying that AOA1 cells not only lack Aprataxin but may also have defects in PARP-1-dependent cellular functions. Based on this, I identified a defect in a PARP-1-dependent DNA repair mechanism in AOA1 cells. Additionally, I identified elevated levels of oxidised DNA in AOA1 cells, which is indicative of a defect in Base Excision Repair (BER); I attribute this to the reduced level of the BER protein Apurinic Endonuclease 1 (APE1) that I identified in Aprataxin-deficient cells. This study has identified and characterised multiple DNA repair defects in AOA1 cells, indicating that Aprataxin deficiency has far-reaching cellular consequences. Consistent with the literature, I show that Aprataxin is a nuclear protein with nucleoplasmic and nucleolar distribution. Previous studies have shown that Aprataxin interacts with the nucleolar rRNA-processing factor nucleolin and that AOA1 cells appear to have a mild defect in rRNA synthesis. Given the nucleolar localisation of Aprataxin, I examined its protein-protein interactions and found that Aprataxin interacts with a number of rRNA transcription and processing factors. Based on this and its nucleolar localisation, I proposed that Aprataxin may have an alternative role in the nucleolus. I therefore examined the transcriptional activity of Aprataxin-deficient cells using nucleotide-analogue incorporation, and found that AOA1 cells do not display a defect in basal RNA synthesis but do display defective transcriptional responses to DNA damage. In summary, this thesis demonstrates that Aprataxin is a DNA repair enzyme responsible for the repair of adenylated DNA termini and that it is required for the stabilisation of at least two other DNA repair proteins. Thus AOA1 cells not only lack Aprataxin protein and activity, they also have deficiencies in PARP-1- and APE1-dependent DNA repair mechanisms. I additionally demonstrate DNA-damage-inducible transcriptional defects in AOA1 cells, indicating that Aprataxin deficiency confers a broad range of cellular defects and highlighting the complexity of the cellular response to DNA damage. This detailed characterisation of the cellular consequences of Aprataxin deficiency provides an important contribution to our understanding of interlinking DNA repair processes.
Abstract:
Research has noted a ‘pronounced pattern of increase with increasing remoteness' in death rates from road crashes. However, crash characteristics by remoteness are not commonly or consistently reported, with definitions of rural and urban often relying on proxy representations such as the prevailing speed limit. The current paper seeks to evaluate the efficacy of the Accessibility/Remoteness Index of Australia (ARIA+) for identifying trends in road crashes. ARIA+ does not rely on road-specific measures; it uses distances to populated centres to attribute a score to an area, and these scores can in turn be grouped into five classifications of increasing remoteness. The current paper applies these classifications at the broad level of the Australian Bureau of Statistics' Statistical Local Areas, thus avoiding precise crash locating or dedicated mapping software. Analyses used Queensland road crash database details for all 31,346 crashes resulting in a fatality or hospitalisation between 1 July 2001 and 30 June 2006 inclusive. Results showed that this simplified application of ARIA+ aligned with previous definitions such as speed limit, while also providing further delineation. Differences in crash contributing factors were noted with increasing remoteness, such as a greater representation of alcohol and ‘excessive speed for circumstances'. Other factors, such as the predominance of younger drivers in crashes, differed little by remoteness classification. The results are discussed in terms of the utility of remoteness as a graduated rather than binary (rural/urban) construct, and the potential for combining ARIA crash data with census and hospital datasets.
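As an illustration of the classification step, a minimal pandas sketch is given below. The five cut-offs shown are commonly cited ABS thresholds for ARIA+ categories (treat them here as assumptions), and the file and column names are hypothetical:

```python
# A minimal sketch (not the paper's code) of grouping area-level ARIA+
# scores into five remoteness classes and tabulating crash factors.
import pandas as pd

# Hypothetical file: one row per crash, with the Statistical Local
# Area's ARIA+ score already attached.
crashes = pd.read_csv("qld_crashes.csv")

# Commonly cited ABS cut-offs for the five ARIA+ remoteness classes.
bins = [0.0, 0.2, 2.4, 5.92, 10.53, 15.0]
labels = ["Major Cities", "Inner Regional", "Outer Regional",
          "Remote", "Very Remote"]
crashes["remoteness"] = pd.cut(crashes["aria_score"], bins=bins,
                               labels=labels, include_lowest=True)

# Contributing-factor representation by remoteness class, e.g. alcohol.
print(crashes.groupby("remoteness")["alcohol_involved"].mean())
```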
Abstract:
Illegal pedestrian behaviour is common and is reported as a factor in many pedestrian crashes. Since walking is being promoted for its health and environmental benefits, minimisation of its associated risks is of interest. The risk associated with illegal road crossing is unclear, and better information would assist in setting a rationale for enforcement and priorities for public education. An observation survey of pedestrian behaviour was conducted at signalised intersections in the Brisbane CBD (Queensland, Australia) on typical workdays, using behavioural categories that were identifiable in police crash reports. The survey confirmed high levels of crossing against the lights, or close enough to the lights that they should legally have been used. Measures of exposure for crossing legally, against the lights, and close to the lights were generated by weighting the observation data. Relative risk ratios were calculated for these categories using crash data from the observation sites and adjacent midblocks. Crossing against the lights and crossing close to the lights both exhibited a crash risk per crossing event approximately eight times that of legal crossing at signalised intersections. The implications of these results for enforcement and education are discussed, along with the limitations of the study.
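The relative-risk calculation reduces to comparing crashes per crossing event across behaviour categories. Below is a worked sketch with invented numbers, chosen so that both illegal categories come out near the reported factor of eight; none of these figures are the study's data:

```python
# A worked sketch (illustrative numbers only) of the relative-risk
# calculation: crash rate per crossing event for each behaviour,
# divided by the rate for legal crossings at signalised intersections.

def relative_risk(crashes: int, crossings: float,
                  crashes_ref: int, crossings_ref: float) -> float:
    """Crash risk per crossing event, relative to the reference category."""
    return (crashes / crossings) / (crashes_ref / crossings_ref)

# Hypothetical exposure-weighted counts: (crashes, estimated crossings).
legal = (40, 2_000_000)
against_lights = (16, 100_000)
close_to_lights = (8, 50_000)

for name, (c, n) in {"against lights": against_lights,
                     "close to lights": close_to_lights}.items():
    print(name, relative_risk(c, n, *legal))   # both come out at 8.0
```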
Abstract:
Context: The School of Information Technology at QUT has recently undertaken a major restructuring of its Bachelor of Information Technology (BIT) course. The aims of this restructuring include reducing first-year attrition and providing an attractive degree course that meets both student and industry expectations. Emphasis has been placed on the first semester, in the context of retaining students, by introducing a set of four units that complement one another and provide introductory material on technology, programming and related skills, and generic skills that will aid the students throughout their undergraduate course and in their careers. This discussion relates to one of these four first-semester units, Building IT Systems. The aim of this unit is to create small Information Technology (IT) systems that use programming or scripting and databases, as either standalone applications or web applications. In the prior history of teaching introductory computer programming at QUT, programming was taught as a standalone subject, and integration of computer applications with other systems such as databases and networks was not undertaken until students had been given a thorough grounding in those topics as well. Feedback has indicated that students do not believe that working with a database requires programming skills. In effect, the teaching of the building blocks of computer applications has been compartmentalised, with each block taught in isolation from the others. The teaching of introductory computer programming has been an industry requirement of IT degree courses, as many jobs require at least some knowledge of the topic. Yet computer programming is not a skill that all students have equal capabilities of learning (Bruce et al., 2004), as is clearly shown by the volume of publications dedicated to this topic in the literature over a broad period of time (Eckerdal & Berglund, 2005; Mayer, 1981; Winslow, 1996). The teaching of this introductory material has been done in much the same way for the past thirty years. Over the period that introductory computer programming courses have been taught at QUT, a number of different programming languages and programming paradigms have been used, and different approaches to teaching and learning have been attempted, in an effort to find the golden thread that would allow students to learn this complex topic. Unfortunately, computer programming is not a skill that can be learnt in one semester; some basics can be learnt, but it can take many years to master (Norvig, 2001). Faculty data has typically shown a bimodal distribution of results for students undertaking introductory programming courses, with a high proportion of students receiving a high mark and a high proportion receiving a low or failing mark. This indicates that there are students who understand and excel with the introductory material, while another group struggles to understand the concepts and practices required to translate a specification or problem statement into a computer program that achieves what is being requested. The consequence of a large group of students failing the introductory programming course has been a high level of attrition amongst first-year students. This attrition does not provide good continuity in student numbers in later years of the degree program, and the current approach is not seen as sustainable.
Abstract:
This article explores the new realities of the permissions culture and “all rights reserved” copyright in the networked environment, and poses the question: why is lending a copy of a book sharing, but emailing a PDF of it piracy? It explores new approaches to the publishing and distribution of books by highlighting two books in the Aduki Independent Press catalogue. It is modelled on a presentation delivered by Elliott Bledsoe at the Changing Climates in Arts Publishing forum run by Artlink and the Copyright Agency Limited in Adelaide, Australia on 9 May 2009 and in Sydney, Australia on 27 June 2009.
Abstract:
This PhD study examines some of what happens in an individual’s mind regarding creativity during problem solving within an organisational context. It presents innovations related to creative motivation, cognitive style and framing effects that can be applied by managers to enhance individual employee creativity within the organisation, and thereby assist organisations to become more innovative. The project delivers an understanding of how to leverage natural changes in creative motivation levels during problem solving; this pattern of response is called the Creative Resolve Response (CRR). The project also presents evidence of how framing effects can be used to influence decisions involving creative options, enhancing the potential for managers to get employees to select creative options more often for implementation. The study’s objectives are to understand:
• How creative motivation changes during problem solving
• How cognitive style moderates these creative motivation changes
• How framing effects apply to decisions involving creative options to solve problems
• How cognitive style moderates these framing effects
The thesis presents the findings from three controlled experiments based on self-reports during contrived problem-solving and decision-making situations. The first experiment suggests that creative motivation varies in a predictable and systematic way during problem solving, as a function of the problem solver’s perception of progress. The second experiment suggests that there are specific framing effects related to decisions involving creativity: simply describing an alternative as innovative may activate perceptual biases that overcome risk-based framing effects. The third experiment suggests that cognitive style moderates decisions involving creativity in complex ways; it seems that in some contexts decision-makers will prefer a creative option, regardless of their cognitive style, if this option is both outside the bounds of what is officially allowed and yet ultimately safe. The thesis delivers innovation on three levels: theoretical, methodological and empirical. The highlights of these findings are outlined below:
1. Theoretical innovation with the conceptualisation of the Creative Resolve Response, based on an extension of Amabile’s research regarding creative motivation.
2. Theoretical innovation linking creative motivation and Kirton’s research on cognitive style.
3. Theoretical innovation linking both risk-based and attribute framing effects to cognitive style.
4. Methodological innovation in defining and testing preferences for creative solution implementation, in the form of operationalised creativity decision alternatives.
5. Methodological innovation in identifying extreme decision options, applying Shafir’s findings regarding attribute framing effects in reverse to create a test.
6. Empirical innovation with statistically significant findings indicating that creative motivation varies in a systematic way.
7. Empirical innovation with statistically significant findings identifying innovation-descriptor framing effects.
8. Empirical innovation with statistically significant findings expanding understanding of Kirton’s cognitive style descriptors, including the importance of safe rule breaking.
9. Empirical innovation with statistically significant findings validating how framing effects apply to decisions involving operationalised creativity.
Drawing on previous research related to creative motivation, cognitive style, framing effects and supervisor interactions with employees, this study delivers insights which can assist managers to increase the production and implementation of creativity in organisations. Hopefully this will result in organisations which are more innovative. Such organisations have the potential to provide ongoing economic and social benefits.
Abstract:
"For every complex problem there is a solution that is simple, neat and wrong (M.L. Mencken, US writer and social commentator). Nowhere is this quote more apt than when applied to finding over-simplified solutions to the complex problem of looking after the safety and well-being of vulnerable children. The easiest formula is, of course, to ‘rescue children from dysfunctional families’, a line taken recently in the monograph by the right wing think tank, Centre for Independent Studies (Sammut & O’Brien 2009). It is reasoning with fatal flaws. This commentary provides a timely reminder of the strong arguments which lie behind the national and international shift to supporting children and families through universal and specialist community-based services, rather than weighting all resources into statutory child protection interventions. A brief outline of the value of developing the resources to support children in their families, and the problems with 'rescuing' children through the child protection system are discussed.
Abstract:
Purpose – The introduction of Building Information Modelling (BIM) tools over the last 20 years is resulting in radical changes in the Architectural, Engineering and Construction industry. One of these changes concerns the use of Virtual Prototyping, an advanced technology integrating BIM with realistic graphical simulations. Construction Virtual Prototyping (CVP) has been developed and implemented on ten real construction projects in Hong Kong in the past three years. This paper reports on a survey aimed at establishing the effects of adopting this new technology and obtaining recommendations for future development. Design/methodology/approach – A questionnaire survey was conducted in 2007 of 28 key participants involved in four major Hong Kong construction projects, chosen because the CVP approach was used in more than one stage of each project. In addition, several interviews were conducted with the project manager, planning manager and project engineer of an individual project. Findings – All the respondents and interviewees responded positively to the CVP approach, with the most useful software functions considered to be those relating to visualisation and communication. The CVP approach was thought to improve the collaboration efficiency of the main contractor and sub-contractors by approximately 30 percent, with a concomitant 30 to 50 percent reduction in meeting time. The most important benefits of CVP in the construction planning stage are improved accuracy of process planning and shorter planning times, while improved fieldwork instruction and reduced rework occur in the construction implementation stage. Although project teams are hesitant to attribute specific time savings directly to the use of CVP, it was acknowledged that the workload of project planners is decreased. Suggestions for further development of the approach include the incorporation of automatic scheduling and advanced assembly study. Originality/value – Whilst the research, development and implementation of CVP is relatively new in the construction industry, it is clear from the applications and feedback to date that the approach provides considerable added value to the organisation and management of construction projects.
Abstract:
This paper presents Scatter Difference Nuisance Attribute Projection (SD-NAP) as an enhancement to NAP for SVM-based speaker verification. While standard NAP may inadvertently remove desirable speaker variability, SD-NAP explicitly de-emphasises this variability by incorporating a weighted version of the between-class scatter into the NAP optimisation criterion. Experimental evaluation of SD-NAP with a variety of SVM systems on the 2006 and 2008 NIST SRE corpora demonstrates that SD-NAP provides improved verification performance over standard NAP in most cases, particularly at the EER operating point.
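One way to read the criterion described above is as an eigen-decomposition of a difference of scatter matrices: nuisance directions should carry high within-speaker (session) scatter while being penalised for carrying between-speaker scatter. The numpy sketch below is an interpretation under that assumption, not the paper's implementation:

```python
# A minimal numpy sketch of the scatter-difference idea as described in
# the abstract; an interpretation, not the paper's implementation.
# Directions that capture within-speaker scatter are penalised, via
# `weight`, for also capturing between-speaker (desirable) scatter.
import numpy as np

def sd_nap_directions(X, speakers, k, weight=1.0):
    """X: (n_sessions, dim) SVM supervectors; speakers: (n_sessions,) labels.

    Returns U of shape (dim, k); project a supervector x with
    x - U @ (U.T @ x). weight = 0 falls back to a plain
    within-scatter NAP variant.
    """
    dim = X.shape[1]
    Sw = np.zeros((dim, dim))   # within-speaker (nuisance) scatter
    Sb = np.zeros((dim, dim))   # between-speaker (desirable) scatter
    mu = X.mean(axis=0)
    for spk in np.unique(speakers):
        Xs = X[speakers == spk]
        d = Xs - Xs.mean(axis=0)
        Sw += d.T @ d
        diff = Xs.mean(axis=0) - mu
        Sb += len(Xs) * np.outer(diff, diff)
    # Top-k eigenvectors of the scatter difference span the nuisance subspace.
    vals, vecs = np.linalg.eigh(Sw - weight * Sb)
    return vecs[:, np.argsort(vals)[::-1][:k]]
```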