384 results for critical path methods
Abstract:
The aim of this study was to characterise and quantify the fungal fragment propagules derived and released from several fungal species (Penicillium, Aspergillus niger and Cladosporium cladosporioides) using different generation methods and different air velocities over the colonies. Real-time fungal spore fragmentation was investigated using an Ultraviolet Aerodynamic Particle Sizer (UVAPS) and a Scanning Mobility Particle Sizer (SMPS). The study showed that there were significant differences (p < 0.01) in the fragmentation percentage between different air velocities for the three generation methods, namely the direct, the fan and the fungal spore source strength tester (FSSST) methods. The percentage of fragmentation also proved to be dependent on fungal species. The study found that there was no fragmentation for any of the fungal species at an air velocity ≤ 0.4 m/s for any method of generation. Fluorescent signals, as well as mathematical determination, also showed that the fungal fragments were derived from spores. Correlation analysis showed that the number of released fragments measured by the UVAPS under controlled conditions can be predicted on the basis of the number of spores for Penicillium and Aspergillus niger, but not for Cladosporium cladosporioides. The fluorescence percentage of fragment samples was found to be significantly different from that of non-fragment samples (p < 0.0001), and the fragment sample fluorescence was always less than that of the non-fragment samples. Size distribution and concentration of fungal fragment particles were investigated qualitatively and quantitatively by both UVAPS and SMPS; it was found that the UVAPS was more sensitive than the SMPS for measuring small sample concentrations, and the results obtained from the UVAPS and SMPS were not identical for the same samples.
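The reported correlation between spore counts and released fragments can be illustrated with a simple linear fit. This is a minimal sketch only, not the authors' analysis: the paired counts below are invented, and scipy's linregress is assumed as the fitting routine.

```python
from scipy.stats import linregress

# Hypothetical paired counts from a series of generation runs:
# spores released (x) and fragments measured by the UVAPS (y).
spores    = [1200, 1500, 1800, 2400, 3000, 3600, 4200]
fragments = [  40,   55,   60,   85,  110,  130,  150]

# Fit a straight line and report the strength of the linear relationship.
fit = linregress(spores, fragments)
print(f"slope={fit.slope:.3f}, r={fit.rvalue:.2f}, p={fit.pvalue:.4f}")
```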
Abstract:
Participatory evaluation and participatory action research (PAR) are increasingly used in community-based programs and initiatives and there is a growing acknowledgement of their value. These methodologies focus more on knowledge generated and constructed through lived experience than through social science (Vanderplaat 1995). The scientific ideal of objectivity is usually rejected in favour of a holistic approach that acknowledges and takes into account the diverse perspectives, values and interpretations of participants and evaluation professionals. However, evaluation rigour need not be lost in this approach. Increasing the rigour and trustworthiness of participatory evaluations and PAR increases the likelihood that results are seen as credible and are used to continually improve programs and policies. Drawing on learnings and critical reflections about the use of feminist and participatory forms of evaluation and PAR over a 10-year period, significant sources of rigour identified include:
• participation and communication methods that develop relations of mutual trust and open communication
• using multiple theories and methodologies, multiple sources of data, and multiple methods of data collection
• ongoing meta-evaluation and critical reflection
• critically assessing the intended and unintended impacts of evaluations, using relevant theoretical models
• using rigorous data analysis and reporting processes
• participant reviews of evaluation case studies, impact assessments and reports.
Abstract:
There is a mismatch between the kinds of movements used in gesture interfaces and our existing theoretical understandings of gesture. We need to re-examine the assumptions of gesture research and develop theory more suited to gesture interface design. In addition to improved theory, we need to develop ways for participants in the process of design to adapt, extend and develop theory for their own design contexts. Gesture interface designers should approach theory as a contingent resource for design actions that is responsive to the needs of the design process.
Abstract:
The finite element and boundary element methods are employed in this study to investigate the sound radiation characteristics of a box-type structure. It has been shown [T.R. Lin, J. Pan, Vibration characteristics of a box-type structure, Journal of Vibration and Acoustics, Transactions of ASME 131 (2009) 031004-1–031004-9] that modes of natural vibration of a box-type structure can be classified into six groups according to the symmetry properties of the three panel pairs forming the box. In this paper, we demonstrate that such properties also reveal information about the sound radiation effectiveness of each group of modes. The changes of radiation efficiencies and directivity patterns with the wavenumber ratio (the ratio between the acoustic and the plate bending wavenumbers) are examined for typical modes from each group. Similar characteristics of modal radiation efficiencies between a box structure and a corresponding simply supported panel are observed. The change of sound radiation patterns as a function of the wavenumber ratio is also illustrated. It is found that the sound radiation directivity of each box mode can be correlated to that of elementary sound sources (monopole, dipole, etc.) at frequencies well below the critical frequency of the plates of the box. The sound radiation pattern on the box surface is also closely related to the vibration amplitude distribution of the box structure at frequencies above the critical frequency. In the medium frequency range, the radiated sound field is dominated by the edge vibration pattern of the box. The radiation efficiency of all box modes reaches a peak at frequencies above the critical frequency, and gradually approaches unity at higher frequencies.
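The wavenumber ratio and critical frequency used above follow standard thin-plate acoustics relations. The expressions below are textbook definitions rather than formulas quoted from the paper: c_0 is the speed of sound in air; rho, h, E and nu are the plate's density, thickness, Young's modulus and Poisson ratio; D is the bending stiffness; and gamma is the wavenumber ratio.

```latex
% Acoustic and plate bending wavenumbers, wavenumber ratio, and critical frequency
\[
k_a = \frac{\omega}{c_0}, \qquad
k_b = \left(\frac{\omega^{2}\,\rho h}{D}\right)^{1/4}, \qquad
D = \frac{E h^{3}}{12\,(1-\nu^{2})},
\]
\[
\gamma = \frac{k_a}{k_b}, \qquad
\omega_c = c_0^{2}\,\sqrt{\frac{\rho h}{D}}
\quad\text{(frequency at which } k_a = k_b \text{).}
\]
```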
Abstract:
In Australia, advertising is a $13 billion industry which needs a supply of suitably skilled employees. Over the years, advertising education has developed from vocationally based courses to degree courses across the country. This paper uses diffusion theory, various secondary sources and interviews to trace the development of advertising education in Australia from its early past to its current-day tertiary offerings, and to discuss the issues arising in the near future. Six critical issues are identified, along with observations about the challenges and opportunities within Australian advertising education. By looking back to the future, it is hoped that this historical review provides lessons for other countries with a similar educational structure or background, or even for other marketing communication disciplines on a similar evolutionary path.
Abstract:
The Chaser’s War on Everything is a night-time entertainment program which screened on Australia’s public broadcaster, the ABC, in 2006 and 2007. This enormously successful comedy show managed to generate a lot of controversy in its short lifespan (see, for example, Dennehy, 2007; Dubecki, 2007; McLean, 2007; Wright, 2007), but also drew much praise for its satirising of, and commentary on, topical issues. Through interviews with the program’s producers, qualitative audience research and textual analysis, this paper will focus on this show’s media satire, and the segment ‘What Have We Learned From Current Affairs This Week?’ in particular. Viewed as a form of ‘Critical Intertextuality’ (Gray, 2006), this segment (which offered a humorous critique of the ways in which news and current affairs are presented elsewhere on television) may equip citizens with a better understanding of the news genre’s production methods, thus producing a higher level of public media literacy. This paper argues that through its media satire, The Chaser acts not as a traditional news program would in informing the public with new information, but as a text which can inform and shape our understanding of news that already exists within the public sphere. Humorous analyses and critiques of the media (like those analysed in this paper) are in fact very important forms of infotainment, because they can provide “other, ‘improper,’ and yet more media literate and savvy interpretations” (Gray, 2006, p. 4) of the news.
Abstract:
With the advent of Service Oriented Architecture, Web Services have gained tremendous popularity. Due to the availability of a large number of Web services, finding an appropriate Web service according to the requirement of the user is a challenge. This warrants the need to establish an effective and reliable process of Web service discovery. A considerable body of research has emerged to develop methods to improve the accuracy of Web service discovery to match the best service. The process of Web service discovery results in suggesting many individual services that partially fulfil the user’s interest. Considering the semantic relationships of words used in describing the services, as well as the use of input and output parameters, can lead to accurate Web service discovery. Appropriate linking of individual matched services should fully satisfy the requirements which the user is looking for. This research proposes to integrate a semantic model and a data mining technique to enhance the accuracy of Web service discovery. A novel three-phase Web service discovery methodology has been proposed. The first phase performs match-making to find semantically similar Web services for a user query. In order to perform semantic analysis on the content present in the Web service description language document, the support-based latent semantic kernel is constructed using an innovative concept of binning and merging on a large quantity of text documents covering diverse domains of knowledge. The use of a generic latent semantic kernel constructed with a large number of terms helps to find the hidden meaning of the query terms which otherwise could not be found. Sometimes a single Web service is unable to fully satisfy the requirement of the user. In such cases, a composition of multiple inter-related Web services is presented to the user. The task of checking the possibility of linking multiple Web services is done in the second phase. Once the feasibility of linking Web services is checked, the objective is to provide the user with the best composition of Web services. In the link analysis phase, the Web services are modelled as nodes of a graph and an all-pair shortest-path algorithm is applied to find the optimum path at the minimum cost for traversal. The third phase, which is the system integration, integrates the results from the preceding two phases by using an original fusion algorithm in the fusion engine. Finally, the recommendation engine, which is an integral part of the system integration phase, makes the final recommendations, including individual and composite Web services, to the user. In order to evaluate the performance of the proposed method, extensive experimentation has been performed. Results of the proposed support-based semantic kernel method of Web service discovery are compared with the results of the standard keyword-based information-retrieval method and a clustering-based machine-learning method of Web service discovery. The proposed method outperforms both information-retrieval and machine-learning based methods. Experimental results and statistical analysis also show that the best Web service compositions are obtained by considering 10 to 15 Web services found in phase I for linking. Empirical results also ascertain that the fusion engine boosts the accuracy of Web service discovery by combining the inputs from both the semantic analysis (phase I) and the link analysis (phase II) in a systematic fashion.
Overall, the accuracy of Web service discovery with the proposed method shows a significant improvement over traditional discovery methods.
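The link-analysis phase models candidate services as graph nodes and searches for the cheapest composition path. The abstract does not name the specific all-pair shortest-path algorithm, so the sketch below uses Floyd-Warshall as one plausible choice; the service graph and link costs are invented.

```python
from math import inf

def floyd_warshall(cost):
    """All-pair shortest composition costs over a service graph.

    cost[i][j] is the assumed cost of chaining service i's output into
    service j's input; inf means the two services cannot be linked.
    """
    n = len(cost)
    dist = [row[:] for row in cost]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

# Toy example: four candidate services returned by the semantic phase.
edges = [
    [0,   1.0, inf, inf],
    [inf, 0,   2.0, 5.0],
    [inf, inf, 0,   1.5],
    [inf, inf, inf, 0],
]
print(floyd_warshall(edges))  # cheapest composition 0 -> 3 costs 4.5 via 1 and 2
```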
Abstract:
In this article I outline and demonstrate a synthesis of the methods developed by Lemke (1998) and Martin (2000) for analyzing evaluations in English. I demonstrate the synthesis using examples from a 1.3-million-word technology policy corpus drawn from institutions at the local, state, national, and supranational levels. Lemke's (1998) critical model is organized around the broad 'evaluative dimensions' that are deployed to evaluate propositions and proposals in English. Martin's (2000) model is organized with a more overtly systemic-functional orientation around the concept of 'encoded feeling'. In applying both these models at different times, whilst recognizing their individual usefulness and complementarity, I found specific limitations that led me to work towards a synthesis of the two approaches. I also argue for the need to consider genre, media, and institutional aspects more explicitly when claiming intertextual and heteroglossic relations as the basis for inferred evaluations. A basic assertion made in this article is that the perceived Desirability of a process, person, circumstance, or thing is identical to its 'value'. But the Desirability of anything is a socially, and thus historically, conditioned attribution that requires significant amounts of institutional inculcation of other 'types' of value: appropriateness, importance, beauty, power, and so on. I therefore propose a method informed by critical discourse analysis (CDA) that sees evaluation as happening on at least four interdependent levels of abstraction.
Abstract:
The availability of innumerable intelligent building (IB) products, and the current dearth of inclusive building component selection methods, suggest that decision makers might be confronted with the quandary of forming a particular combination of components to suit the needs of a specific IB project. Despite this problem, few empirical studies have so far been undertaken to analyse the selection of IB systems and to identify key selection criteria for major IB systems. This study is designed to fill these research gaps. Two surveys, a general survey and an analytic hierarchy process (AHP) survey, are proposed to achieve these objectives. The general survey aims to collect general views from IB experts and practitioners to identify the perceived critical selection criteria, while the AHP survey prioritises and assigns importance weightings to the criteria identified in the general survey. Results generally suggest that each IB system is determined by a disparate set of selection criteria with different weightings. ‘Work efficiency’ is perceived to be the most important core selection criterion for various IB systems, while ‘user comfort’, ‘safety’ and ‘cost effectiveness’ are also considered to be significant. Two sub-criteria, ‘reliability’ and ‘operating and maintenance costs’, are regarded as prime factors to be considered in selecting IB systems. The current study contributes to the industry and to IB research in at least two aspects. First, it widens the understanding of the selection criteria, as well as their degree of importance, for IB systems. It also adopts a multi-criteria AHP approach, which is a new method of analysing and selecting building systems in IB. Further research would investigate the inter-relationships amongst the selection criteria.
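The AHP survey derives criterion weights from pairwise comparisons. A minimal sketch of that step, assuming the common eigenvector method and a toy comparison matrix over the four core criteria named above, is shown below; the judgements are illustrative, not those collected in the study.

```python
import numpy as np

# Toy pairwise-comparison matrix for: work efficiency, user comfort,
# safety, cost effectiveness. A[i, j] is the judged importance of
# criterion i relative to j on Saaty's 1-9 scale (values invented).
A = np.array([
    [1,   3,   2,   4],
    [1/3, 1,   1/2, 2],
    [1/2, 2,   1,   3],
    [1/4, 1/2, 1/3, 1],
], dtype=float)

# Priority weights = normalised principal eigenvector of A.
eigvals, eigvecs = np.linalg.eig(A)
principal = eigvecs[:, eigvals.real.argmax()].real
weights = principal / principal.sum()

# Saaty's consistency check: CR < 0.10 is usually considered acceptable.
n = A.shape[0]
lambda_max = eigvals.real.max()
ci = (lambda_max - n) / (n - 1)
ri = 0.90  # random index for n = 4
cr = ci / ri

print("weights:", weights.round(3), "consistency ratio:", round(cr, 3))
```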
Negotiating multiple identities between school and the outside world: A critical discourse analysis
Abstract:
This article examines interview talk of three students in an Australian high school to show how they negotiate their young adult identities between school and the outside world. It draws on Bakhtin’s concepts of dialogism and heteroglossia to argue that identities are linguistically and corporeally constituted. A critical discourse analysis of segments of transcribed interviews and student-related public documents finds a mismatch between a social justice curriculum at school and its transfer into students’ accounts of outside school lived realities. The article concludes that a productive social justice pedagogy must use its key principles of (con)textual interrogation to engage students in reflexive practice about their positioning within and against discourses of social justice in their student and civic lives. An impending national curriculum must decide whether or not it negotiates the discursive divide any better.
Abstract:
Aim: In the current climate of medical education, there is an ever-increasing demand for and emphasis on simulation as both a teaching and training tool. The objective of our study was to compare the realism and practicality of a number of artificial blood products that could be used for high-fidelity simulation. Method: A literature and internet search was performed and 15 artificial blood products were identified from a variety of sources. One product was excluded due to its potential toxicity risks. Five observers, blinded to the products, performed two assessments on each product using an evaluation tool with 14 predefined criteria including color, consistency, clotting, and staining potential to manikin skin and clothing. Each criterion was rated using a five-point Likert scale. The products were left for 24 hours, both refrigerated and at room temperature, and then reassessed. Statistical analysis was performed to identify the most suitable products, and both inter- and intra-rater variability were examined. Results: Three products scored consistently well with all five assessors, with one product in particular scoring well in almost every criterion. This highest-rated product had a mean rating of 3.6 out of 5.0 (95% posterior interval 3.4-3.7). Inter-rater variability was minor, with average ratings varying from 3.0 to 3.4 between the lowest and highest scorers. Intra-rater variability was negligible, with good agreement between the first and second ratings as per the weighted kappa score (K = 0.67). Conclusion: The most realistic and practical form of artificial blood identified was a commercial product called KD151 Flowing Blood Syrup. It was found to be not only realistic in appearance but practical in terms of storage and stain removal.
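Intra-rater agreement of the kind reported above can be computed with a weighted kappa as follows. The ratings are invented and the linear weighting scheme is an assumption, since the abstract does not state which weights were used.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical first and second ratings by one assessor on a 1-5 Likert scale.
first  = [4, 5, 3, 2, 4, 5, 1, 3, 4, 2]
second = [4, 4, 3, 2, 5, 5, 2, 3, 4, 2]

# 'linear' weights penalise disagreements in proportion to their distance
# on the ordinal scale; 'quadratic' is the other common choice.
kappa = cohen_kappa_score(first, second, weights="linear")
print(round(kappa, 2))
```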
Abstract:
A plethora of methods for procuring building projects are available to meet the needs of clients. Deciding which method to use for a given project is a difficult and challenging task, as a client’s objectives and priorities need to marry with the selected method so as to improve the likelihood of the project being procured successfully. The decision as to which procurement system to use should be made as early as possible and underpinned by the client’s business case for the project. The risks, and how they can potentially affect the client’s business, should also be considered. In this report, the need for clients to develop a procurement strategy, which outlines the key means by which the objectives of the project are to be achieved, is emphasised. Once a client has established a business case for a project, appointed a principal advisor, and determined their requirements and brief, consideration should be given to which procurement method to adopt. An understanding of the characteristics of various procurement options is required before a recommendation can be made to a client. Procurement systems can be categorised as traditional, design and construct, management, and collaborative. The characteristics of these systems, along with the procurement methods commonly used, are described. The main advantages and disadvantages, and the circumstances under which a system could be considered applicable for a given project, are also identified.
Abstract:
This document provides a review of international and national practices in investment decision support tools for road asset management. Efforts were concentrated on identifying the analytic frameworks, evaluation methodologies and criteria adopted by current tools. Emphasis was also given to how current approaches support Triple Bottom Line decision-making. Benefit Cost Analysis and Multiple Criteria Analysis are the principal methodologies supporting decision-making in road asset management. The complexity of the applications shows significant differences in international practices. There is continuing discussion amongst practitioners and researchers regarding which one is more appropriate for supporting decision-making. It is suggested that the two approaches should be regarded as complementary rather than competitive means. Multiple Criteria Analysis may be particularly helpful in the early stages of project development, such as strategic planning. Benefit Cost Analysis is used most widely for project prioritisation and for selecting the final project from amongst a set of alternatives. The Benefit Cost Analysis approach is a useful tool for investment decision-making from an economic perspective. An extension of the approach, which includes social and environmental externalities, is currently used to support Triple Bottom Line decision-making in the road sector. However, attention should be given to several issues in its application. First of all, there is a need to reach a degree of commonality in considering social and environmental externalities, which may be achieved by aggregating the best practices. At different decision-making levels, the detail with which the externalities are considered should differ. It is intended to develop a generic framework to coordinate the range of existing practices. The standard framework will also be helpful in reducing double counting, which appears in some current practices. Caution should also be exercised regarding the methods of determining the value of social and environmental externalities. A number of methods, such as market price, resource costs and willingness to pay, are found in the review. The use of unreasonable monetisation methods in some cases has discredited Benefit Cost Analysis in the eyes of decision makers and the public. Some social externalities, such as employment and regional economic impacts, are generally omitted in current practices. This is due to the lack of information and credible models. It may be appropriate to consider these externalities in qualitative form in a Multiple Criteria Analysis. Consensus has been reached on considering noise and air pollution in international practices; however, Australian practices generally omit these externalities. Equity is an important consideration in road asset management. The considerations are either between regions or between social groups defined by income, age, gender, disability, etc. In current practice, there is no well-developed quantitative measure for equity issues, and more research is needed to target this issue. Although Multiple Criteria Analysis has been used for decades, there is no generally accepted framework for the choice of modelling methods and the treatment of various externalities. The result is that different analysts are unlikely to reach consistent conclusions about a policy measure. In current practices, some favour using methods which are able to prioritise alternatives, such as Goal Programming, the Goal Achievement Matrix and the Analytic Hierarchy Process.
Others simply present various impacts to decision-makers to characterise the projects. Weighting and scoring systems are critical in most Multiple Criteria Analyses; however, the processes of assigning weights and scores have been criticised as highly arbitrary and subjective. It is essential that the process be as transparent as possible. Obtaining weights and scores by consulting local communities is a common practice, but is likely to result in bias towards local interests. An interactive approach has the advantage of helping decision-makers elaborate their preferences; however, the computational burden may cause decision-makers to lose interest during the solution process for a large-scale problem, such as a large state road network. Current practices tend to use cardinal or ordinal scales to measure non-monetised externalities. Distorted valuations can occur where variables measured in physical units are converted to scales. For example, if decibels of noise are converted to a scale of -4 to +4 with a linear transformation, the difference between 3 and 4 represents a far greater increase in discomfort to people than the increase from 0 to 1. It is therefore suggested that different weights be assigned to individual scores. Due to overlapping goals, the problem of double counting also appears in some Multiple Criteria Analyses. The situation can be improved by carefully selecting and defining investment goals and criteria. Other issues, such as the treatment of time effects and the incorporation of risk and uncertainty, have been given scant attention in current practices. This report suggests establishing a common analytic framework to deal with these issues.
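To make the weighting-and-scoring discussion concrete, the sketch below pairs a simple weighted-score Multiple Criteria Analysis with a Benefit Cost Ratio. The criteria, weights, scores and monetary values are all hypothetical and not drawn from any of the practices reviewed.

```python
# Illustrative only: a minimal weighted-score Multiple Criteria Analysis
# alongside a Benefit Cost Ratio, for two hypothetical road projects.
criteria_weights = {"safety": 0.4, "travel_time": 0.3, "noise": 0.2, "equity": 0.1}

# Scores already normalised to a common 0-10 scale for each criterion.
projects = {
    "Upgrade A": {"safety": 8, "travel_time": 6, "noise": 4, "equity": 5},
    "Upgrade B": {"safety": 5, "travel_time": 9, "noise": 7, "equity": 6},
}

def weighted_score(scores, weights):
    # Sum of criterion scores multiplied by their weights.
    return sum(weights[c] * scores[c] for c in weights)

def benefit_cost_ratio(pv_benefits, pv_costs):
    # Ratio of present values of monetised benefits and costs.
    return pv_benefits / pv_costs

for name, scores in projects.items():
    print(name, "MCA score:", round(weighted_score(scores, criteria_weights), 2))

print("BCR:", round(benefit_cost_ratio(pv_benefits=14.2e6, pv_costs=9.5e6), 2))
```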
Abstract:
Principal Topic: A small firm is unlikely to possess internally the full range of knowledge and skills that it requires, or could benefit from, for the development of its business. The ability to acquire suitable external expertise - defined as knowledge or competence that is rare in the firm and acquired from the outside - when needed thus becomes a competitive factor in itself. Access to external expertise enables the firm to focus on its core competencies and removes the necessity to internalize every skill and competence. However, research on how small firms access external expertise is still scarce. The present study contributes to this under-developed discussion by analysing the role of trust and strong ties in the small firm's selection and evaluation of sources of external expertise (henceforth referred to as the 'business advisor' or 'advisor'). Granovetter (1973, 1361) defines the strength of a network tie as 'a (probably linear) combination of the amount of time, the emotional intensity, the intimacy (mutual confiding) and the reciprocal services which characterize the tie'. Strong ties in the context of the present investigation refer to sources of external expertise who are well known to the owner-manager, and who may be either informal (e.g., family, friends) or professional advisors (e.g., consultants, enterprise support officers, accountants or solicitors). Previous research has suggested that strong and weak ties have different fortes, and the choice of business advisors could thus be critical to business performance. While previous research results suggest that small businesses favour previously well-known business advisors, prior studies have also pointed out that an excessive reliance on a network of well-known actors might hamper business development, as the range of expertise available through strong ties is limited. But are owner-managers of small businesses aware of this limitation, and does it matter to them? Or does working with a well-known advisor compensate for it? Hence, our research model first examines the impact of the strength of tie on the business advisor's perceived performance. Next, we ask what encourages a small business owner-manager to seek advice from a strong tie. A recent exploratory study by Welter and Kautonen (2005) drew attention to the central role of trust in this context. However, while their study found support for the general proposition that trust plays an important role in the choice of advisors, how trust and its different dimensions actually affect this choice remained ambiguous. The present paper develops this discussion by considering the impact of the different dimensions of perceived trustworthiness, defined as benevolence, integrity and ability, on the strength of tie. Further, we suggest that the dimensions of perceived trustworthiness relevant to the choice of a strong tie vary between professional and informal advisors.
Methodology/Key Propositions: Our propositions are examined empirically based on survey data comprising 153 Finnish small businesses. The data are analysed utilizing the partial least squares (PLS) approach to structural equation modelling with SmartPLS 2.0. Being non-parametric, the PLS algorithm is particularly well suited to analysing small datasets with non-normally distributed variables.
Results and Implications: The path model shows that the stronger the tie, the more positively the advisor's performance is perceived.
Hypothesis 1, that strong ties are associated with higher perceptions of performance, is clearly supported. Benevolence is clearly the most significant predictor of the choice of a strong tie for external expertise. While ability also reaches a moderate level of statistical significance, integrity does not have a statistically significant impact on the choice of a strong tie. Hence, we found support for two out of the three independent variables included in Hypothesis 2. Path coefficients differed between the professional and informal advisor subsamples. The results of the exploratory group comparison show that Hypothesis 3a, which posited that ability would be more pronouncedly associated with strong ties when choosing a professional advisor, was not supported. Hypothesis 3b, arguing that benevolence is more strongly associated with strong ties in the context of choosing an informal advisor, received some support, because the path coefficient in the informal advisor subsample was much larger than in the professional advisor subsample. Hypothesis 3c, postulating that integrity would be more strongly associated with strong ties in the choice of a professional advisor, was supported: integrity is the most important dimension of trustworthiness in this context. However, integrity is of no concern, or is even a negative consideration, when strong ties are used to choose an informal advisor. The findings of this study have practical relevance to the enterprise support community. First of all, given that the strength of tie has a significant positive impact on the advisor's perceived performance, small business owners evidently appreciate working with advisors in long-term relationships. Therefore, advisors are well advised to invest in relationship building and maintenance in their work with small firms. Secondly, the results show that, especially in the context of professional advisors, the advisor's perceived integrity and benevolence weigh more than ability. This again emphasizes the need to invest time and effort in building a personal relationship with the owner-manager, rather than merely maintaining a professional image and credentials. Finally, this study demonstrates that the dimensions of perceived trustworthiness are orthogonal, with different effects on the strength of tie and, ultimately, perceived performance. This means that entrepreneurs and advisors should consider the specific dimensions of ability, benevolence and integrity, rather than relying on general perceptions of trustworthiness in their advice relationships.
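The study estimates its path model with PLS in SmartPLS 2.0. As a rough, much-simplified stand-in (ordinary least squares on standardised synthetic data, not the PLS algorithm itself), the sketch below shows how path coefficients of this kind are read; the variable names follow the abstract, but the data and effect sizes are invented.

```python
import numpy as np

# Synthetic data: trust dimensions -> strength of tie -> perceived performance.
# Sample size mirrors the study (153); everything else is made up.
rng = np.random.default_rng(0)
n = 153

benevolence = rng.normal(size=n)
ability     = rng.normal(size=n)
integrity   = rng.normal(size=n)
tie_strength = 0.5 * benevolence + 0.2 * ability + rng.normal(scale=0.8, size=n)
performance  = 0.4 * tie_strength + rng.normal(scale=0.9, size=n)

def standardise(x):
    return (x - x.mean()) / x.std()

# Structural path: trust dimensions -> strength of tie (standardised OLS coefficients).
X = np.column_stack([standardise(v) for v in (benevolence, integrity, ability)])
y = standardise(tie_strength)
paths, *_ = np.linalg.lstsq(X, y, rcond=None)
print(dict(zip(["benevolence", "integrity", "ability"], paths.round(2))))

# Structural path: strength of tie -> perceived performance.
beta, *_ = np.linalg.lstsq(standardise(tie_strength)[:, None],
                           standardise(performance), rcond=None)
print("tie -> performance:", round(float(beta[0]), 2))
```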