437 results for STRUCTURAL ASPECTS


Relevance: 20.00%

Abstract:

This paper examines the use of address terms by counsellors on a telephone counselling service for children and young people. Drawing on conversation analytic findings and methods, we show how personal names are used in the management of structural and interpersonal aspects of counselling interaction. Focusing on address terms in turn-beginnings - where a name is used as, or as part of, a preface - the analysis shows that address terms are used in turns that are not fitted with prior talk in terms of either the activity or affective stance of the client. We discuss two environments in which this practice is observed: in turns that initiate a new action sequence, and in turns that challenge the client's position. Our focus is on the use of client names in the context of producing disaligning or disaffiliative actions. In disaligning actions, counsellors produced sequentially disjunctive turns that regularly involved a return to a counselling agenda. In disaffiliative actions, counsellors presented a stance that did not fit with the affective stance of the client in the prior turn, for instance, in disagreeing with or complimenting the client. The paper discusses how such turns invoke a counselling agenda and how names are used in the management of rapport and trust in counselling interaction.

Relevance: 20.00%

Abstract:

Experience underlies all kinds of human knowledge and is dependent on context. People's experience within a particular context-of-use determines how they interact with products. Methods employed in this research to elicit human experience have included the use of visuals. This paper describes two empirical studies that employed visual representations of concepts as a means to explore the experiential and contextual components of user-product interactions. One study employed visuals that the participants produced during the study. The other employed visuals that the researcher used as prompts during a focus group session. This paper demonstrates that using visuals in design research is valuable for exploring and understanding the contextual aspects of human experience and its influence on people's concepts of product use.

Relevance: 20.00%

Abstract:

Relational governance arrangements across agencies and sectors have become prevalent as a means for government to become more responsive and effective in addressing complex, large scale or 'wicked' problems. The primary characteristic of such 'collaborative' arrangements is the utilisation of the joint capacities of multiple organisations to achieve collaborative advantage, which Huxham (1993) defines as the attainment of creative outcomes that are beyond the ability of single agencies to achieve. Attaining collaborative advantage requires organisations to develop collaborative capabilities that prepare them for collaborative practice (Huxham, 1993b). Further, collaborations require considerable investment of staff effort that could potentially be used beneficially elsewhere by both the government and non-government organisations involved (Keast and Mandell, 2010). Collaborative arrangements to deliver services therefore require a reconsideration of the way in which resources, including human resources, are conceptualised and deployed, as well as changes to both the structure of public service agencies and the systems and processes by which they operate (Keast, forthcoming). A main aim of academic research and theorising has been to explore and define the requisite characteristics to achieve collaborative advantage. Such research has tended to focus on definitional, structural (Turrini, Cristofoli, Frosini, & Nasi, 2009) and organisational (Huxham, 1993) aspects and less on the roles government plays within cross-organisational or cross-sectoral arrangements. Ferlie and Steane (2002) note that there has been a general trend towards management-led reforms of public agencies, including the HRM practices utilised. Such trends have been significantly influenced by New Public Management (NPM) ideology, with limited consideration of the implications for HRM practice in collaborative, rather than market, contexts. Utilising case study data on a suite of collaborative efforts in Queensland, Australia, collected over a decade, this paper presents an examination of the network roles government agencies undertake. Implications for HRM in public sector agencies working within networked arrangements are drawn, with particular attention to job design, recruitment, deployment and staff development. The paper also makes theoretical advances in our understanding of Strategic Human Resource Management (SHRM) in network settings. While networks form part of the strategic armoury of government, networks operate to achieve collaborative advantage. SHRM, with its focus on competitive advantage, is argued to be appropriate in market situations but is not an ideal conceptualisation in network situations. Commencing with an overview of the literature on networks and network effectiveness, the paper presents the case studies and methodology, provides findings from the case studies on the roles government plays in achieving collaborative advantage, and presents implications for HRM practice. Implications for SHRM are considered.

Relevance: 20.00%

Abstract:

The chapter approaches resilience from an evolutionary psychology perspective. In recent years, scientific studies have revealed many of the biological processes associated with resilient behaviour. The authors argue that the internal constitution and mental toughness of the individual will provide a core protection against life's inevitable tests. A nurtured developing brain in utero and a physically close dyadic relationship in the early years of life are crucial to the formation of a resilient personality. Many descriptors of the construct of resilience presented in various studies are explored in this chapter.

Relevance: 20.00%

Abstract:

Customer perceived value is concerned with the experiences of consumers when using a service and is often referred to in the context of service provision or on the basis of service quality (Auh et al., 2007; Chang, 2008; Jackson, 2007; Laukkanen, 2007; Padgett & Mulvey, 2007; Shamdasani, Mukherjee & Malhotra, 2008). Understanding customer perceived value has benefits for social marketing and allows scholars and practitioners alike to identify why consumers engage in positive social behaviours through the use of services. Understanding consumers' use of wellness services in particular is important, because the use of wellness services demonstrates the fulfilment of social marketing aims: performing pro-active, positive social behaviours that are of benefit to the individual and to society (Andreasen, 1994). As consumers typically act out of self-interest (Rothschild, 1999), this research posits that a value proposition must be made to consumers in order to encourage behavioural change. Thus, this research seeks to identify how value is created for consumers of wellness services in social marketing. This results in the overall research question: How is value created in social marketing wellness services? A traditional method of understanding value has been the adoption of an economic approach, which considers the utility gained, with value being a direct outcome of a cost-benefit analysis (Payne & Holt, 1999). However, there has since been a shift towards the adoption of an experiential approach in understanding value. This experiential approach considers the consumption experience of the consumer, which extends beyond the service exchange and includes pre- and post-consumption stages (Russell-Bennett, Previte & Zainuddin, 2009). As such, this research uses an experiential approach to identify the value that exists in social marketing wellness services. Four dimensions of value have been commonly conceptualised and identified in the commercial marketing literature: functional, emotional, social, and altruistic value (Holbrook, 1994; Sheth, Newman & Gross, 1991; Sweeney & Soutar, 2001). It is not known if these value dimensions also exist in social marketing. In addition, sources of value said to influence value dimensions have been conceptualised in the literature. Sources of value such as information, interaction, environment, service, customer co-creation, and social mandate have been conceptually identified in both the commercial and social marketing literature (Russell-Bennett, Previte & Zainuddin, 2009; Smith & Colgate, 2007). However, it is not clear which sources of value contribute to the creation of value for users of wellness services. Thus, this research seeks to explore these relationships. This research was conducted in a wellness service context, specifically breast cancer screening services. The primary target consumers of these services are women aged 50 to 69 years (inclusive) who have never been diagnosed with breast cancer. It is recommended that women in this target group have a breast screen every two years in order to achieve the most effective medical outcomes from screening. A two-study, mixed-method approach was utilised. Study 1 was a qualitative exploratory study that analysed individual depth interviews with 25 information-rich respondents. The interviews were transcribed verbatim and analysed using NVivo 8 software. The qualitative results provided evidence of the existence of the four value dimensions in social marketing.
The results also allowed for the development of a typology of experiential value by synthesising current understanding of the value dimensions with the activity aspects of experiential value identified by Holbrook (1994) and Mathwick, Malhotra and Rigdon (2001). The qualitative results also provided evidence for the existence of sources of value in social marketing, namely information, interaction, environment and consumer participation. In particular, a categorisation of sources of value was developed as a result of the findings from Study 1, which identifies organisational, consumer, and third-party sources of value. A proposed model of value co-creation and a set of hypotheses were developed based on the results of Study 1 for further testing in Study 2. Study 2 was a large-scale quantitative confirmatory study that sought to test the proposed model of value co-creation and the hypotheses developed. An online survey was administered Australia-wide to women in the target audience. A response rate of 20.1% was achieved, resulting in a final sample of 797 useable responses after removing ineligible respondents. Reliability and validity analyses were conducted on the data, followed by Exploratory Factor Analysis (EFA) in PASW18 and Confirmatory Factor Analysis (CFA) in AMOS18. Following these preliminary analyses, the data were subjected to Structural Equation Modelling (SEM) in AMOS18 to test the path relationships hypothesised in the proposed model of value creation. The SEM output revealed that all hypotheses were supported, with the exception of one relationship, which was non-significant. In addition, post hoc tests revealed seven further significant non-hypothesised relationships in the model. The quantitative results show that organisational sources of value as well as consumer participation sources of value influence both the functional and emotional dimensions of value. The experience of both functional and emotional value in wellness services leads to satisfaction with the experience, followed by behavioural intentions to perform the behaviour and use the service again. One of the significant non-hypothesised relationships revealed that emotional value leads to functional value in wellness services, providing further empirical evidence that emotional value features more prominently than functional value for users of wellness services. This research offers several contributions to theory and practice. Theoretically, this research addresses a gap in the literature by using social marketing theory to provide an alternative method of understanding individual behaviour in a domain that has been predominantly investigated in public health. This research also clarifies the concept of value and offers empirical evidence to show that value is a multi-dimensional construct with separate and distinct dimensions. Empirical evidence for a typology of experiential value, as well as a categorisation of sources of value, is also provided. In its practical contributions, this research identifies a framework for the value creation process and offers health service organisations a diagnostic tool to identify aspects of the service process that facilitate value creation.
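The EFA step described above can be illustrated with a small sketch. The thesis used PASW18 and AMOS18; the Python code below is only an analogue of the exploratory factor analysis stage, run on simulated Likert-style items, and the item and dimension structure is hypothetical.

```python
# Illustrative only: the thesis used PASW18 (EFA) and AMOS18 (CFA/SEM).
# This sketch mimics the EFA step on simulated survey data; items and
# dimensions are hypothetical.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 797  # final sample size reported in the study

# Simulate four latent value dimensions and three items per dimension.
latent = rng.normal(size=(n, 4))                       # functional, emotional, social, altruistic
loading_pattern = np.kron(np.eye(4), np.ones((1, 3)))  # each item loads on one factor
items = latent @ loading_pattern + 0.5 * rng.normal(size=(n, 12))

fa = FactorAnalysis(n_components=4, rotation="varimax")
fa.fit(items)

# Rotated loadings: rows = items, columns = extracted factors.
print(np.round(fa.components_.T, 2))
```

A clean four-block loading pattern in the printed matrix is what would correspond to the four separate value dimensions reported above.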

Relevance: 20.00%

Abstract:

Proteases regulate a spectrum of diverse physiological processes, and dysregulation of proteolytic activity drives a plethora of pathological conditions. Understanding protease function is essential to appreciating many aspects of normal physiology and progression of disease. Consequently, development of potent and specific inhibitors of proteolytic enzymes is vital to provide tools for the dissection of protease function in biological systems and for the treatment of diseases linked to aberrant proteolytic activity. The studies in this thesis describe the rational design of potent inhibitors of three proteases that are implicated in disease development. Additionally, key features of the interaction of proteases and their cognate inhibitors or substrates are analysed and a series of rational inhibitor design principles are expounded and tested. Rational design of protease inhibitors relies on a comprehensive understanding of protease structure and biochemistry. Analysis of known protease cleavage sites in proteins and peptides is a commonly used source of such information. However, model peptide substrate and protein sequences have widely differing levels of backbone constraint and hence can adopt highly divergent structures when binding to a protease’s active site. This may result in identical sequences in peptides and proteins having different conformations and diverse spatial distribution of amino acid functionalities. Regardless of this, protein and peptide cleavage sites are often regarded as being equivalent. One of the key findings in the following studies is a definitive demonstration of the lack of equivalence between these two classes of substrate and invalidation of the common practice of using the sequences of model peptide substrates to predict cleavage of proteins in vivo. Another important feature for protease substrate recognition is subsite cooperativity. This type of cooperativity is commonly referred to as protease or substrate binding subsite cooperativity and is distinct from allosteric cooperativity, where binding of a molecule distant from the protease active site affects the binding affinity of a substrate. Subsite cooperativity may be intramolecular where neighbouring residues in substrates are interacting, affecting the scissile bond’s susceptibility to protease cleavage. Subsite cooperativity can also be intermolecular where a particular residue’s contribution to binding affinity changes depending on the identity of neighbouring amino acids. Although numerous studies have identified subsite cooperativity effects, these findings are frequently ignored in investigations probing subsite selectivity by screening against diverse combinatorial libraries of peptides (positional scanning synthetic combinatorial library; PS-SCL). This strategy for determining cleavage specificity relies on the averaged rates of hydrolysis for an uncharacterised ensemble of peptide sequences, as opposed to the defined rate of hydrolysis of a known specific substrate. Further, since PS-SCL screens probe the preference of the various protease subsites independently, this method is inherently unable to detect subsite cooperativity. However, mean hydrolysis rates from PS-SCL screens are often interpreted as being comparable to those produced by single peptide cleavages. 
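The point about PS-SCL averaging can be made concrete with a toy calculation: when one residue only performs well in combination with a specific partner (intermolecular cooperativity), positional averages can point to a different, inferior sequence. The rates below are invented for illustration and are not from the screens described in the thesis.

```python
# Toy illustration (made-up rates, two subsites only) of why averaging over
# positional mixtures, as in a PS-SCL screen, can hide subsite cooperativity
# that screening individually synthesised peptides (an SML) would reveal.
import numpy as np

p2_residues = ["A", "B", "C"]
p1_residues = ["x", "y", "z"]

# Hypothetical hydrolysis rates for each fully defined P2-P1 pair.
# Residue "B" is a cooperative specialist: excellent with "x", useless otherwise.
rates = np.array([[1.0, 1.0, 1.0],   # A
                  [2.8, 0.0, 0.0],   # B
                  [0.9, 0.9, 0.9]])  # C

# PS-SCL-style readout: average each residue over a mixture at the other subsite.
p2_profile = rates.mean(axis=1)
p1_profile = rates.mean(axis=0)
pred = (p2_residues[int(p2_profile.argmax())], p1_residues[int(p1_profile.argmax())])

# SML-style readout: rank every individual sequence.
best = np.unravel_index(rates.argmax(), rates.shape)

print("PS-SCL-style prediction:", pred,
      "rate =", rates[p2_residues.index(pred[0]), p1_residues.index(pred[1])])
print("True best substrate:   ",
      (p2_residues[best[0]], p1_residues[best[1]]), "rate =", rates[best])
```

Here the positional averages favour the generalist pair A-x (rate 1.0), while the best individually tested substrate is B-x (rate 2.8), mirroring the argument that averaged mixture rates cannot reveal cooperative optima.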
Before this study, no large systematic evaluation had been made to determine the level of correlation between protease selectivity as predicted by screening against a library of combinatorial peptides and cleavage of individual peptides. This subject is specifically explored in the studies described here. In order to establish whether PS-SCL screens could accurately determine the substrate preferences of proteases, a systematic comparison of data from PS-SCLs with libraries containing individually synthesised peptides (sparse matrix library; SML) was carried out. These SML libraries were designed to include all possible sequence combinations of the residues that were suggested to be preferred by a protease using the PS-SCL method. SML screening against the three serine proteases kallikrein 4 (KLK4), kallikrein 14 (KLK14) and plasmin revealed highly preferred peptide substrates that could not have been deduced by PS-SCL screening alone. Comparing protease subsite preference profiles from screens of the two types of peptide libraries showed that the most preferred substrates were not detected by PS-SCL screening, as a consequence of intermolecular cooperativity being negated by the very nature of PS-SCL screening. Sequences that are highly favoured as a result of intermolecular cooperativity achieve optimal protease subsite occupancy, and thereby interact with very specific determinants of the protease. Identifying these substrate sequences is important since they may be used to produce potent and selective inhibitors of proteolytic enzymes. This study found that highly favoured substrate sequences that relied on intermolecular cooperativity allowed for the production of potent inhibitors of KLK4, KLK14 and plasmin. Peptide aldehydes based on preferred plasmin sequences produced high-affinity transition state analogue inhibitors for this protease. The most potent of these maintained specificity over plasma kallikrein (known to have a very similar substrate preference to plasmin). Furthermore, the efficiency of this inhibitor in blocking fibrinolysis in vitro was comparable to aprotinin, which previously saw clinical use to reduce perioperative bleeding. One substrate sequence particularly favoured by KLK4 was substituted into the 14-amino-acid circular sunflower trypsin inhibitor (SFTI). This resulted in a highly potent and selective inhibitor (SFTI-FCQR) which attenuated protease-activated receptor signalling by KLK4 in vitro. Moreover, SFTI-FCQR and paclitaxel synergistically reduced growth of ovarian cancer cells in vitro, making this inhibitor a lead compound for further therapeutic development. Similar incorporation of a preferred KLK14 amino acid sequence into the SFTI scaffold produced a potent inhibitor for this protease. However, the conformationally constrained SFTI backbone enforced a different intramolecular cooperativity, which masked a KLK14-specific determinant. As a consequence, the level of selectivity achievable was lower than that found for the KLK4 inhibitor. Standard mechanism inhibitors such as SFTI rely on a stable acyl-enzyme intermediate for high-affinity binding. This is achieved by a conformationally constrained canonical binding loop that allows for reformation of the scissile peptide bond after cleavage. Amino acid substitutions within the inhibitor to target a particular protease may compromise structural determinants that support the rigidity of the binding loop and thereby prevent the engineered inhibitor reaching its full potential.
An in silico analysis was carried out to examine the potential for further improvements to the potency and selectivity of the SFTI-based KLK4 and KLK14 inhibitors. Molecular dynamics simulations suggested that the substitutions within SFTI required to target KLK4 and KLK14 had compromised the intramolecular hydrogen bond network of the inhibitor and caused a concomitant loss of binding loop stability. Furthermore, in silico amino acid substitution revealed a consistent correlation between a higher frequency of formation of internal hydrogen bonds in SFTI variants and lower inhibition constants. These predictions allowed for the production of second-generation inhibitors with enhanced binding affinity toward both targets and highlight the importance of considering intramolecular cooperativity effects when engineering proteins or circular peptides to target proteases. The findings from this study show that although PS-SCLs are a useful tool for high-throughput screening of approximate protease preference, later refinement by SML screening is needed to reveal optimal subsite occupancy due to cooperativity in substrate recognition. This investigation has also demonstrated the importance of maintaining structural determinants of backbone constraint and conformation when engineering standard mechanism inhibitors for new targets. Combined, these results show that backbone conformation and amino acid cooperativity have more prominent roles than previously appreciated in determining substrate/inhibitor specificity and binding affinity. The three key inhibitors designed during this investigation are now being developed as lead compounds for cancer chemotherapy, control of fibrinolysis and cosmeceutical applications. These compounds form the basis of a portfolio of intellectual property which will be further developed in the coming years.

Relevance: 20.00%

Abstract:

This research is one of several ongoing studies conducted within the IT Professional Services (ITPS) research programme at Queensland University of Technology (QUT). In 2003, ITPS introduced the IS-Impact model, a measurement model for measuring information systems success from the viewpoint of multiple stakeholders. The model, along with its instrument, is robust, simple, yet generalisable, and yields results that are comparable across time, stakeholders, different systems and system contexts. The IS-Impact model is defined as "a measure at a point in time, of the stream of net benefits from the Information System (IS), to date and anticipated, as perceived by all key-user-groups". The model comprises four dimensions: 'Individual Impact', 'Organizational Impact', 'Information Quality' and 'System Quality'. The two Impact dimensions measure the up-to-date impact of the evaluated system, while the remaining two Quality dimensions act as proxies for probable future impacts (Gable, Sedera & Chan, 2008). To fulfil the goal of ITPS, "to develop the most widely employed model", this research re-validates and extends the IS-Impact model in a new context. This method/context-extension research aims to test the generalisability of the model by addressing known limitations of the model. One of these limitations relates to the extent of the model's external validity. In order to gain wide acceptance, a model should be consistent and work well in different contexts. The IS-Impact model, however, was only validated in the Australian context, and packaged software was chosen as the IS under study. Thus, this study is concerned with whether the model can be applied in a different context. Aiming for a robust and standardised measurement model that can be used across different contexts, this research re-validates and extends the IS-Impact model and its instrument to public sector organisations in Malaysia. The overarching research question (managerial question) of this research is "How can public sector organisations in Malaysia measure the impact of information systems systematically and effectively?" With two main objectives, the managerial question is broken down into two specific research questions. The first research question addresses the applicability (relevance) of the dimensions and measures of the IS-Impact model in the Malaysian context, as well as the completeness of the model in the new context. Initially, this research assumes that the dimensions and measures of the IS-Impact model are sufficient for the new context. However, some IS researchers suggest that the selection of measures needs to be done purposely for different contextual settings (DeLone & McLean, 1992; Rai, Lang & Welker, 2002). Thus, the first research question is as follows: "Is the IS-Impact model complete for measuring the impact of IS in Malaysian public sector organisations?" [RQ1]. The IS-Impact model is a multidimensional model that consists of four dimensions or constructs. Each dimension is represented by formative measures or indicators. Formative measures are known as composite variables because these measures make up or form the construct, or, in this case, the dimension in the IS-Impact model. These formative measures define different aspects of the dimension; thus, a measurement model of this kind needs to be tested not just on the structural relationships between the constructs but also on the validity of each measure.
In a previous study, the IS-Impact model was validated using formative validation techniques, as proposed in the literature (e.g., Diamantopoulos & Winklhofer, 2001; Diamantopoulos & Siguaw, 2006; Petter, Straub & Rai, 2007). However, there is potential for improving the validation testing of the model by adding more criterion or dependent variables. This includes identifying a consequence of the IS-Impact construct for the purpose of validation. Moreover, a different approach is employed in this research, whereby the validity of the model is tested using the Partial Least Squares (PLS) method, a component-based structural equation modelling (SEM) technique. Thus, the second research question addresses the construct validation of the IS-Impact model: "Is the IS-Impact model valid as a multidimensional formative construct?" [RQ2]. This study employs two rounds of surveys, each with a different and specific aim. The first is qualitative and exploratory, aiming to investigate the applicability and sufficiency of the IS-Impact dimensions and measures in the new context. This survey was conducted in a state government in Malaysia. A total of 77 valid responses were received, yielding 278 impact statements. The results from the qualitative analysis demonstrate the applicability of most of the IS-Impact measures. The analysis also shows that a significant new measure emerged from the context; this new measure was added as one of the System Quality measures. The second survey is a quantitative survey that aims to operationalise the measures identified from the qualitative analysis and rigorously validate the model. This survey was conducted in four state governments (including the state government that was involved in the first survey). A total of 254 valid responses were used in the data analysis. Data were analysed using structural equation modelling techniques, following the guidelines for formative construct validation, to test the validity and reliability of the constructs in the model. This study is the first research that extends the complete IS-Impact model to a new context that differs in terms of nationality, language and the type of information system (IS). The main contribution of this research is to present a comprehensive, up-to-date IS-Impact model which has been validated in the new context. The study has accomplished its purpose of testing the generalisability of the IS-Impact model and continuing IS evaluation research by extending it to the Malaysian context. A further contribution is a validated Malaysian language IS-Impact measurement instrument. It is hoped that the validated Malaysian IS-Impact instrument will encourage related IS research in Malaysia, and that the demonstrated model validity and generalisability will encourage a cumulative tradition of research previously not possible.
The study entailed several methodological improvements on prior work, including: (1) new criterion measures for the overall IS-Impact construct, employed in 'identification through measurement relations'; (2) a stronger, multi-item 'Satisfaction' construct, employed in 'identification through structural relations'; (3) an alternative version of the main survey instrument in which items are randomised (rather than blocked) for comparison with the main survey data, in attention to possible common method variance (no significant differences between these two survey instruments were observed); (4) a demonstrated validation process for formative indexes of a multidimensional, second-order construct (existing examples mostly involve unidimensional constructs); (5) tests for the presence of suppressor effects that influence the significance of some measures and dimensions in the model; and (6) a demonstration of the effect of an imbalanced number of measures within a construct on the contribution power of each dimension in a multidimensional model.
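As a rough illustration of two of the formative-validation ideas mentioned above (external criterion variables and indicator checks), the sketch below regresses a simulated criterion item on simulated formative indicators and computes variance inflation factors. The study itself used PLS-based SEM; the variable names and data here are placeholders.

```python
# Illustrative sketch only, not the study's PLS analysis. It shows, on simulated
# data, two routine checks for formative indicators discussed in that literature:
# indicator weights against an external criterion, and collinearity (VIF).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(1)
n = 254  # quantitative survey sample size reported in the study

# Simulated formative indicators (e.g., System Quality items) and a criterion
# (e.g., an overall satisfaction item) partly driven by them.
X = pd.DataFrame(rng.normal(size=(n, 4)), columns=["sq1", "sq2", "sq3", "sq4"])
criterion = 0.5 * X["sq1"] + 0.3 * X["sq2"] + 0.2 * X["sq4"] + rng.normal(scale=0.7, size=n)

Xc = sm.add_constant(X)
weights = sm.OLS(criterion, Xc).fit()
print(weights.summary())  # significant weights support indicator relevance

# Collinearity check: formative indicators should not be redundant (VIF well below ~10).
for i, col in enumerate(X.columns):
    print(col, round(variance_inflation_factor(X.values, i), 2))
```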

Relevance: 20.00%

Abstract:

This paper illustrates the damage identification and condition assessment of a three-storey bookshelf structure using a new frequency response function (FRF) based damage index and Artificial Neural Networks (ANNs). A major obstacle to using measured frequency response function data is the large number of input variables to the ANNs. This problem is overcome by applying a data reduction technique called principal component analysis (PCA). In the proposed procedure, ANNs, with their powerful pattern recognition and classification ability, are used to extract damage information, such as damage locations and severities, from measured FRFs. Simple neural network models are therefore developed and trained by back propagation (BP) to associate the FRFs with the damaged or undamaged locations and the severity of damage in the structure. Finally, the effectiveness of the proposed method is illustrated and validated using real data provided by the Los Alamos National Laboratory, USA. The results show that the PCA-based artificial neural network method is suitable and effective for damage identification and condition assessment of building structures. In addition, it is clearly demonstrated that the accuracy of the proposed damage detection method can be improved by increasing the number of baseline datasets and the number of principal components of the baseline dataset.
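A minimal sketch of the overall PCA-plus-ANN pipeline is given below. It is not the authors' implementation: the FRFs are crudely simulated, the damage classes are placeholders, and scikit-learn's backpropagation-trained MLP stands in for the networks used in the paper.

```python
# Sketch: compress long FRF-like vectors with PCA, then classify damage location
# with a small neural network. All data, classes and settings are placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_freq = 400                 # points per measured FRF (placeholder)
classes = [0, 1, 2, 3]       # 0 = undamaged, 1-3 = damage at storey 1-3 (placeholder)

def synthetic_frf(damage_class, n_samples):
    """Crude stand-in for measured FRFs: a baseline curve shifted per damage class plus noise."""
    freqs = np.linspace(0, 1, n_freq)
    base = np.sin(2 * np.pi * (3 + 0.2 * damage_class) * freqs)
    return base + 0.1 * rng.normal(size=(n_samples, n_freq))

X = np.vstack([synthetic_frf(c, 50) for c in classes])
y = np.repeat(classes, 50)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

pca = PCA(n_components=10)   # data reduction step: FRFs -> a few principal components
ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
ann.fit(pca.fit_transform(X_train), y_train)

print("test accuracy:", ann.score(pca.transform(X_test), y_test))
```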

Relevance: 20.00%

Abstract:

Damage detection in structures has become increasingly important in recent years. While a number of damage detection and localisation methods have been proposed, very few attempts have been made to explore structural damage using noise-polluted data, even though noise is an unavoidable effect in the real world. Measurement data are contaminated by noise from the test environment as well as from electronic devices, and this noise tends to produce erroneous results from structural damage identification methods. It is therefore important to investigate methods that can perform better with noise-polluted data. This paper introduces a new damage index using principal component analysis (PCA) for damage detection of building structures that is able to accept noise-polluted frequency response functions (FRFs) as input. The FRF data are obtained from the MATLAB function datagen, which is available on the website of the IASC-ASCE (International Association for Structural Control - American Society of Civil Engineers) Structural Health Monitoring (SHM) Task Group. The proposed method involves a five-stage process: calculation of FRFs, calculation of damage index values using the proposed algorithm, development of the artificial neural networks, introduction of the damage indices as input parameters, and damage detection of the structure. This paper briefly describes the methodology and the results obtained in detecting damage in all six cases of the benchmark study with different noise levels. The proposed method is applied to a benchmark problem sponsored by the IASC-ASCE Task Group on Structural Health Monitoring, which was developed in order to facilitate the comparison of various damage identification methods. The results show that the PCA-based algorithm is effective for structural health monitoring with noise-polluted FRFs, which are of common occurrence when dealing with industrial structures.
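The paper's exact damage index is not reproduced here, but the following sketch shows one common PCA-residual style of index on simulated, noise-polluted FRFs, to illustrate the general flow: baseline FRFs define a principal subspace, and the reconstruction error of a new FRF in that subspace serves as the index. All data and parameters are placeholders.

```python
# Hedged illustration of a PCA-based damage index on noisy FRFs; this is a
# generic residual-of-reconstruction variant, not the algorithm from the paper.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(7)
freqs = np.linspace(0, 1, 300)

def frf(stiffness_factor, noise_level, n_samples):
    """Toy FRF whose peak shifts when stiffness is reduced by damage, plus measurement noise."""
    base = 1.0 / np.abs((4.0 * stiffness_factor) - (freqs * 6.0) ** 2 + 0.3j)
    return base + noise_level * rng.normal(size=(n_samples, freqs.size))

baseline = frf(1.00, noise_level=0.02, n_samples=60)   # healthy structure
damaged = frf(0.90, noise_level=0.02, n_samples=20)    # 10% stiffness loss (placeholder)
healthy_new = frf(1.00, noise_level=0.02, n_samples=20)

pca = PCA(n_components=5).fit(baseline)

def damage_index(frfs):
    recon = pca.inverse_transform(pca.transform(frfs))
    return np.linalg.norm(frfs - recon, axis=1)         # residual outside the baseline subspace

print("healthy index (mean):", damage_index(healthy_new).mean())
print("damaged index (mean):", damage_index(damaged).mean())
```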

Relevance: 20.00%

Abstract:

Much has been written about the Internet's potential to enhance international market growth opportunities for SMEs. However, the literature is vague as to how Internet usage and the application of Internet marketing (also known as Internet marketing intensity) affect firms' international market growth. This paper examines the level and role of the Internet in the international operations of a sample of 218 Australian SMEs with international customers. The study shows evidence of a statistical relationship between Internet usage and Internet marketing intensity, which in turn leads to international market growth in terms of increased sales from new customers in new countries, new customers in existing countries, and existing customers.
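Purely as an illustration of the kind of chain described (usage relates to intensity, which relates to growth), the sketch below fits two ordinary least squares regressions on simulated composite scores. It does not reproduce the study's measures or analysis.

```python
# Illustrative only: a simple regression chain on simulated composite scores.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 218  # sample size reported in the paper

usage = rng.normal(size=n)
intensity = 0.6 * usage + rng.normal(scale=0.8, size=n)   # usage -> marketing intensity
growth = 0.5 * intensity + rng.normal(scale=0.8, size=n)  # intensity -> market growth
df = pd.DataFrame({"usage": usage, "intensity": intensity, "growth": growth})

# Step 1: usage predicts Internet marketing intensity.
print(smf.ols("intensity ~ usage", df).fit().params)
# Step 2: intensity predicts growth, controlling for usage (indirect-effect logic).
print(smf.ols("growth ~ intensity + usage", df).fit().params)
```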

Relevance: 20.00%

Abstract:

The 31st TTRA conference was held in California's San Fernando Valley, home of Hollywood and Burbank's movie and television studios. The twin themes of Hollywood and the new Millennium promised and delivered "something old, yet something new". The meeting offered a historical summary, not only of the year in review but also of many features of travel research since the first literature in the field appeared in the 1970s. The millennium theme also set the scene for some stimulating and forward-thinking discussions. The Hollywood location offered an opportunity to ponder the value of movie-induced tourism for Los Angeles, at a time when Hollywood Boulevard was in the midst of a much-needed redevelopment programme. Hollywood Chamber of Commerce speaker Oscar Arslanian acknowledged that the face of the famous district had become tired, and that its ability to continue to attract visitors in the future lay in redeveloping its past heritage. In line with the Hollywood theme, a feature of the conference was a series of six special sessions with "Stars of Travel Research". These sessions featured Clare Gunn, Stanley Plog, Charles Goeldner, John Hunt, Brent Ritchie, Geoffrey Crouch, Peter Williams, Douglas Frechtling, Turgut Var, Robert Christie-Mill, and John Crotts. Delegates were indeed privileged to hear from many of the pioneers of tourism research. Clare Gunn, Charles Goeldner, Turgut Var and Stanley Plog, for example, traced the history of different aspects of the tourism literature and, in line with the millennium theme, offered some thought-provoking discussion on the future challenges facing tourism. These included the commoditisation of airlines and destinations, airport and traffic congestion, environmental sustainability and responsibility, and the looming burst of the baby-boomer bubble. Included in the conference proceedings are four papers presented by five of the "Stars". Brent Ritchie and Geoffrey Crouch discuss the critical success factors for destinations, Clare Gunn shares his concerns about tourism being a smokestack industry, Doug Frechtling provides forecasts of outbound travel from 20 countries, and Charles Goeldner, who has attended all 31 TTRA conferences, reflects on the changes that have taken place in tourism research over 35 years...

Relevance: 20.00%

Abstract:

This paper describes a formulation for the free vibration of joined conical-cylindrical shells with uniform thickness using the transfer influence coefficient method, for the identification of structural characteristics. These characteristics are important for developing models for structural health monitoring. The method is based on the successive transmission of dynamic influence coefficients, which are defined as the relationships between the displacement and force vectors at arbitrary nodal circles of the system. The two edges of the shell, which may have arbitrary boundary conditions, are supported by several elastic springs with meridional/axial, circumferential, radial and rotational stiffnesses, respectively. The governing equations of vibration of a conical shell, including a cylindrical shell as a special case, are written as a coupled set of first-order differential equations using the transfer matrix of the shell. Once the transfer matrix of a single component has been determined, the entire structure matrix is obtained as the product of each component matrix and the joining matrix. The natural frequencies and modes of vibration are calculated numerically for joined conical-cylindrical shells. The validity of the present method is demonstrated through simple numerical examples and through comparison with the results of previous researchers.
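The transfer matrix idea, multiplying component matrices to obtain the overall system and then searching for frequencies that satisfy the boundary conditions, can be shown on a much simpler analogue than a shell: an axially vibrating rod built from two joined uniform segments. The properties below are placeholders, and the shell equations themselves are not reproduced.

```python
# Simple analogue of the transfer matrix method: axial vibration of a rod made
# of two joined segments. Each segment has a transfer matrix linking the state
# vector (displacement, axial force) at its two ends; the overall matrix is the
# product, and natural frequencies are roots of the boundary-condition term.
import numpy as np
from scipy.optimize import brentq

E, rho = 200e9, 7850.0                    # steel-like properties (placeholders)
segments = [(1.0, 4e-4), (0.8, 2e-4)]     # (length m, cross-section m^2) per component
c = np.sqrt(E / rho)                      # axial wave speed

def segment_matrix(omega, L, A):
    k = omega / c
    return np.array([[np.cos(k * L),              np.sin(k * L) / (E * A * k)],
                     [-E * A * k * np.sin(k * L), np.cos(k * L)]])

def overall_matrix(omega):
    T = np.eye(2)
    for L, A in segments:                 # chain the component matrices, as in the joined-shell method
        T = segment_matrix(omega, L, A) @ T
    return T

def frequency_equation(omega):
    # Fixed at one end (u = 0), free at the other (N = 0): root of T[1, 1].
    return overall_matrix(omega)[1, 1]

# Scan for sign changes, then refine each root.
omegas = np.linspace(10.0, 40000.0, 4000)
vals = [frequency_equation(w) for w in omegas]
for w1, w2, v1, v2 in zip(omegas[:-1], omegas[1:], vals[:-1], vals[1:]):
    if v1 * v2 < 0:
        w = brentq(frequency_equation, w1, w2)
        print(f"natural frequency ~ {w / (2 * np.pi):.1f} Hz")
```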

Relevance: 20.00%

Abstract:

As civil infrastructure such as bridges ages, there is concern for safety and a need for cost-effective and reliable monitoring tools. Different diagnostic techniques are available nowadays for structural health monitoring (SHM) of bridges. Acoustic emission is one such technique, with the potential to predict failure. The phenomenon of rapid release of energy within a material by crack initiation or growth, in the form of stress waves, is known as acoustic emission (AE). The AE technique involves recording the stress waves by means of sensors and subsequently analysing the recorded signals, which convey information about the nature of the source. AE can be used as a local SHM technique to monitor specific regions with a visible presence of cracks or crack-prone areas, such as welded regions and joints with bolted connections, or as a global technique to monitor the whole structure. The strength of the AE technique lies in its ability to detect active cracks, thus helping to prioritise maintenance work by focusing on active rather than dormant cracks. In spite of being a promising tool, some challenges still stand in the way of the successful application of the AE technique. One is the generation of large amounts of data during testing; hence effective data analysis and management are necessary, especially for long-term monitoring uses. Complications also arise because a number of spurious sources can give AE signals; therefore, different source discrimination strategies are necessary to distinguish genuine signals from spurious ones. Another major challenge is the quantification of the damage level by appropriate analysis of the data. Intensity analysis using severity and historic indices, as well as b-value analysis, are important methods; they are discussed and applied to the analysis of laboratory experimental data in this paper.
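A hedged sketch of the b-value analysis mentioned above is given below, using one common formulation in which the b-value is the negative slope of log10(cumulative hit count) against amplitude in dB divided by 20. The hit amplitudes are simulated, not laboratory data.

```python
# Illustrative b-value estimate from simulated AE hit amplitudes.
import numpy as np

rng = np.random.default_rng(5)

# Simulated AE hit amplitudes in dB (exponentially distributed tail above a 40 dB threshold).
amplitudes_db = 40.0 + rng.exponential(scale=8.0, size=2000)

thresholds = np.arange(40.0, 80.0, 2.0)
cum_counts = np.array([(amplitudes_db >= t).sum() for t in thresholds])

mask = cum_counts > 0
slope, intercept = np.polyfit(thresholds[mask] / 20.0, np.log10(cum_counts[mask]), 1)
b_value = -slope

print(f"estimated b-value: {b_value:.2f}")
# Falling b-values over successive windows of hits are commonly read as macro-crack growth.
```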

Relevance: 20.00%

Abstract:

Airports represent the epitome of complex systems, with multiple stakeholders, multiple jurisdictions and complex interactions between many actors. The large number of existing models that capture different aspects of the airport is a testament to this. However, these existing models do not systematically consider modelling requirements, nor how stakeholders such as airport operators or airlines would make use of these models. This can detrimentally impact the verification and validation of models and makes the development of extensible and reusable modelling tools difficult. This paper develops, from the Concept of Operations (CONOPS) framework, a methodology to help structure the review and development of modelling capabilities and usage scenarios. The method is applied to a review of existing airport terminal passenger models. It is found that existing models can be broadly categorised according to four usage scenarios: capacity planning, operational planning and design, security policy and planning, and airport performance review. The models, the performance metrics that they evaluate and their usage scenarios are discussed. It is found that capacity and operational planning models predominantly focus on performance metrics such as waiting time, service time and congestion, whereas performance review models attempt to link these to passenger satisfaction outcomes. Security policy models, on the other hand, focus on probabilistic risk assessment. However, there is an emerging focus on the need to capture trade-offs between multiple criteria, such as security and processing time. Based on the CONOPS framework and the literature findings, guidance is provided for the development of future airport terminal models.

Relevance: 20.00%

Abstract:

In recent times, light gauge steel framed (LSF) structures, such as cold-formed steel wall systems, are increasingly used, but without a full understanding of their fire performance. Traditionally the fire resistance rating of these load-bearing LSF wall systems is based on approximate prescriptive methods developed based on limited fire tests. Very often they are limited to standard wall configurations used by the industry. Increased fire rating is provided simply by adding more plasterboards to these walls. This is not an acceptable situation as it not only inhibits innovation and structural and cost efficiencies but also casts doubt over the fire safety of these wall systems. Hence a detailed fire research study into the performance of LSF wall systems was undertaken using full scale fire tests and extensive numerical studies. A new composite wall panel developed at QUT was also considered in this study, where the insulation was used externally between the plasterboards on both sides of the steel wall frame instead of locating it in the cavity. Three full scale fire tests of LSF wall systems built using the new composite panel system were undertaken at a higher load ratio using a gas furnace designed to deliver heat in accordance with the standard time temperature curve in AS 1530.4 (SA, 2005). Fire tests included the measurements of load-deformation characteristics of LSF walls until failure as well as associated time-temperature measurements across the thickness and along the length of all the specimens. Tests of LSF walls under axial compression load have shown the improvement to their fire performance and fire resistance rating when the new composite panel was used. Hence this research recommends the use of the new composite panel system for cold-formed LSF walls. The numerical study was undertaken using a finite element program ABAQUS. The finite element analyses were conducted under both steady state and transient state conditions using the measured hot and cold flange temperature distributions from the fire tests. The elevated temperature reduction factors for mechanical properties were based on the equations proposed by Dolamune Kankanamge and Mahendran (2011). These finite element models were first validated by comparing their results with experimental test results from this study and Kolarkar (2010). The developed finite element models were able to predict the failure times within 5 minutes. The validated model was then used in a detailed numerical study into the strength of cold-formed thin-walled steel channels used in both the conventional and the new composite panel systems to increase the understanding of their behaviour under nonuniform elevated temperature conditions and to develop fire design rules. The measured time-temperature distributions obtained from the fire tests were used. Since the fire tests showed that the plasterboards provided sufficient lateral restraint until the failure of LSF wall panels, this assumption was also used in the analyses and was further validated by comparison with experimental results. Hence in this study of LSF wall studs, only the flexural buckling about the major axis and local buckling were considered. A new fire design method was proposed using AS/NZS 4600 (SA, 2005), NAS (AISI, 2007) and Eurocode 3 Part 1.3 (ECS, 2006). The importance of considering thermal bowing, magnified thermal bowing and neutral axis shift in the fire design was also investigated. 
A spreadsheet-based design tool was developed based on the above design codes to predict the failure load ratio versus time and temperature for varying LSF wall configurations, including insulations. Idealised time-temperature profiles were developed based on the measured temperature values of the studs. These were used in a detailed numerical study to fully understand the structural behaviour of LSF wall panels. Appropriate equations were proposed to find the critical temperatures for different composite panels, varying in steel thickness, steel grade and screw spacing, for any load ratio. Hence useful and simple design rules were proposed based on the current cold-formed steel structures and fire design standards, and their accuracy and advantages were discussed. The results were also used to validate the fire design rules developed based on AS/NZS 4600 (SA, 2005) and Eurocode 3 Part 1.3 (ECS, 2006). This demonstrated the significant improvements of the design method when compared to the currently used prescriptive design methods for LSF wall systems under fire conditions. In summary, this research has developed comprehensive experimental and numerical thermal and structural performance data for both the conventional and the proposed new load-bearing LSF wall systems under standard fire conditions. Finite element models were developed to predict the failure times of LSF walls accurately. Idealised hot flange temperature profiles were developed for non-insulated, cavity-insulated and externally insulated load-bearing wall systems. Suitable fire design rules and spreadsheet-based design tools were developed based on the existing standards to predict the ultimate failure load, failure times and failure temperatures of LSF wall studs. Simplified equations were proposed to find the critical temperatures for varying wall panel configurations and load ratios. The results from this research are useful to both structural and fire engineers and researchers. Most importantly, this research has significantly improved the knowledge and understanding of cold-formed LSF load-bearing walls under standard fire conditions.
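To indicate what the core of such a design tool might look like, the sketch below finds, for a given load ratio, the critical temperature at which an assumed strength reduction curve drops to that load ratio. The reduction factors are invented placeholders, not the equations of Dolamune Kankanamge and Mahendran (2011) or the design rules proposed in the thesis.

```python
# Hedged sketch of a critical-temperature lookup for a given load ratio.
# Reduction factors below are placeholders, not code-based or thesis values.
import numpy as np

# Placeholder yield-strength reduction factors vs temperature (degrees C).
temperatures = np.array([20, 100, 200, 300, 400, 500, 600, 700])
reduction_factor = np.array([1.00, 0.97, 0.90, 0.78, 0.62, 0.42, 0.22, 0.10])

def critical_temperature(load_ratio):
    """Temperature at which the capacity ratio drops to the applied load ratio."""
    # reduction_factor decreases with temperature, so interpolate on the reversed arrays.
    return float(np.interp(load_ratio, reduction_factor[::-1], temperatures[::-1]))

for lr in (0.2, 0.4, 0.6):
    print(f"load ratio {lr:.1f} -> critical temperature ~ {critical_temperature(lr):.0f} C")
```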