890 results for Applied behaviour analysis
Abstract:
Measuring the business value that Internet technologies deliver for organisations has proven to be a difficult and elusive task, given their complexity and increased embeddedness within the value chain. Yet, despite the lack of empirical evidence that links the adoption of Information Technology (IT) with increased financial performance, many organisations continue to adopt new technologies at a rapid rate. This is evident in the widespread adoption of Web 2.0 online Social Networking Services (SNSs) such as Facebook, Twitter and YouTube. These new Internet-based technologies, widely used for social purposes, are being employed by organisations to enhance their business communication processes. However, their use is yet to be correlated with an increase in business performance. Owing to the conflicting empirical evidence linking prior IT applications with increased business performance, IT, Information Systems (IS), and E-Business Model (EBM) research has increasingly looked to broader social and environmental factors as a means of examining and understanding the broader influences shaping IT, IS and E-Business (EB) adoption behaviour. Findings from these studies suggest that organisations adopt new technologies as a result of strong external pressures, rather than a clear measure of enhanced business value. In order to ascertain whether this is the case with the adoption of SNSs, this study explores how organisations are creating value (and measuring that value) with the use of SNSs for business purposes, and the external pressures influencing their adoption. In doing so, it seeks to address two research questions: 1. What are the external pressures influencing organisations to adopt SNSs for business communication purposes? 2. Are SNSs providing increased business value for organisations, and if so, how is that value being captured and measured?
Informed by the background literature fields of IT, IS, EBM, and Web 2.0, a three-tiered theoretical framework is developed that combines macro-societal, social and technological perspectives as possible causal mechanisms influencing the SNS adoption event. The macro-societal view draws on Castells' (1996) concept of the network society and the behaviour of crowds, herds and swarms to formulate a new explanatory concept of the network vortex. The social perspective draws on key components of institutional theory (DiMaggio & Powell, 1983, 1991), and the technical view draws from the organising vision concept developed by Swanson and Ramiller (1997). The study takes a critical realist approach, and conducts four stages of data collection and one stage of data coding and analysis. Stage 1 consisted of content analysis of the websites and SNSs of many organisations, to identify the types of business purposes for which SNSs are being used. Stage 2 also involved content analysis of organisational websites, in order to identify suitable sample organisations in which to conduct telephone interviews. Stage 3 consisted of 18 in-depth, semi-structured telephone interviews within eight Australian organisations from the Media/Publishing and Galleries, Libraries, Archives and Museums (GLAM) industries. These sample organisations were considered leaders in the use of SNS technologies. Stage 4 involved an SNS activity count of the organisations interviewed in Stage 3, in order to rate them as either Advanced Innovator (AI) organisations or Learning Focussed (LF) organisations. A fifth stage of data coding and analysis of all four data collection stages was conducted, based on the theoretical framework developed for the study and using QSR NVivo 8 software. The findings from this study reveal that SNSs have been adopted by organisations for the purpose of increasing business value, and as a result of strong social and macro-societal pressures.
SNSs offer organisations a wide range of value-enhancing opportunities that have broader benefits for customers and society. However, measuring the increased business value is difficult with traditional Return On Investment (ROI) mechanisms, highlighting the need for new value capture and measurement rationales to support the accountability of SNS adoption practices. The study also identified the presence of technical, social and macro-societal pressures, all of which influenced SNS adoption by organisations. These findings contribute important theoretical insight into the increased complexity of pressures influencing technology adoption rationales by organisations, and have important implications for practice, by reflecting the expanded global online networks in which organisations now operate. The limitations of the study include the small number of sample organisations in which interviews were conducted, its limited generalisability, and the small range of SNSs selected for the study. However, these were compensated for in part by the expertise of the interviewees and the global significance of the SNSs that were chosen. Future research could replicate the study with a larger sample from different industries, sectors and countries. It could also explore the life cycle of SNSs in a longitudinal study, and map how the technical, social and macro-societal pressures are emphasised through stages of the life cycle. The theoretical framework could also be applied to other social fad technology adoption studies.
Abstract:
This article presents a two-stage analytical framework that integrates ecological crop (animal) growth and economic frontier production models to analyse the productive efficiency of crop (animal) production systems. The ecological crop (animal) growth model estimates "potential" output levels given the genetic characteristics of crops (animals) and the physical conditions of locations where the crops (animals) are grown (reared). The economic frontier production model estimates "best practice" production levels, taking into account economic, institutional and social factors that cause farm and spatial heterogeneity. In the first stage, both ecological crop growth and economic frontier production models are estimated to calculate three measures of productive efficiency: (1) technical efficiency, as the ratio of actual to "best practice" output levels; (2) agronomic efficiency, as the ratio of actual to "potential" output levels; and (3) agro-economic efficiency, as the ratio of "best practice" to "potential" output levels. Also in the first stage, the economic frontier production model identifies factors that determine technical efficiency. In the second stage, agro-economic efficiency is analysed econometrically in relation to economic, institutional and social factors that cause farm and spatial heterogeneity. The proposed framework has several important advantages in comparison with existing proposals. Firstly, it allows the systematic incorporation of all physical, economic, institutional and social factors that cause farm and spatial heterogeneity in analysing the productive performance of crop and animal production systems. Secondly, the location-specific physical factors are not modelled symmetrically as other economic inputs of production. Thirdly, climate change and technological advancements in crop and animal sciences can be modelled in a "forward-looking" manner. 
Fourthly, knowledge in agronomy and data from experimental studies can be utilised for socio-economic policy analysis. The proposed framework can be easily applied in empirical studies due to the current availability of ecological crop (animal) growth models, farm or secondary data, and econometric software packages. The article highlights several directions for empirical studies that researchers may pursue in the future.
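The three efficiency measures are linked by a simple identity: agronomic efficiency is actual over "potential" output, which factors as (actual / "best practice") × ("best practice" / "potential"), i.e., technical efficiency times agro-economic efficiency. A minimal sketch with hypothetical yields (the numbers are illustrative, not from the article):

```python
# Hypothetical yields (t/ha) for one farm; figures are illustrative only.
actual, best_practice, potential = 4.2, 5.6, 7.0

technical = actual / best_practice         # actual vs "best practice"
agronomic = actual / potential             # actual vs "potential"
agro_economic = best_practice / potential  # "best practice" vs "potential"

# The three measures are linked: agronomic = technical * agro-economic.
print(abs(agronomic - technical * agro_economic) < 1e-12)  # True
```

The identity makes the second-stage analysis coherent: any shortfall of actual output from the ecological potential decomposes into a farm-management component and an economic-institutional component.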
Abstract:
An automatic approach to road lane marking extraction from high-resolution aerial images is proposed, which detects the road surfaces in rural areas based on hierarchical image analysis. The procedure is facilitated by the road centrelines obtained from low-resolution images. The lane markings are then extracted on the generated road surfaces with 2D Gabor filters. The proposed method is applied to aerial images of the Bruce Highway around Gympie, Queensland. Evaluation of the generated road surfaces and lane markings using four representative test fields has validated the proposed method.
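A 2D Gabor filter is an oriented band-pass filter (a Gaussian envelope modulating a sinusoidal carrier), so a bank of them at different orientations responds strongly to elongated bright structures such as lane markings. A minimal sketch of the idea on a synthetic stripe (the kernel parameters and single-point response are illustrative simplifications, not the paper's implementation):

```python
import numpy as np

def gabor_kernel(size=15, sigma=3.0, theta=0.0, lam=8.0, gamma=0.5):
    """Real part of a 2D Gabor kernel: a Gaussian envelope
    modulating a cosine carrier oriented at angle theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)      # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / lam)
    return envelope * carrier

# A vertical bright stripe on a dark background, like a lane marking.
img = np.zeros((31, 31))
img[:, 14:17] = 1.0

def filter_response(img, kernel):
    """Magnitude of the correlation at the image centre only
    (a single-point response keeps the example dependency-free)."""
    kh, kw = kernel.shape
    r0, c0 = (img.shape[0] - kh) // 2, (img.shape[1] - kw) // 2
    patch = img[r0:r0 + kh, c0:c0 + kw]
    return abs(np.sum(patch * kernel))

vertical = filter_response(img, gabor_kernel(theta=0.0))
horizontal = filter_response(img, gabor_kernel(theta=np.pi / 2))
print(vertical > horizontal)  # True: the oriented filter matches the stripe
```

In practice the filter bank is applied across the whole road surface and the per-pixel maximum over orientations is thresholded to isolate marking candidates.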
Abstract:
Two decades after its inception, Latent Semantic Analysis (LSA) has become part and parcel of every modern introduction to Information Retrieval. For any tool that matures so quickly, it is important to check its lore and limitations, or else stagnation will set in. We focus here on the three main aspects of LSA that are well accepted, the gist of which can be summarized as follows: (1) that LSA recovers latent semantic factors underlying the document space; (2) that this can be accomplished through lossy compression of the document space by eliminating lexical noise; and (3) that the latter can best be achieved by Singular Value Decomposition. For each aspect we performed experiments analogous to those reported in the LSA literature and compared the evidence brought to bear in each case. On the negative side, we show that the above claims about LSA are much more limited than commonly believed. Even a simple example may show that LSA does not recover the optimal semantic factors as intended in the pedagogical example used in many LSA publications. Additionally, and remarkably deviating from LSA lore, LSA does not scale up well: the larger the document space, the more unlikely it is that LSA recovers an optimal set of semantic factors. On the positive side, we describe new algorithms to replace LSA (and more recent alternatives such as pLSA, LDA, and kernel methods) by trading its l2 space for an l1 space, thereby guaranteeing an optimal set of semantic factors. These algorithms seem to salvage the spirit of LSA as we think it was initially conceived.
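The first claim, recovering latent factors via rank-k truncation of the term-document matrix's SVD, can be sketched as follows (the toy matrix and helper function are illustrative, not drawn from the article's experiments):

```python
import numpy as np

# Tiny term-document count matrix (terms x documents); the values are
# assumed raw counts, not the weighting schemes used in the LSA literature.
A = np.array([
    [2, 1, 0, 0],   # "ship"
    [1, 2, 0, 0],   # "boat"
    [0, 0, 2, 2],   # "tree"
    [0, 0, 1, 2],   # "forest"
], dtype=float)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                                          # latent "semantic factors"
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]    # rank-k lossy approximation

def doc_similarity(i, j, Vt, s, k):
    """Cosine similarity of documents i, j in the k-dim latent space."""
    di, dj = (np.diag(s[:k]) @ Vt[:k, :]).T[[i, j]]
    return di @ dj / (np.linalg.norm(di) * np.linalg.norm(dj))

# Documents 0 and 1 share nautical terms, so they end up close in the
# latent space; documents 0 and 2 share nothing and stay orthogonal.
print(doc_similarity(0, 1, Vt, s, k) > doc_similarity(0, 2, Vt, s, k))  # True
```

The article's point is precisely that this l2-optimal truncation need not yield the semantically optimal factors, a distinction the toy example cannot exhibit but larger document spaces do.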
Abstract:
The recent exponential rise in the number of behaviour disorders has been the focus of a wide range of commentaries, ranging from the pedagogic and the administrative, to the sociological, and even the legal. This book will be the first to apply, in a systematic and thorough manner, the ideas of the foundational discipline of philosophy. A number of philosophical tools are applied here, tools arising through the medium of the traditional philosophical debates, such as those concerning governance, truth, logic, ethics, free will, law and language. Each forms a separate chapter, but together they constitute a comprehensive, rigorous and original insight into what is now an important set of concerns for all those interested in the governance of children. The intention is threefold: first, to demonstrate the utility, accessibility and effectiveness of philosophical ideas within this important academic area; philosophy does not have to be regarded as an arcane and esoteric discipline with only limited contemporary application, far from it. Second, the book offers a new set of approaches and ideas for both researchers and practitioners within education, a field in danger of continually using the same ideas to endlessly repeat the same conclusions. Third, the book offers a viable alternative to the dominant psychological model, which increasingly employs pathology as its central rationale for conduct. The book will be of interest not only to mainstream educators, and to those students and academics interested in philosophy and, more specifically, the application of philosophical ideas to educational issues; it would also be an appropriate text for courses on education and difference and, due to the breadth of the philosophical issues addressed, courses on applied philosophy.
Abstract:
The multifractal properties of two indices of geomagnetic activity, Dst (representative of low latitudes) and ap (representative of the global geomagnetic activity), together with the solar X-ray brightness, Xl, during the period from 1 March 1995 to 17 June 2003, are examined using multifractal detrended fluctuation analysis (MF-DFA). The h(q) curves of Dst and ap in the MF-DFA are similar to each other, but they are different from that of Xl, indicating that the scaling properties of Xl are different from those of Dst and ap. Hence, one should not predict the magnitude of magnetic storms directly from solar X-ray observations. However, a strong relationship exists between the classes of the solar X-ray irradiance (the classes being chosen to separate solar flares of class X-M, class C, and class B or less, including no flares) in hourly measurements and the geomagnetic disturbances (large to moderate, small, or quiet) seen in Dst and ap during the active period. Each time series was converted into a symbolic sequence using three classes. The frequency of the substrings in the symbolic sequences, yielding the measure representations, then characterizes the pattern of space weather events. Using the MF-DFA method and traditional multifractal analysis, we calculate the h(q), D(q), and τ(q) curves of the measure representations. The τ(q) curves indicate that the measure representations of these three indices are multifractal. On the basis of this three-class clustering, we find that the h(q), D(q), and τ(q) curves of the measure representations of these three indices are similar to each other for positive values of q. Hence, a positive flare storm class dependence is reflected in the scaling exponents h(q) in the MF-DFA and the multifractal exponents D(q) and τ(q). This finding indicates that the use of the solar flare classes could improve the prediction of the Dst classes.
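The h(q) exponents in MF-DFA are slopes of log-log fits of the q-th order fluctuation function against window size: the series' cumulative profile is split into segments, a polynomial trend is removed from each, and the segment variances are averaged with a q-dependent weighting. A condensed sketch on synthetic noise (a simplified implementation, not the study's code or geomagnetic data):

```python
import numpy as np

def mfdfa_hq(x, scales, q_values, order=1):
    """Generalized Hurst exponents h(q) by multifractal DFA."""
    profile = np.cumsum(x - np.mean(x))
    hq = []
    for q in q_values:
        logF, logS = [], []
        for s in scales:
            n_seg = len(profile) // s
            f2 = []
            for v in range(n_seg):
                seg = profile[v * s:(v + 1) * s]
                t = np.arange(s)
                coef = np.polyfit(t, seg, order)    # local polynomial trend
                resid = seg - np.polyval(coef, t)
                f2.append(np.mean(resid ** 2))      # detrended variance
            f2 = np.array(f2)
            if q == 0:                              # logarithmic average
                Fq = np.exp(0.5 * np.mean(np.log(f2)))
            else:                                   # q-th order average
                Fq = np.mean(f2 ** (q / 2)) ** (1 / q)
            logF.append(np.log(Fq))
            logS.append(np.log(s))
        hq.append(np.polyfit(logS, logF, 1)[0])     # slope = h(q)
    return np.array(hq)

rng = np.random.default_rng(0)
white = rng.standard_normal(4096)                   # uncorrelated noise
scales = [16, 32, 64, 128, 256]
h = mfdfa_hq(white, scales, q_values=[2])
print(float(h[0]))   # close to 0.5 for uncorrelated noise
```

For a monofractal signal h(q) is flat in q; a spread of h(q) values across q, as reported here for the measure representations, is the signature of multifractality.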
Abstract:
This report is an update of an earlier one produced in January 2010 (see Carrington et al. 2010), which remains available as an ePrint through the project’s home page. The report considers extant data that have been sourced with respect to some of the consequences of violent acts, incidents, harms and risky behaviour involving males living in regional and remote Australia, and which were available in public databases at the time of production.
Abstract:
Handling information overload online is, from the user's point of view, a major challenge, especially when the number of websites is growing rapidly due to growth in e-commerce and other related activities. Personalization based on user needs is the key to solving the problem of information overload. Personalization methods help in identifying relevant information which may be liked by a user. User profile and object profile are the important elements of a personalization system. When creating user and object profiles, most of the existing methods adopt two-dimensional similarity methods based on vector or matrix models in order to find inter-user and inter-object similarity. Moreover, for recommending similar objects to users, personalization systems use users-users, items-items and users-items similarity measures. In most cases similarity measures such as Euclidean, Manhattan, cosine and many others based on vector or matrix methods are used to find the similarities. Web logs are high-dimensional datasets, consisting of multiple users and multiple searches, each with many attributes. Two-dimensional data analysis methods may overlook latent relationships that exist between users and items. In contrast to other studies, this thesis utilises tensors, which are high-dimensional data models, to build user and object profiles and to find the inter-relationships between users-users and users-items. To create an improved personalized Web system, this thesis proposes to build three types of profiles: individual user, group users and object profiles, utilising decomposition factors of tensor data models. A hybrid recommendation approach utilising group profiles (forming the basis of a collaborative filtering method) and object profiles (forming the basis of a content-based method) in conjunction with individual user profiles (forming the basis of a model-based approach) is proposed for making effective recommendations.
A tensor-based clustering method is proposed that utilises the outcomes of popular tensor decomposition techniques such as PARAFAC, Tucker and HOSVD to group similar instances. An individual user profile, showing the user's highest interest, is represented by the top dimension values extracted from the component matrix obtained after tensor decomposition. A group profile, showing similar users and their highest interest, is built by clustering similar users based on tensor-decomposed values. A group profile is represented by the top association rules (containing various unique object combinations) that are derived from the searches made by the users of the cluster. An object profile is created to represent similar objects clustered on the basis of the similarity of their features. Depending on the category of a user (known, anonymous or frequent visitor to the website), any of the profiles or their combinations is used for making personalised recommendations. A ranking algorithm is also proposed that utilises the personalised information to order and rank the recommendations. The proposed methodology is evaluated on data collected from a real-life car website. Empirical analysis confirms the effectiveness of recommendations made by the proposed approach over other collaborative filtering and content-based recommendation approaches based on two-dimensional data analysis methods.
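The decomposition step can be illustrated with a HOSVD-style extraction of per-mode component matrices, whose rows then serve as the profile values used for clustering (the toy users x items x sessions tensor and the function names are hypothetical, not the thesis data):

```python
import numpy as np

def unfold(T, mode):
    """Matricize tensor T along the given mode (mode-n unfolding)."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd_factors(T, ranks):
    """Leading left singular vectors of each unfolding: the per-mode
    component matrices used here to profile users, items and sessions."""
    return [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
            for m, r in enumerate(ranks)]

# Toy users x items x sessions tensor: users 0-1 interact with items 0-1,
# users 2-3 with items 2-3 (hypothetical interactions, illustrative only).
T = np.zeros((4, 4, 2))
T[0:2, 0:2, :] = 1.0
T[2:4, 2:4, :] = 2.0

U_users, U_items, U_sessions = hosvd_factors(T, ranks=(2, 2, 1))

# Users with the same behaviour get near-identical factor rows, so
# nearest-neighbour grouping on the rows recovers the two user groups.
d_same = np.linalg.norm(U_users[0] - U_users[1])
d_diff = np.linalg.norm(U_users[0] - U_users[2])
print(d_same < d_diff)  # True
```

Clustering the rows of the user-mode factor matrix is one simple way to realise the group-profile construction described above; PARAFAC or Tucker factors could be substituted in the same role.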
Abstract:
Bioinformatics involves analyses of biological data such as DNA sequences, microarrays and protein-protein interaction (PPI) networks. Its two main objectives are the identification of genes or proteins and the prediction of their functions. Biological data often contain uncertain and imprecise information. Fuzzy theory provides useful tools to deal with this type of information, and hence has played an important role in analyses of biological data. In this thesis, we aim to develop some new fuzzy techniques and apply them to DNA microarrays and PPI networks. We focus on three problems: (1) clustering of microarrays; (2) identification of disease-associated genes in microarrays; and (3) identification of protein complexes in PPI networks. The first part of the thesis aims to detect, by the fuzzy C-means (FCM) method, clustering structures in DNA microarrays corrupted by noise. Because of the presence of noise, some clustering structures found in random data may not have any biological significance. In this part, we propose to combine the FCM with the empirical mode decomposition (EMD) for clustering microarray data. The purpose of EMD is to reduce, preferably to remove, the effect of noise, resulting in what is known as denoised data. We call this method the fuzzy C-means method with empirical mode decomposition (FCM-EMD). We applied this method to yeast and serum microarrays, and silhouette values were used to assess the quality of clustering. The results indicate that the clustering structures of denoised data are more reasonable, implying that genes have tighter association with their clusters. Furthermore, we found that the estimation of the fuzzy parameter m, which is a difficult step, can be avoided to some extent by analysing denoised microarray data. The second part aims to identify disease-associated genes from DNA microarray data which are generated under different conditions, e.g., from patients and normal people.
We developed a type-2 fuzzy membership (FM) function for identification of disease-associated genes. This approach is applied to diabetes and lung cancer data, and a comparison with the original FM test was carried out. Among the ten best-ranked genes of diabetes identified by the type-2 FM test, seven genes have been confirmed as diabetes-associated genes according to gene description information in GenBank and the published literature. An additional gene is further identified. Among the ten best-ranked genes identified in lung cancer data, seven are confirmed to be associated with lung cancer or its treatment. The type-2 FM-d values are significantly different, which makes the identifications more convincing than the original FM test. The third part of the thesis aims to identify protein complexes in large interaction networks. Identification of protein complexes is crucial to understanding the principles of cellular organisation and to predicting protein functions. In this part, we propose a novel method which combines the fuzzy clustering method and interaction probability to identify the overlapping and non-overlapping community structures in PPI networks, and then to detect protein complexes in these sub-networks. Our method is based on both the fuzzy relation model and the graph model. We applied the method to several PPI networks and compared it with a popular protein complex identification method, the clique percolation method. For the same data, we detected more protein complexes. We also applied our method to two social networks. The results show that our method works well for detecting sub-networks and gives a reasonable understanding of these communities.
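The FCM step underlying the first part alternates soft-membership and centroid updates under a fuzzifier m > 1, with each point's memberships summing to one. A minimal sketch on synthetic two-cluster data (illustrative only; the thesis applies FCM to EMD-denoised microarrays, not to this toy set):

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy C-means: alternate membership and centroid updates.
    m > 1 is the fuzzifier; memberships in each row sum to 1."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        W = U ** m
        centroids = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)                     # avoid division by zero
        # u_ik proportional to d_ik^(-2/(m-1)), normalised over clusters
        U = 1.0 / (d ** (2 / (m - 1)) *
                   np.sum(d ** (-2 / (m - 1)), axis=1, keepdims=True))
    return U, centroids

# Two well-separated toy "expression profiles" (not real microarray data).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(3, 0.1, (20, 2))])
U, C = fuzzy_c_means(X, c=2)

# Points in the first blob share a dominant cluster, distinct from the
# dominant cluster of the second blob.
print(np.argmax(U[0]) != np.argmax(U[-1]))  # True
```

The soft memberships in U are what the silhouette assessment and the sensitivity to the fuzzifier m discussed above operate on.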
Abstract:
Complex networks have been studied extensively due to their relevance to many real-world systems such as the world-wide web, the internet, and biological and social systems. During the past two decades, studies of such networks in different fields have produced many significant results concerning their structures, topological properties, and dynamics. Three well-known properties of complex networks are scale-free degree distribution, the small-world effect and self-similarity. The search for additional meaningful properties and the relationships among these properties is an active area of current research. This thesis investigates a newer aspect of complex networks, namely their multifractality, which is an extension of the concept of self-similarity. The first part of the thesis aims to confirm that the study of properties of complex networks can be expanded to a wider field including more complex weighted networks. The real networks that have been shown to possess the self-similarity property in the existing literature are all unweighted networks. We use the protein-protein interaction (PPI) networks as a key example to show that their weighted networks inherit the self-similarity from the original unweighted networks. Firstly, we confirm that the random sequential box-covering algorithm is an effective tool to compute the fractal dimension of complex networks. This is demonstrated on the Homo sapiens and E. coli PPI networks as well as their skeletons. Our results verify that the fractal dimension of the skeleton is smaller than that of the original network because the shortest distance between nodes is larger in the skeleton; hence, for a fixed box size, more boxes will be needed to cover the skeleton. Then we adopt the iterative scoring method to generate weighted PPI networks of five species, namely Homo sapiens, E. coli, yeast, C. elegans and Arabidopsis thaliana.
By using the random sequential box-covering algorithm, we calculate the fractal dimensions for both the original unweighted PPI networks and the generated weighted networks. The results show that self-similarity is still present in the generated weighted PPI networks. This implication will be useful for our treatment of the networks in the third part of the thesis. The second part of the thesis aims to explore the multifractal behaviour of different complex networks. Fractals such as the Cantor set, the Koch curve and the Sierpinski gasket are homogeneous, since these fractals consist of a geometrical figure which repeats on an ever-reduced scale. Fractal analysis is a useful method for their study. However, real-world fractals are not homogeneous; there is rarely an identical motif repeated on all scales. Their singularity may vary on different subsets, implying that these objects are multifractal. Multifractal analysis is a useful way to systematically characterise the spatial heterogeneity of both theoretical and experimental fractal patterns. However, the tools for multifractal analysis of objects in Euclidean space are not suitable for complex networks. In this thesis, we propose a new box-covering algorithm for multifractal analysis of complex networks. This algorithm is demonstrated in the computation of the generalized fractal dimensions of some theoretical networks, namely scale-free networks, small-world networks and random networks, and a class of real networks, namely the PPI networks of different species. Our main finding is the existence of multifractality in scale-free networks and PPI networks, while the multifractal behaviour is not confirmed for small-world networks and random networks. As another application, we generate gene interaction networks for patients and healthy people using the correlation coefficients between microarrays of different genes. Our results confirm the existence of multifractality in gene interaction networks.
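The random sequential box-covering idea, repeatedly choosing a centre and covering every node within a given radius until no node is uncovered, can be sketched as follows (a simplified variant on a toy path graph; the seeding and tie-breaking details are assumptions, not the thesis algorithm's exact choices):

```python
import random
from collections import deque

def shortest_paths(adj, src):
    """BFS distances from src in an unweighted graph (adjacency dict)."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def box_count(adj, radius, seed=0):
    """Random sequential box covering: repeatedly pick an uncovered
    centre and cover every node within the given radius of it."""
    rng = random.Random(seed)
    uncovered = set(adj)
    boxes = 0
    while uncovered:
        centre = rng.choice(sorted(uncovered))
        dist = shortest_paths(adj, centre)
        uncovered -= {v for v, d in dist.items() if d <= radius}
        boxes += 1
    return boxes

# A path graph is effectively one-dimensional: the box count shrinks
# roughly like N / (2r + 1) as the radius r grows.
n = 101
adj = {i: [j for j in (i - 1, i + 1) if 0 <= j < n] for i in range(n)}
print(box_count(adj, radius=0) == n)   # True: radius 0 gives one box per node
```

The fractal dimension is then read off as the slope of log(box count) against log(box size), and the generalized dimensions D(q) weight the boxes by the fraction of nodes they contain.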
This multifractal analysis then provides a potentially useful tool for gene clustering and identification. The third part of the thesis aims to investigate the topological properties of networks constructed from time series. Characterising complicated dynamics from time series is a fundamental problem of continuing interest in a wide variety of fields. Recent works indicate that complex network theory can be a powerful tool to analyse time series. Many existing methods for transforming time series into complex networks share a common feature: they define the connectivity of a complex network by the mutual proximity of different parts (e.g., individual states, state vectors, or cycles) of a single trajectory. In this thesis, we propose a new method to construct networks from time series: we define nodes by vectors of a certain length in the time series, and the weight of the edge between any two nodes by the Euclidean distance between the corresponding two vectors. We apply this method to build networks for fractional Brownian motions, whose long-range dependence is characterised by their Hurst exponent. We verify the validity of this method by showing that time series with stronger correlation, and hence larger Hurst exponent, tend to have smaller fractal dimension, and hence smoother sample paths. We then construct networks via the technique of the horizontal visibility graph (HVG), which has been widely used recently. We confirm a known linear relationship between the Hurst exponent of fractional Brownian motion and the fractal dimension of the corresponding HVG network. In the first application, we apply our newly developed box-covering algorithm to calculate the generalized fractal dimensions of the HVG networks of fractional Brownian motions as well as those of binomial cascades and five bacterial genomes. The results confirm the monoscaling of fractional Brownian motion and the multifractality of the rest.
As an additional application, we discuss the resilience of networks constructed from time series via two different approaches: the visibility graph (VG) and the horizontal visibility graph. Our finding is that the degree distribution of VG networks of fractional Brownian motions is scale-free (i.e., follows a power law), meaning that one needs to destroy a large percentage of nodes before the network collapses into isolated parts; for HVG networks of fractional Brownian motions, in contrast, the degree distribution has exponential tails, implying that HVG networks would not survive the same kind of attack.
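An HVG links two time points whenever every value strictly between them lies below both endpoints, so a direct, if quadratic, construction takes only a few lines (an illustrative sketch, not the thesis implementation):

```python
def horizontal_visibility_graph(series):
    """Edges (i, j) such that every value strictly between positions i
    and j is lower than both endpoints: x_k < min(x_i, x_j)."""
    edges = set()
    n = len(series)
    for i in range(n):
        for j in range(i + 1, n):
            between = series[i + 1:j]
            if all(x < min(series[i], series[j]) for x in between):
                edges.add((i, j))
    return edges

# A valley: the two rims see each other over the lower points between them,
# and consecutive points are always connected (nothing lies between them).
edges = horizontal_visibility_graph([3, 1, 2, 1, 4])
print(len(edges))  # 7
```

Degree distributions of such graphs, built from fractional Brownian motion samples, are what the exponential-tail resilience argument above is based on.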
Abstract:
This thesis examines consumer-initiated value co-creation behaviour in the context of convergent mobile online services, using a Service-Dominant logic (SD logic) theoretical framework. It focuses on non-reciprocal marketing phenomena such as open innovation and user-generated content, whereby new viable business models are derived and consumer roles and community become essential to the success of business. Attention to customers' roles and personalised experiences in value co-creation has been recognised in the literature (e.g., Prahalad & Ramaswamy, 2000; Prahalad, 2004; Prahalad & Ramaswamy, 2004). Similarly, in a subsequent iteration of their 2004 version of the foundations of SD logic, Vargo and Lusch (2006) replaced the concept of value co-production with value co-creation and suggested that a value co-creation mindset is essential to underpin the firm-customer value creation relationship. Much of this focus, however, has been limited to firm-initiated value co-creation (e.g., B2B or B2C), while consumer-initiated value creation, particularly consumer-to-consumer (C2C), has received little attention in the SD logic literature. While it is recognised that not every consumer wishes to make the effort to engage extensively in co-creation processes (MacDonald & Uncles, 2009), some consumers may not be satisfied with a standard product; instead they engage in the effort required for personalisation that potentially leads to greater value for themselves, and which may benefit not only the firm, but other consumers as well. The literature suggests that there are consumers who do, and who as a result initiate such behaviour and expend effort to engage in co-creation activity (e.g., Gruen, Osmonbekov, & Czaplewski, 2006, 2007; MacDonald & Uncles, 2009). In terms of consumers'
engagement in value proposition (co-production) and value actualisation (co-creation), SD logic (Vargo & Lusch, 2004, 2008) provides a new lens that enables marketing scholars to transcend existing marketing theory and facilitates marketing practitioners to initiate service-centric and value co-creation oriented marketing practices. Although the active role of the consumer is acknowledged in the SD logic oriented literature, we know little about how and why consumers participate in a value co-creation process (Payne, Storbacka, & Frow, 2008). The literature suggests that researchers should focus on areas such as C2C interaction (Gummesson, 2007; Nicholls, 2010) and consumer experience sharing and co-creation (Belk, 2009; Prahalad & Ramaswamy, 2004). In particular, this thesis seeks to better understand consumer-initiated value co-creation, which is aligned with the notion that consumers can be resource integrators (Baron & Harris, 2008) and more. The reason for this focus is that consumers today are more empowered in both online and offline contexts (Füller, Mühlbacher, Matzler, & Jawecki, 2009; Sweeney, 2007). Active consumers take initiatives to engage and co-create solutions with other active actors in the market for the betterment of their lives (Ballantyne & Varey, 2006; Grönroos & Ravald, 2009). In terms of the organisation of the thesis, it first takes a 'zoom-out' (Vargo & Lusch, 2011) approach and develops the Experience Co-Creation (ECo) framework, which is aligned with balanced centricity (Gummesson, 2008) and the Actor-to-Actor worldview (Vargo & Lusch, 2011). This ECo framework is based on an extended 'SD logic friendly lexicon' (Lusch & Vargo, 2006): value initiation and value initiator, value-in-experience, betterment centricity and betterment outcomes, and experience co-creation contexts, derived from five gaps identified from the SD logic literature review.
The framework is also designed to accommodate broader marketing phenomena (i.e., both reciprocal and non-reciprocal marketing phenomena). After zooming out and establishing the ECo framework, the thesis takes a zoom-in approach and places attention back on the value co-creation process. Owing to the scope of the current research, this thesis focuses specifically on non-reciprocal value co-creation phenomena initiated by consumers in online communities. Two emergent concepts, User Experience Sharing (UES) and Co-Creative Consumers, are proposed, grounded in the ECo framework. Together, these two theorised concepts shed light on the following two propositions: (1) User Experience Sharing derives value-in-experience as consumers take the initiative to participate in value co-creation, and (2) Co-Creative Consumers are value initiators who perform UES. Three research questions were identified underpinning the scope of this research: RQ1: What factors influence consumers to exhibit User Experience Sharing behaviour? RQ2: Why do Co-Creative Consumers participate in User Experience Sharing as part of value co-creation behaviour? RQ3: What are the characteristics of Co-Creative Consumers? To answer these research questions, two theoretical models were developed: the User Experience Sharing Behaviour Model (UESBM), grounded in the Theory of Planned Behaviour framework, and the Co-Creative Consumer Motivation Model (CCMM), grounded in the Motivation, Opportunity, Ability framework. The models use SD logic consistent constructs and draw upon multiple streams of literature including consumer education, consumer psychology and consumer behaviour, and organisational psychology and organisational behaviour.
These constructs include, in the UESBM: User Experience Sharing with Other Consumers (UESC), User Experience Sharing with Firms (UESF), Enjoyment in Helping Others (EIHO), Consumer Empowerment (EMP), Consumer Competence (COMP), Intention to Engage in User Experience Sharing (INT), Attitudes toward User Experience Sharing (ATT) and Subjective Norm (SN); and in the CCMM: User Experience Sharing (UES), Consumer Citizenship (CIT), Relating Needs of Self (RELS), Relating Needs of Others (RELO), Newness (NEW), Mavenism (MAV), Use Innovativeness (UI), Personal Initiative (PIN) and Communality (COMU). Many of these constructs are relatively new to marketing and require further empirical evidence for support. Two studies were conducted to address the corresponding research questions. Study One was conducted to calibrate and re-specify the proposed models; Study Two was a replication study to confirm them. In Study One, data were collected from a PC DIY online community. In Study Two, the majority of the data were collected from Apple product online communities. The data were examined using structural equation modelling and cluster analysis. Given the nature of the forums, the Study One data are considered to reflect some characteristics of Prosumers, and the Study Two data some characteristics of Innovators. The results drawn from two independent samples (N = 326 and N = 294) provide empirical support for the overall structure theorised in the research models. In both models, Enjoyment in Helping Others and Consumer Competence (UESBM), and Consumer Citizenship and Relating Needs (CCMM), have significant impacts on UES; these results were consistent across Study One and Study Two. 
The results also support the conceptualisation of Co-Creative Consumers, indicating that they are individuals who can relate to the needs of themselves and others, and who feel a responsibility to share their valuable personal experiences. In general, the results shed light on how and why consumers voluntarily participate in the value co-creation process. The findings provide evidence to conceptualise User Experience Sharing behaviour, as well as the Co-Creative Consumer, using the lens of SD logic. This research is a pioneering study that incorporates and empirically tests SD logic consistent constructs to examine a particular area of the logic: consumer-initiated value co-creation behaviour. This thesis also informs practitioners about how to facilitate, and understand the factors that drive engagement in, either firm- or consumer-initiated online communities.
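The cluster analysis step mentioned above can be illustrated with a minimal sketch. The data, construct names and score ranges here are hypothetical, not from the thesis: a simple k-means routine separates respondents scoring high on two co-creation constructs (e.g., Consumer Citizenship and Relating Needs) from less engaged respondents.

```python
# Minimal k-means sketch on hypothetical Likert-style construct scores.
# Nothing here reproduces the thesis data; it only illustrates how cluster
# analysis can partition respondents into behavioural segments.
import random

def kmeans(points, k, iters=50, seed=1):
    """Cluster 2-D points into k groups by plain Lloyd's algorithm."""
    rng = random.Random(seed)
    centres = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda j: (p[0] - centres[j][0]) ** 2
                                + (p[1] - centres[j][1]) ** 2)
            groups[i].append(p)
        centres = [(sum(p[0] for p in g) / len(g),
                    sum(p[1] for p in g) / len(g)) if g else centres[i]
                   for i, g in enumerate(groups)]
    return centres, groups

# Hypothetical (citizenship, relating-needs) scores for two segments
low = [(1.5 + random.Random(i).random(), 2.0 + random.Random(i + 99).random())
       for i in range(30)]
high = [(5.5 + random.Random(i).random(), 5.0 + random.Random(i + 99).random())
        for i in range(30)]

centres, groups = kmeans(low + high, k=2)
print(sorted(len(g) for g in groups))  # segment sizes
```

The segment whose centre sits high on both constructs would correspond, in this toy setting, to the more co-creative respondents.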
Resumo:
The true stress-strain curve of railhead steel is required to investigate the behaviour of the railhead under wheel loading through elasto-plastic Finite Element (FE) analysis. To reduce the rate of wear, the railhead material is hardened through annealing and quenching. Australian standard rail sections are not fully hardened and hence exhibit a non-uniform distribution of material properties; using average properties in FE modelling can therefore induce error in the predicted plastic strains. Coupons obtained at varying depths of the railhead were therefore tested under axial tension, and the strains were measured using strain gauges as well as an image analysis technique known as Particle Image Velocimetry (PIV). The head-hardened steel exhibits three distinct zones of yield strength; expressing the yield strength as a ratio of the average yield strength provided in the standard (σyr = 780 MPa), and the corresponding depth as a ratio of the head-hardened zone along the axis of symmetry, the zones are: (1.17σyr, up to 20%), (1.06σyr, 20%-80%) and (0.71σyr, > 80%). The stress-strain curves exhibit a limited plastic zone, with fracture occurring at strains less than 0.1.
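The reported ratios translate directly into absolute yield strengths. A quick arithmetic check, using only the σyr = 780 MPa reference and the three ratios from the abstract (the zone labels are descriptive, not from the study):

```python
# Convert the reported yield-strength ratios into absolute values,
# using the standard average yield strength quoted in the abstract.
sigma_yr = 780.0  # MPa, average yield strength per the Australian standard

zones = [
    ("surface zone (up to 20% of head-hardened depth)", 1.17),
    ("middle zone (20%-80%)", 1.06),
    ("deep zone (> 80%)", 0.71),
]
for label, ratio in zones:
    print(f"{label}: {ratio * sigma_yr:.1f} MPa")
# surface ~ 912.6 MPa, middle ~ 826.8 MPa, deep ~ 553.8 MPa
```

The roughly 360 MPa spread between the surface and deep zones is what makes a single averaged property unreliable in the FE model.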
Resumo:
The natural convection thermal boundary layer adjacent to an inclined flat plate and to the inclined walls of an attic space subject to instantaneous and ramp heating and cooling is investigated. A scaling analysis has been performed to describe the flow behaviour and heat transfer. Major scales quantifying the flow velocity, flow development time, heat transfer, and the thermal and viscous boundary layer thicknesses at different stages of the flow development are established. Scaling relations for the heating-up and cooling-down times and the heat transfer rates are also reported for the case of the attic space. The scaling relations have been verified by numerical simulations over a wide range of parameters. Finally, a periodic temperature boundary condition is considered to show the flow features in the attic space over diurnal cycles.
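One standard scale from this kind of startup analysis is the diffusive growth of the thermal boundary layer immediately after instantaneous heating, δT ~ (κt)^1/2. The sketch below evaluates only this generic relation (it does not reproduce the study's inclined-plate or ramp-heating scales); the value of κ is simply the thermal diffusivity of air.

```python
# Generic early-time scaling for a thermal boundary layer after sudden
# heating: thickness grows diffusively as delta_T ~ sqrt(kappa * t).
# kappa is an assumed property value for air, not a figure from the study.
kappa = 2.2e-5  # m^2/s, thermal diffusivity of air at roughly 300 K

for t in (1.0, 10.0, 100.0):  # seconds after startup
    delta_T = (kappa * t) ** 0.5
    print(f"t = {t:6.1f} s  ->  delta_T ~ {delta_T * 1000:.1f} mm")
```

The square-root growth continues only until convection along the plate balances conduction into the fluid, after which the layer thickness saturates at its steady-state scale.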
Resumo:
The design of artificial intelligence in computer games is an important component of a player's game-play experience. As games become more life-like and interactive, the need for more realistic game AI will increase. This is particularly the case for AI that simulates how human players act, behave and make decisions. The purpose of this research is to establish a model of player-like behaviour that can be used to inform the design of artificial intelligence that more accurately mimics a player's decision-making process. The research uses a qualitative analysis of player opinions and reactions while playing a first-person shooter video game, together with recordings of their in-game actions, speech and facial characteristics. The initial studies provide player data that has been used to design a model of how a player behaves.
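A common way such a player-behaviour model is translated into game AI is as a state machine whose transitions encode observed player decisions. The sketch below is purely illustrative: the states, observations and thresholds are assumptions, not findings from the study.

```python
# Hypothetical finite-state machine for a player-like FPS bot. Transition
# rules stand in for patterns a behavioural model might supply (e.g.,
# players tend to disengage when health is low). All names and thresholds
# are illustrative assumptions.

def next_state(state, health, enemy_visible):
    """Return the bot's next behavioural state from simple observations."""
    if health < 30:
        return "retreat"      # players under pressure tend to disengage
    if enemy_visible:
        return "engage"       # a visible threat takes priority
    if state in ("retreat", "engage"):
        return "search"       # re-acquire the target after combat
    return "patrol"           # default exploratory behaviour

# Walk the machine through a short scripted encounter
state = "patrol"
for health, enemy in [(100, False), (100, True), (45, True), (20, True)]:
    state = next_state(state, health, enemy)
    print(f"health={health:3d} enemy={enemy} -> {state}")
```

A data-driven version would replace the hand-written thresholds with rules or probabilities fitted to the recorded player sessions described above.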