961 results for games analysis


Relevance: 20.00%

Abstract:

The Making Design and Analysing Interaction track at the Participatory Innovation Conference calls for submissions from ‘Makers’, who will contribute examples of participatory innovation activities documented in video, and ‘Analysts’, who will analyse those examples of participatory innovation activity. The aim of this paper is to open, within the format of the track, a discussion of the roles that designers could play in analysing the participatory innovation activities of others, and to provide a starting point for this discussion through a concrete example of such ‘designerly analysis’. Designerly analysis opens new analytic frames for understanding participatory innovation and contributes to our understanding of design activities.

Relevance: 20.00%

Abstract:

Bioinformatics involves analyses of biological data such as DNA sequences, microarrays and protein-protein interaction (PPI) networks. Its two main objectives are the identification of genes or proteins and the prediction of their functions. Biological data often contain uncertain and imprecise information. Fuzzy theory provides useful tools for dealing with this type of information and hence has played an important role in analyses of biological data. In this thesis, we aim to develop new fuzzy techniques and apply them to DNA microarrays and PPI networks. We focus on three problems: (1) clustering of microarrays; (2) identification of disease-associated genes in microarrays; and (3) identification of protein complexes in PPI networks. The first part of the thesis aims to detect, by the fuzzy C-means (FCM) method, clustering structures in DNA microarrays corrupted by noise. Because of the presence of noise, some clustering structures found in random data may not have any biological significance. In this part, we propose to combine FCM with empirical mode decomposition (EMD) for clustering microarray data. The purpose of EMD is to reduce, and preferably remove, the effect of noise, resulting in what is known as denoised data. We call this method the fuzzy C-means method with empirical mode decomposition (FCM-EMD). We applied this method to yeast and serum microarrays, using silhouette values to assess the quality of clustering. The results indicate that the clustering structures of denoised data are more reasonable, implying that genes have a tighter association with their clusters. Furthermore, we found that estimation of the fuzzy parameter m, which is a difficult step, can to some extent be avoided by analysing denoised microarray data. The second part aims to identify disease-associated genes from DNA microarray data generated under different conditions, e.g., from patients and from healthy people. We developed a type-2 fuzzy membership (FM) function for the identification of disease-associated genes. This approach was applied to diabetes and lung cancer data, and a comparison with the original FM test was carried out. Among the ten best-ranked genes of diabetes identified by the type-2 FM test, seven have been confirmed as diabetes-associated genes according to gene description information in GenBank and the published literature, and an additional gene is newly identified. Among the ten best-ranked genes identified in the lung cancer data, seven are confirmed to be associated with lung cancer or its treatment. The type-2 FM-d values are significantly different, which makes the identifications more convincing than those of the original FM test. The third part of the thesis aims to identify protein complexes in large interaction networks. Identification of protein complexes is crucial for understanding the principles of cellular organisation and for predicting protein functions. In this part, we propose a novel method that combines fuzzy clustering and interaction probabilities to identify the overlapping and non-overlapping community structures in PPI networks, and then to detect protein complexes in these sub-networks. Our method is based on both the fuzzy relation model and the graph model. We applied the method to several PPI networks and compared it with a popular protein complex identification method, the clique percolation method. For the same data, we detected more protein complexes. We also applied our method to two social networks. The results showed that our method works well for detecting sub-networks and gives a reasonable understanding of these communities.
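
To make the clustering step concrete, the following is a minimal, illustrative sketch of the standard fuzzy C-means update that FCM-EMD builds on. It is not the FCM-EMD pipeline itself: the EMD denoising stage, the thesis-specific choice of cluster number and fuzzifier m, and the silhouette assessment are omitted, and the synthetic data in the usage lines merely stand in for real expression profiles.

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Minimal fuzzy C-means; X is an (n_samples, n_features) array."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # random initial membership matrix U (n_samples x c), rows sum to 1
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(max_iter):
        Um = U ** m
        # cluster centres as membership-weighted means
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # distance from every sample to every centre
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        d = np.fmax(d, np.finfo(float).eps)
        # standard FCM membership update: u_ij proportional to d_ij^(-2/(m-1))
        inv = d ** (-2.0 / (m - 1.0))
        U_new = inv / inv.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centres, U

# toy usage on synthetic "expression profiles" (two well-separated groups)
X = np.vstack([np.random.randn(50, 10) + 3, np.random.randn(50, 10) - 3])
centres, U = fuzzy_c_means(X, c=2, m=2.0)
labels = U.argmax(axis=1)   # hard assignment from the fuzzy memberships
```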

Relevance: 20.00%

Abstract:

Complex networks have been studied extensively owing to their relevance to many real-world systems such as the world-wide web, the internet, and biological and social systems. During the past two decades, studies of such networks in different fields have produced many significant results concerning their structures, topological properties, and dynamics. Three well-known properties of complex networks are the scale-free degree distribution, the small-world effect and self-similarity. The search for additional meaningful properties and the relationships among these properties is an active area of current research. This thesis investigates a newer aspect of complex networks, namely their multifractality, which is an extension of the concept of self-similarity. The first part of the thesis aims to confirm that the study of properties of complex networks can be expanded to a wider field including more complex weighted networks. The real networks that have been shown to possess the self-similarity property in the existing literature are all unweighted networks. We use protein-protein interaction (PPI) networks as a key example to show that their weighted networks inherit the self-similarity of the original unweighted networks. Firstly, we confirm that the random sequential box-covering algorithm is an effective tool for computing the fractal dimension of complex networks. This is demonstrated on the Homo sapiens and E. coli PPI networks as well as their skeletons. Our results verify that the fractal dimension of the skeleton is smaller than that of the original network because the shortest distance between nodes is larger in the skeleton; hence, for a fixed box size, more boxes are needed to cover the skeleton. We then adopt the iterative scoring method to generate weighted PPI networks of five species, namely Homo sapiens, E. coli, yeast, C. elegans and Arabidopsis thaliana. Using the random sequential box-covering algorithm, we calculate the fractal dimensions of both the original unweighted PPI networks and the generated weighted networks. The results show that self-similarity is still present in the generated weighted PPI networks. This finding is useful for our treatment of the networks in the third part of the thesis. The second part of the thesis aims to explore the multifractal behaviour of different complex networks. Fractals such as the Cantor set, the Koch curve and the Sierpinski gasket are homogeneous, since these fractals consist of a geometrical figure which repeats on an ever-reduced scale, and fractal analysis is a useful method for their study. However, real-world fractals are not homogeneous; there is rarely an identical motif repeated on all scales. Their singularity may vary across different subsets, implying that these objects are multifractal. Multifractal analysis is a useful way to systematically characterise the spatial heterogeneity of both theoretical and experimental fractal patterns. However, the tools for multifractal analysis of objects in Euclidean space are not suitable for complex networks. In this thesis, we propose a new box-covering algorithm for multifractal analysis of complex networks. This algorithm is demonstrated in the computation of the generalized fractal dimensions of some theoretical networks, namely scale-free networks, small-world networks and random networks, and of a class of real networks, namely the PPI networks of different species. Our main finding is the existence of multifractality in scale-free networks and PPI networks, while multifractal behaviour is not confirmed for small-world networks and random networks. As another application, we generate gene interaction networks for patients and healthy people using the correlation coefficients between microarrays of different genes. Our results confirm the existence of multifractality in gene interaction networks. This multifractal analysis thus provides a potentially useful tool for gene clustering and identification. The third part of the thesis aims to investigate the topological properties of networks constructed from time series. Characterising complicated dynamics from time series is a fundamental problem of continuing interest in a wide variety of fields. Recent work indicates that complex network theory can be a powerful tool for analysing time series. Many existing methods for transforming time series into complex networks share a common feature: they define the connectivity of a complex network by the mutual proximity of different parts (e.g., individual states, state vectors, or cycles) of a single trajectory. In this thesis, we propose a new method to construct networks from time series: we define nodes by vectors of a certain length in the time series, and weight the edge between any two nodes by the Euclidean distance between the corresponding two vectors. We apply this method to build networks for fractional Brownian motions, whose long-range dependence is characterised by their Hurst exponent. We verify the validity of this method by showing that time series with stronger correlation, and hence a larger Hurst exponent, tend to have a smaller fractal dimension, and hence smoother sample paths. We then construct networks via the technique of the horizontal visibility graph (HVG), which has been widely used recently. We confirm a known linear relationship between the Hurst exponent of fractional Brownian motion and the fractal dimension of the corresponding HVG network. In the first application, we apply our newly developed box-covering algorithm to calculate the generalized fractal dimensions of the HVG networks of fractional Brownian motions as well as those of binomial cascades and five bacterial genomes. The results confirm the monoscaling of fractional Brownian motion and the multifractality of the rest. As an additional application, we discuss the resilience of networks constructed from time series via two different approaches: the visibility graph (VG) and the horizontal visibility graph. Our finding is that the degree distribution of VG networks of fractional Brownian motions is scale-free (i.e., it follows a power law), meaning that one needs to destroy a large percentage of nodes before the network collapses into isolated parts, while for HVG networks of fractional Brownian motions the degree distribution has exponential tails, implying that HVG networks would not survive the same kind of attack.
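
As a concrete illustration of one of the constructions mentioned above, the sketch below builds a horizontal visibility graph from a one-dimensional time series using the standard criterion (two samples are linked if every sample between them lies strictly below both endpoints). It is illustrative only: the toy series is an ordinary Brownian motion rather than a fractional one, and the box-covering and multifractal analysis steps of the thesis are not shown.

```python
import numpy as np
import networkx as nx

def horizontal_visibility_graph(series):
    """Build the horizontal visibility graph (HVG) of a 1-D time series.

    Samples i < j are linked iff every intermediate sample is strictly
    lower than both endpoints.
    """
    g = nx.Graph()
    n = len(series)
    g.add_nodes_from(range(n))
    for i in range(n - 1):
        blocker = -np.inf            # tallest sample seen between i and j so far
        for j in range(i + 1, n):
            if blocker < min(series[i], series[j]):
                g.add_edge(i, j)
            blocker = max(blocker, series[j])
            if blocker >= series[i]:  # nothing further right can see i
                break
    return g

# toy usage: ordinary Brownian motion (Hurst exponent 0.5)
x = np.cumsum(np.random.randn(500))
hvg = horizontal_visibility_graph(x)
degrees = [d for _, d in hvg.degree()]   # degree sequence for further analysis
```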

Relevance: 20.00%

Abstract:

Organizations seeking improvements in their performance are increasingly exploring alternative models and approaches for providing support services, one such approach being Shared Services. Because of the possible consequential impact of Shared Services on organizations, and given that information systems (IS) is both an enabler of Shared Services (for other functional areas) and a promising area for Shared Services application, Shared Services is an important area for research in the IS field. Though Shared Services has been extensively adopted on the promise of economies of scale and scope, the factors behind Shared Services success (or failure) have received little research attention. This paper reports the distillation of success and failure factors of Shared Services from an IS perspective. Through NVivo-supported content analysis of 158 selected articles, 9 key success factors and 5 failure factors are identified, suggesting important implications for practice and further research.

Relevance: 20.00%

Abstract:

This article continues the critical analysis of ‘meaningful relationships’ in the context of the operation of the ‘twin pillars’ which underpin the parenting provisions. It will be argued that the attitude of judicial officers to three key questions influences how they interpret this concept and, consequently, how they apply the best interests considerations. Relevant to this discussion is an examination of the Full Court’s approach to the key parenting sections, particularly the interaction of the primary and additional considerations. Against this backdrop, a current proposal to amend the ‘twin pillars’ will be examined.

Relevance: 20.00%

Abstract:

In many “user centred design” methods, participants are used as informants to provide data but are not involved in further analysis of that data. This paper investigates a participatory analysis approach in order to identify the strengths and weaknesses of involving participants collaboratively in the requirements analysis process. Findings show that participants are able to use information that they themselves have provided to analyse requirements and to draw upon that analysis for design, producing insights and suggestions that might not otherwise have been available to the design team. The contribution of this paper is to demonstrate an example of a participatory analysis process.

Relevance: 20.00%

Abstract:

It is a major challenge to guarantee the quality of the relevance features discovered in text documents for describing user preferences, because of the large number of terms and patterns and the presence of noise. Most existing popular text mining and classification methods have adopted term-based approaches; however, they all suffer from the problems of polysemy and synonymy. Over the years, people have often held the hypothesis that pattern-based methods should perform better than term-based ones in describing user preferences, but many experiments do not support this hypothesis. This research presents a promising method, Relevance Feature Discovery (RFD), for solving this challenging issue. It discovers both positive and negative patterns in text documents as high-level features and uses them to accurately weight low-level features (terms) based on their specificity and their distributions in the high-level features. The thesis also introduces an adaptive model (called ARFD) to enhance the flexibility of using RFD in an adaptive environment. ARFD automatically updates the system's knowledge based on a sliding window over newly incoming feedback documents and can efficiently decide which incoming documents bring new knowledge into the system. Substantial experiments using the proposed models on Reuters Corpus Volume 1 and TREC topics show that the proposed models significantly outperform both state-of-the-art term-based methods underpinned by Okapi BM25, Rocchio or Support Vector Machines, and other pattern-based methods.
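
For reference, the sketch below implements the Okapi BM25 weighting that underpins the term-based baselines mentioned above. It is a generic textbook formulation with conventional default parameters (k1 = 1.2, b = 0.75), not the RFD or ARFD models themselves, and the tiny corpus in the usage lines is invented for illustration.

```python
import math
from collections import Counter

def bm25_score(query_terms, doc_terms, corpus, k1=1.2, b=0.75):
    """Okapi BM25 score of one document against a bag-of-words query.

    corpus is a list of tokenised documents (each a list of terms).
    """
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N          # average document length
    tf = Counter(doc_terms)
    score = 0.0
    for t in query_terms:
        df = sum(1 for d in corpus if t in d)        # document frequency
        if df == 0:
            continue
        idf = math.log((N - df + 0.5) / (df + 0.5) + 1.0)
        f = tf[t]
        score += idf * f * (k1 + 1) / (f + k1 * (1 - b + b * len(doc_terms) / avgdl))
    return score

# toy usage with an invented three-document corpus
corpus = [["fuzzy", "clustering", "genes"],
          ["pattern", "mining", "text"],
          ["text", "feature", "discovery"]]
print(bm25_score(["text", "mining"], corpus[1], corpus))
```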

Relevance: 20.00%

Abstract:

The true stress-strain curve of railhead steel is required to investigate the behaviour of the railhead under wheel loading through elasto-plastic Finite Element (FE) analysis. To reduce the rate of wear, the railhead material is hardened through annealing and quenching. Australian standard rail sections are not fully hardened and hence exhibit a non-uniform distribution of material properties; using average properties in the FE modelling can potentially induce error in the predicted plastic strains. Coupons obtained at varying depths of the railhead were therefore tested under axial tension, and the strains were measured using strain gauges as well as an image analysis technique known as Particle Image Velocimetry (PIV). The head-hardened steel exhibits three distinct zones of yield strength; expressing the yield strength as a ratio of the average yield strength given in the standard (σyr = 780 MPa) and the corresponding depth as a fraction of the head-hardened zone along the axis of symmetry, the zones are: (1.17σyr, 0-20%), (1.06σyr, 20-80%) and (0.71σyr, > 80%). The stress-strain curves exhibit a limited plastic zone, with fracture occurring at strains below 0.1.
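
Since the abstract centres on obtaining true stress-strain input for FE analysis, the sketch below shows the standard textbook conversion from engineering to true stress and strain (valid up to the onset of necking). It is a generic relation, not the paper's calibration procedure, and the coupon readings in the usage lines are made-up illustrative values.

```python
import numpy as np

def true_stress_strain(eng_strain, eng_stress):
    """Convert engineering stress-strain data to true stress-strain.

    Uses the standard constant-volume relations, valid up to necking:
        true_strain = ln(1 + eng_strain)
        true_stress = eng_stress * (1 + eng_strain)
    """
    eng_strain = np.asarray(eng_strain, dtype=float)
    eng_stress = np.asarray(eng_stress, dtype=float)
    true_strain = np.log1p(eng_strain)
    true_stress = eng_stress * (1.0 + eng_strain)
    return true_strain, true_stress

# toy usage with made-up coupon readings (strain [-], stress [MPa])
e = [0.0, 0.005, 0.02, 0.05, 0.08]
s = [0.0, 820.0, 870.0, 910.0, 930.0]
print(true_stress_strain(e, s))
```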

Relevance: 20.00%

Abstract:

The narrative section of annual reports has considerable value to user groups such as financial analysts and investors (Tiexiera, 2004; Barlett and Chandler, 1997; IASB, 2006). This narrative section, including the chairperson’s/president’s statement, contains twice as much information as the financial statements section (Smith and Taffler, 2000). However, the abundance of information does not necessarily enhance its quality (IASB, 2006). The qualitative characteristics of this information have long been neglected by researchers, although the issue has attracted the attention of the IASB (2006). Given the dearth of research on the qualitative characteristics of reporting, this paper explores whether the qualitative characteristics required by investors, as outlined by the IASB (2006), have been satisfied in the management commentary section of New Zealand companies’ annual reports. Our results suggest that the qualitative characteristics required by the principal stakeholders, that is, investors, have been partially met in this section of annual reports. The qualitative characteristics of ‘relevance’ and ‘supportability’ have been satisfied in more annual reports than those of ‘balance’ and ‘comparability’.

Relevance: 20.00%

Abstract:

The natural convection thermal boundary layer adjacent to an inclined flat plate and to the inclined walls of an attic space subject to instantaneous and ramp heating and cooling is investigated. A scaling analysis has been performed to describe the flow behaviour and heat transfer. Major scales quantifying the flow velocity, flow development time, heat transfer and the thermal and viscous boundary-layer thicknesses at different stages of the flow development are established. Scaling relations for the heating-up and cooling-down times and the heat transfer rates are also reported for the case of the attic space. The scaling relations have been verified by numerical simulations over a wide range of parameters. A periodic temperature boundary condition is also considered to show the flow features in the attic space over diurnal cycles.
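
For orientation, the classical laminar natural-convection boundary-layer scales for a heated plate at Prandtl numbers of order one take the form below. These are standard textbook relations, shown only to illustrate what such scales look like; they are not necessarily the specific relations derived in this work, which also account for inclination, ramp heating and the attic geometry.

```latex
% Classical laminar natural-convection scales for a heated plate (Pr ~ 1);
% generic textbook relations, not the specific scales derived in the paper.
\[
  \mathrm{Ra}_x = \frac{g \beta \, \Delta T \, x^{3}}{\nu \alpha},
  \qquad
  \delta_T \sim x \, \mathrm{Ra}_x^{-1/4},
  \qquad
  \mathrm{Nu}_x \sim \frac{x}{\delta_T} \sim \mathrm{Ra}_x^{1/4},
\]
where $x$ is the distance along the plate, $\Delta T$ the imposed temperature
difference, $\beta$ the thermal expansion coefficient, $\nu$ the kinematic
viscosity and $\alpha$ the thermal diffusivity.
```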

Relevance: 20.00%

Abstract:

The design of artificial intelligence in computer games is an important component of a player's gameplay experience. As games become more life-like and interactive, the need for more realistic game AI will increase. This is particularly the case for AI that simulates how human players act, behave and make decisions. The purpose of this research is to establish a model of player-like behavior that may be used to inform the design of artificial intelligence so that it more accurately mimics a player's decision-making process. The research uses a qualitative analysis of player opinions and reactions while playing a first-person shooter video game, together with recordings of their in-game actions, speech and facial characteristics. The initial studies provide player data that has been used to design a model of how a player behaves.
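
To illustrate the kind of decision model such player data could inform, the sketch below shows a simple utility-based action selector for a first-person-shooter bot. The actions, utility terms and weights are invented purely for illustration; the research described above derives its behaviour model from observed player data rather than from hand-tuned rules like these.

```python
import random
from dataclasses import dataclass

@dataclass
class GameState:
    health: float          # 0..1
    ammo: float            # 0..1
    enemy_visible: bool
    enemy_distance: float  # metres; only meaningful if enemy_visible

def choose_action(state: GameState) -> str:
    """Score each candidate action and pick the (noisily) best one.

    The utility terms and weights here are invented for illustration;
    a data-driven model would fit them to observed player behaviour.
    """
    utilities = {
        "retreat": (1.0 - state.health) * 0.9,
        "reload":  (1.0 - state.ammo) * 0.6,
        "attack":  (0.9 / (1.0 + state.enemy_distance / 15.0))
                   if state.enemy_visible and state.ammo > 0.2 else 0.0,
        "explore": 0.3 if not state.enemy_visible else 0.05,
    }
    # small random jitter so the bot is not perfectly predictable,
    # mimicking the variability seen in human play
    noisy = {a: u + random.uniform(0.0, 0.1) for a, u in utilities.items()}
    return max(noisy, key=noisy.get)

print(choose_action(GameState(health=0.35, ammo=0.8,
                              enemy_visible=True, enemy_distance=12.0)))
```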

Relevance: 20.00%

Abstract:

Virtual environments can provide, through digital games and online social interfaces, extremely exciting forms of interactive entertainment. Because of their capability to display and manipulate information in natural and intuitive ways, such environments have found extensive applications in decision support, education and training in the health and science domains, amongst others. Currently, the burden of validating both the interactive functionality and the visual consistency of virtual environment content is carried entirely by developers and play-testers. While considerable research has been conducted into assisting the design of virtual world content and mechanics, to date only limited contributions have been made regarding the automatic testing of the underpinning graphics software and hardware. The aim of this thesis is to determine whether the correctness of the images generated by a virtual environment can be quantitatively defined, and automatically measured, in order to facilitate the validation of the content. In an attempt to provide an environment-independent definition of visual consistency, a number of classification approaches were developed. First, a novel model-based object description was proposed in order to enable reasoning about the color and geometry changes of virtual entities during a play-session. From such an analysis, two view-based connectionist approaches were developed to map from the geometry and color spaces to a single, environment-independent, geometric transformation space; we used this mapping to predict the correct visualization of the scene. Finally, an appearance-based aliasing detector was developed to show how incorrectness, too, can be quantified for debugging purposes. Since computer games rely heavily on highly complex and interactive virtual worlds, they provide an excellent test bed against which to develop, calibrate and validate our techniques. Experiments were conducted on a game engine and other virtual world prototypes to determine the applicability and effectiveness of our algorithms. The results show that quantifying visual correctness in virtual scenes is a feasible enterprise, and that effective automatic bug detection can be performed through the techniques we have developed. We expect these techniques to find application in large 3D games and virtual world studios that require a scalable solution for testing their virtual world software and digital content.
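
As a minimal illustration of how image correctness can be quantified in an automated test, the sketch below scores a rendered frame against a reference frame with a mean per-pixel difference and applies a pass/fail tolerance. This is a generic stand-in, not the connectionist classifiers or the aliasing detector developed in the thesis; the tolerance value and the synthetic frames are invented for the example.

```python
import numpy as np

def frame_difference_score(reference: np.ndarray, rendered: np.ndarray) -> float:
    """Mean absolute per-pixel difference between two RGB frames, in [0, 1].

    A simple stand-in for a visual-consistency metric: 0 means identical
    frames, values near 1 mean completely different images.
    """
    if reference.shape != rendered.shape:
        raise ValueError("frames must have the same resolution and channels")
    ref = reference.astype(np.float64) / 255.0
    ren = rendered.astype(np.float64) / 255.0
    return float(np.abs(ref - ren).mean())

def check_frame(reference, rendered, tolerance=0.02):
    """Flag a rendered frame whose deviation from the reference exceeds tolerance."""
    score = frame_difference_score(reference, rendered)
    return score <= tolerance, score

# toy usage with synthetic 64x64 frames
ref = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
ok, score = check_frame(ref, ref.copy())   # identical frames -> passes
bad = ref.copy()
bad[::2] = 255                             # corrupt every other row
ok2, score2 = check_frame(ref, bad)        # corrupted frame -> fails
```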

Relevance: 20.00%

Abstract:

Purpose: Colorectal cancer patients diagnosed with stage I or II disease are not routinely offered adjuvant chemotherapy following resection of the primary tumor. However, up to 10% of stage I and 30% of stage II patients relapse within 5 years of surgery from recurrent or metastatic disease. The aim of this study was to determine if tumor-associated markers could detect disseminated malignant cells and so identify a subgroup of patients with early-stage colorectal cancer that were at risk of relapse. Experimental Design: We recruited consecutive patients undergoing curative resection for early-stage colorectal cancer. Immunobead reverse transcription-PCR of five tumor-associated markers (carcinoembryonic antigen, laminin γ2, ephrin B4, matrilysin, and cytokeratin 20) was used to detect the presence of colon tumor cells in peripheral blood and within the peritoneal cavity of colon cancer patients perioperatively. Clinicopathologic variables were tested for their effect on survival outcomes in univariate analyses using the Kaplan-Meier method. A multivariate Cox proportional hazards regression analysis was done to determine whether detection of tumor cells was an independent prognostic marker for disease relapse. Results: Overall, 41 of 125 (32.8%) early-stage patients were positive for disseminated tumor cells. Patients who were marker positive for disseminated cells in post-resection lavage samples showed a significantly poorer prognosis (hazard ratio, 6.2; 95% confidence interval, 1.9-19.6; P = 0.002), and this was independent of other risk factors. Conclusion: The markers used in this study identified a subgroup of early-stage patients at increased risk of relapse post-resection for primary colorectal cancer. This method may be considered as a new diagnostic tool to improve the staging and management of colorectal cancer. © 2006 American Association for Cancer Research.
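
The survival analyses described above follow a standard workflow: univariate Kaplan-Meier comparisons followed by a multivariate Cox proportional hazards model. A minimal sketch of that workflow is shown below using the lifelines Python library, which is an assumed tool choice rather than one named in the paper, and with a small made-up data frame standing in for the study's 125 patients.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

# hypothetical relapse data: time in months, event = relapse observed,
# marker = 1 if disseminated tumour cells were detected post-resection
df = pd.DataFrame({
    "time":     [12, 30, 45,  8, 60, 24, 50, 15, 36, 48, 20, 55],
    "event":    [ 1,  0,  1,  1,  0,  1,  0,  0,  1,  0,  1,  0],
    "marker":   [ 1,  0,  0,  1,  0,  1,  0,  1,  1,  0,  0,  1],
    "stage_ii": [ 1,  0,  1,  1,  0,  1,  0,  1,  0,  1,  1,  0],
})

# univariate view: Kaplan-Meier survival by marker status
kmf = KaplanMeierFitter()
for status, group in df.groupby("marker"):
    kmf.fit(group["time"], event_observed=group["event"], label=f"marker={status}")
    print(status, kmf.median_survival_time_)

# multivariate Cox proportional hazards model: is the marker an
# independent predictor of relapse after adjusting for stage?
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()   # hazard ratios with confidence intervals
```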

Relevance: 20.00%

Abstract:

Modern technology now has the ability to generate large datasets over space and time. Such data typically exhibit high autocorrelation over all dimensions. The field trial data motivating the methods of this paper were collected to examine the behaviour of traditional cropping and to determine a cropping system which could maximise water use for grain production while minimising leakage below the crop root zone. They consist of moisture measurements made at 15 depths across 3 rows and 18 columns, in the lattice framework of an agricultural field. Bayesian conditional autoregressive (CAR) models are used to account for local site correlations. Conditional autoregressive models have not been widely used in analyses of agricultural data, and this paper serves to illustrate their usefulness in this field, along with the ease of implementation in WinBUGS, a freely available software package. The innovation is the fitting of separate conditional autoregressive models for each depth layer, the ‘layered CAR model’, while simultaneously estimating depth profile functions for each site treatment. Modelling interest also lay in how best to model the treatment-effect depth profiles, and in the choice of neighbourhood structure for the spatial autocorrelation model. The favoured model fitted the treatment effects as splines over depth, treated depth, the basis for the regression model, as measured with error, and fitted CAR neighbourhood models by depth layer. It is hierarchical, with separate conditional autoregressive spatial variance components at each depth, and its fixed terms involve an errors-in-measurement model that treats depth errors as interval-censored measurement error. The Bayesian framework permits transparent specification and easy comparison of the various complex models considered.
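
To make the spatial structure concrete, the sketch below builds a first-order (rook) neighbourhood matrix for one 3 x 18 depth layer of the field lattice and draws a spatial effect from a proper CAR prior with precision Q = tau * (D - rho * W). This is a generic numpy illustration of the CAR idea with invented tau and rho values, not the layered WinBUGS model fitted in the paper.

```python
import numpy as np

def lattice_adjacency(n_rows, n_cols):
    """First-order (rook) neighbourhood matrix for an n_rows x n_cols lattice."""
    n = n_rows * n_cols
    W = np.zeros((n, n))
    for r in range(n_rows):
        for c in range(n_cols):
            i = r * n_cols + c
            if c + 1 < n_cols:                 # east neighbour
                W[i, i + 1] = W[i + 1, i] = 1
            if r + 1 < n_rows:                 # south neighbour
                W[i, i + n_cols] = W[i + n_cols, i] = 1
    return W

def sample_car(W, tau=1.0, rho=0.9, seed=0):
    """Draw one spatial effect vector from a proper CAR prior.

    Precision matrix Q = tau * (D - rho * W), with D = diag(row sums of W);
    |rho| < 1 keeps Q positive definite.
    """
    rng = np.random.default_rng(seed)
    D = np.diag(W.sum(axis=1))
    Q = tau * (D - rho * W)
    cov = np.linalg.inv(Q)                     # fine for a small lattice
    cov = (cov + cov.T) / 2                    # enforce exact symmetry
    return rng.multivariate_normal(np.zeros(len(W)), cov)

# toy usage: one spatial layer of a 3 x 18 field lattice, as in the trial
W = lattice_adjacency(3, 18)
phi = sample_car(W, tau=2.0, rho=0.95)
```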

Relevance: 20.00%

Abstract:

Under the design-build (DB) procurement arrangement, the Request For Proposal (RFP) is the document in which an owner sets out its requirements and conveys the project scope to DB contractors. Owners should provide an appropriate level of design in DB RFPs to adequately describe their requirements without compromising the prospects for innovation. This paper examines and compares the different levels of owner-provided design in DB RFPs through a content analysis of 84 RFPs for public DB projects advertised between 2000 and 2010, with an aggregate contract value of over $5.4 billion. A statistical analysis was also conducted to explore the relationship between the proportion of owner-provided design and other project information, including project type, advertisement time, project size, contractor selection method, procurement process and contract type. The results show that in the majority (64.8%) of the RFPs the owner provides less than 10% of the design. The proportion of owner-provided design has a significant association with project type, project size, contractor selection method and contract type. In addition, owners have generally provided less design in recent years than previously. The research findings provide owners with perspectives from which to determine the appropriate level of owner-provided design in DB RFPs.
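
The associations reported above are the kind that a chi-square test of independence on a contingency table can assess. A minimal sketch of such a test is shown below using scipy; the cross-tabulation of design level against project type is made up for illustration and is not the paper's actual 84-RFP dataset.

```python
import numpy as np
from scipy.stats import chi2_contingency

# hypothetical contingency table: owner-provided design level (rows)
# versus project type (columns); counts are illustrative only.
table = np.array([
    # buildings  civil/infrastructure
    [22,         31],   # < 10% owner-provided design
    [10,          9],   # 10-30%
    [ 7,          5],   # > 30%
])

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.3f}")
```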