122 results for Dynamic data analysis


Relevance: 90.00%

Publisher:

Abstract:

This thesis proposes three novel models that extend the statistical methodology for motor unit number estimation, a clinical neurology technique. Motor unit number estimation is important in the treatment of degenerative muscular diseases and, potentially, spinal injury. Additionally, a recent and as yet untested statistic for statistical model choice is found to be a practical alternative for larger datasets. The existing methods for dose finding in dual-agent clinical trials are found to be suitable only for designs of modest dimensions. The model choice case study is the first of its kind and contains interesting results based on so-called unit information prior distributions.

Relevance: 90.00%

Publisher:

Abstract:

Structural damage detection using measured dynamic data for pattern recognition is a promising approach. These pattern recognition techniques use artificial neural networks and genetic algorithms to match pattern features. In this study, an artificial neural network–based damage detection method using frequency response functions is presented, which can effectively detect nonlinear damage for a given level of excitation. The main objective of this article is to present a feasible method for structural vibration–based health monitoring that reduces the dimension of the initial frequency response function data, transforms it into new damage indices, and employs an artificial neural network to detect different levels of nonlinearity from the recognized damage patterns. Experimental data from the three-story bookshelf structure at Los Alamos National Laboratory are used to validate the proposed method. Results showed that levels of nonlinear damage can be identified precisely by the trained artificial neural networks. Moreover, artificial neural networks trained with summation frequency response functions gave more precise damage detection results than networks trained with individual frequency response functions. The proposed method is therefore a promising tool for assessing real structures: it shows reliable results on experimental data for nonlinear damage detection, which makes the frequency response function–based method convenient for structural health monitoring.
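The preprocessing pipeline described above (collapse FRFs into a summation FRF, then reduce its dimension to a handful of damage indices) can be sketched as follows. This is a minimal illustration on synthetic placeholder data; the specimen counts, channel layout, and the paper's actual reduction and ANN details are all assumptions, and PCA via SVD stands in for whatever reduction the authors used.

```python
import numpy as np

rng = np.random.default_rng(0)
n_specimens, n_channels, n_freqs = 12, 4, 256

# |FRF| magnitudes for each specimen/channel (synthetic placeholder data)
frf = rng.random((n_specimens, n_channels, n_freqs))

# Summation FRF: sum the magnitudes over all measurement channels
summation_frf = frf.sum(axis=1)             # shape (n_specimens, n_freqs)

# Dimension reduction via PCA (SVD of the centred summation FRFs);
# the leading scores serve as compact "damage indices"
centred = summation_frf - summation_frf.mean(axis=0)
U, s, Vt = np.linalg.svd(centred, full_matrices=False)
k = 3                                        # number of damage indices kept
damage_indices = centred @ Vt[:k].T          # shape (n_specimens, k)
```

The resulting low-dimensional indices would then be fed to a classifier (an artificial neural network in the paper) trained on recognized damage patterns.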

Relevance: 90.00%

Publisher:

Abstract:

Background: Value for money (VfM) on collaborative construction projects depends on the learning capabilities of the organisations and people involved. Within the context of infrastructure delivery, there is little research on the impact of organisational learning capability on project value. The literature contains a multiplicity of often untestable definitions of organisational learning abilities. This paper defines learning capability as a dynamic capability that participant organisations purposely develop to add value to collaborative projects. The paper reports on a literature review and proposes a framework that conceptualises learning capability in order to explore the topic. This work is the first phase of a large-scale national survey funded by the Alliancing Association of Australasia and the Australian Research Council. Methodology: A desk-top review of leading journals in strategic management, strategic alliances and construction management, as well as recent government documents and industry guidelines, was undertaken to synthesise, conceptualise and operationalise the concept of learning capability. The study primarily draws on the theoretical perspectives of the resource-based view of the firm (e.g. Barney 1991; Wernerfelt 1984), absorptive capacity (e.g. Cohen and Levinthal 1990; Zahra and George 2002) and dynamic capabilities (e.g. Helfat et al. 2007; Teece et al. 1997; Winter 2003). Content analysis of the literature was undertaken to identify key learning routines. Content analysis is a commonly used methodology in the social sciences and provides rich data through the systematic and objective review of literature (Krippendorff 2004). NVivo 9, a qualitative data analysis software package, was used to assist in this process.
Findings and Future Research: The review resulted in a framework that conceptualises learning capability in three phases: (1) exploratory learning, (2) transformative learning and (3) exploitative learning. These phases combine internal and external learning routines to influence project performance outcomes, and thus the VfM delivered under collaborative contracts. Within these phases sit eight categories of learning capability: knowledge articulation, identification, acquisition, dissemination, codification, internalisation, transformation and application. The learning routines within each category will be disaggregated in future research as the basis for measurable items in a large-scale survey study. The survey will examine the extent to which various learning routines influence project outcomes, as well as the relationships between them. This will involve identifying the routines that exist within organisations in the construction industry, their resourcing and rate of renewal, together with the extent of use and perceived value within the organisation. The target population is currently estimated to be around 1,000 professionals with experience in relational contracting in Australia. This future research will build on the learning capability framework to provide data that will assist construction organisations seeking to maximise VfM on construction projects.

Relevance: 90.00%

Publisher:

Abstract:

In this paper we propose a novel approach to multi-action recognition that performs joint segmentation and classification. The approach models each action with a Gaussian mixture model over robust low-dimensional action features. Segmentation is achieved by performing classification on overlapping temporal windows, which are then merged to produce the final result. This approach is considerably less complicated than previous methods based on dynamic programming or computationally expensive hidden Markov models (HMMs). Initial experiments on a stitched version of the KTH dataset show that the proposed approach achieves an accuracy of 78.3%, outperforming a recent HMM-based approach which obtained 71.2%.
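The windowed classify-then-merge scheme can be sketched as follows. For brevity this sketch uses a single Gaussian per action on synthetic 1-D features rather than the paper's Gaussian mixtures on KTH features; the model parameters, window size, and step are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def loglik(w, mu, sd):
    """Per-sample Gaussian log-likelihood of window w under one action model."""
    return -0.5 * np.log(2 * np.pi * sd**2) - (w - mu)**2 / (2 * sd**2)

# Stitched sequence: 200 frames of action 0, then 200 frames of action 1
seq = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(5.0, 1.0, 200)])
models = [(0.0, 1.0), (5.0, 1.0)]   # (mean, std) per action, assumed known

win, step = 50, 25                   # overlapping temporal windows
labels, centers = [], []
for start in range(0, len(seq) - win + 1, step):
    w = seq[start:start + win]
    scores = [loglik(w, mu, sd).mean() for mu, sd in models]
    labels.append(int(np.argmax(scores)))
    centers.append(start + win // 2)

# Merge consecutive windows with the same label into action segments
segments = []                        # each entry: [first_center, last_center, label]
for c, lab in zip(centers, labels):
    if segments and segments[-1][2] == lab:
        segments[-1][1] = c
    else:
        segments.append([c, c, lab])
```

Joint segmentation and classification fall out of the same pass: each window's argmax gives the class, and merging adjacent identical labels gives the segment boundaries.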

Relevance: 90.00%

Publisher:

Abstract:

The importance of a thorough and systematic literature review has long been recognised across academic domains as critical to the foundation of new knowledge and theory evolution. Driven by an exponentially growing body of knowledge in the IS discipline, there has been a recent influx of guidance on how to conduct a literature review. As literature reviews emerge as a standalone research method, these method-focused guidelines are of increasing interest and are being accepted at top-tier IS publication outlets. Nevertheless, the finer details that justify the selected content, and the effective presentation of supporting data, have not been widely discussed in these method papers to date. This paper addresses this gap by exploring the concept of ‘literature profiling’, arguing that it is a key aspect of a comprehensive literature review. The study establishes the importance of profiling for managing aspects such as quality assurance, transparency and the mitigation of selection bias, and then discusses how profiling can provide a valid basis for data analysis based on the attributes of the selected literature. In essence, this study conducts an archival analysis of literature (predominantly from the IS domain) to present its main argument, the value of literature profiling, with supporting exemplary illustrations.

Relevance: 90.00%

Publisher:

Abstract:

Big data analysis in the healthcare sector is still in its early stages compared with other business sectors, for numerous reasons, among them:
- accommodating the volume, velocity and variety of healthcare data;
- identifying platforms that can examine data from multiple sources, such as clinical records, genomic data, financial systems, and administrative systems.
The Electronic Health Record (EHR) is a key information resource for big data analysis and is also composed of varied co-created values. Successful integration and crossing of different subfields of healthcare data, such as biomedical informatics and health informatics, could lead to huge improvements for the end users of the health care system, i.e. the patients.

Relevance: 90.00%

Publisher:

Abstract:

This review focuses on the impact of chemometrics on resolving data sets collected from investigations of the interactions of small molecules with biopolymers. These samples have been analyzed with various instrumental techniques, such as fluorescence, ultraviolet–visible spectroscopy, and voltammetry. The impact of two powerful and demonstrably useful multivariate methods for the resolution of complex data, multivariate curve resolution–alternating least squares (MCR–ALS) and parallel factor analysis (PARAFAC), is highlighted through applications involving the interactions of small molecules with the biopolymers serum albumin and deoxyribonucleic acid. The outcomes illustrate that the chemometric methods extracted significant information that was unattainable by simple, univariate data analysis. In addition, although the techniques used to collect data were confined to ultraviolet–visible spectroscopy, fluorescence spectroscopy, circular dichroism, and voltammetry, data profiles produced by other techniques may also be processed. Topics considered include binding sites and modes, cooperative and competitive small-molecule binding, kinetics and thermodynamics of ligand binding, and the folding and unfolding of biopolymers. The applications of the MCR–ALS and PARAFAC methods reviewed were primarily published between 2008 and 2013.
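The core of MCR–ALS is a bilinear decomposition D ≈ C Sᵀ of the data matrix into concentration profiles C and spectra S, alternating non-negativity-constrained least squares for each factor. The following is a minimal sketch on synthetic data (the profiles, spectra, and the simple clipping used to enforce non-negativity are all illustrative assumptions, not the review's protocol; real MCR–ALS implementations use proper constrained solvers and handle rotational ambiguity explicitly).

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic two-component system: concentration profiles C (time x species)
# and spectra S (wavelength x species), e.g. a simple A -> B reaction
t = np.linspace(0, 1, 50)
C_true = np.column_stack([np.exp(-3 * t), 1 - np.exp(-3 * t)])
wl = np.arange(80)
S_true = np.column_stack([np.exp(-((wl - 20) / 8.0)**2),
                          np.exp(-((wl - 55) / 10.0)**2)])

D = C_true @ S_true.T      # "measured" data matrix (time x wavelength)

# MCR-ALS: alternate least squares for S and C, clipping to non-negativity
C = np.abs(rng.random((50, 2)))              # rough initial guess
for _ in range(200):
    S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0].T, 0, None)
    C = np.clip(np.linalg.lstsq(S, D.T, rcond=None)[0].T, 0, None)

residual = np.linalg.norm(D - C @ S.T) / np.linalg.norm(D)
```

The recovered C and S reproduce D closely, though (as in any MCR analysis) they are only determined up to scaling and, in general, rotational ambiguity.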

Relevance: 90.00%

Publisher:

Abstract:

Increasingly large-scale applications are generating an unprecedented amount of data. However, the growing gap between computation and I/O capacity on high-end computing (HEC) machines creates a severe bottleneck for data analysis. Instead of moving data from its source to the output storage, in-situ analytics processes output data while simulations are running. However, in-situ data analysis incurs far more contention for computing resources with the simulations, and such contention severely degrades simulation performance on HEC machines. Since different data processing strategies have different impacts on performance and cost, there is a consequent need for flexibility in the location of data analytics. In this paper, we explore and analyze several potential data-analytics placement strategies along the I/O path. To find the best strategy for reducing data movement in a given situation, we propose a flexible data analytics (FlexAnalytics) framework. Based on this framework, a FlexAnalytics prototype system is developed for analytics placement. The FlexAnalytics system enhances the scalability and flexibility of the current I/O stack on HEC platforms and is useful for data pre-processing, runtime data analysis and visualization, as well as for large-scale data transfer. Two use cases, scientific data compression and remote visualization, have been applied in the study to verify the performance of FlexAnalytics. Experimental results demonstrate that the FlexAnalytics framework increases data transfer bandwidth and improves the application's end-to-end transfer performance.
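The placement trade-off can be made concrete with a back-of-envelope cost model: run analytics in situ (paying a simulation slowdown) or move the output and analyze it elsewhere (paying transfer, possibly reduced by compression). This model, its function names, and all numbers are illustrative assumptions, not the FlexAnalytics paper's actual decision logic.

```python
def transfer_seconds(gb, bandwidth_gbps, compress_ratio=1.0, compress_s=0.0):
    """Time to move `gb` of output, optionally compressed first."""
    return compress_s + (gb / compress_ratio) / bandwidth_gbps

def choose_placement(output_gb, net_gbps, insitu_slowdown_s):
    """Pick the cheaper of in-situ analysis vs. move-then-analyse."""
    move_cost = transfer_seconds(output_gb, net_gbps)
    return "in-situ" if insitu_slowdown_s < move_cost else "offline"
```

For example, 100 GB over a 1 GB/s link costs 100 s to move, so a 30 s in-situ slowdown wins; with a 10 GB output on a 10 GB/s link the transfer is cheap and offline analysis wins. A real system would fold compression time and downstream visualization needs into the same comparison.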

Relevance: 90.00%

Publisher:

Abstract:

Drawing on multimodal texts produced by an Indigenous school community in Australia, I apply critical race theory and multimodal analysis (Jewitt, 2011) to decolonise digital heritage practices for Indigenous students. This study focuses on the particular ways in which students’ counter-narratives about race were embedded in multimodal and digital design in the development of a digital cultural heritage (Giaccardi, 2012). Data analysis involved applying multimodal analysis to the students’ Gamis, following social semiotic categories and principles theorised by Kress and Bezemer (2008), and Jewitt (2006, 2011). This includes attending to the following semiotic elements: visual design, movement and gesture, gaze, and recorded speech, and their interrelationships. The analysis also draws on critical race theory to interpret the students’ representations of race. In particular, the multimodal texts were analysed as a site for students’ views of Indigenous oppression in relation to the colonial powers and ownership of the land in Australian history (Ladson-Billings, 2009). Pedagogies that explore counter-narratives of cultural heritage in the official curriculum can encourage students to reframe their own racial identity, while challenging dominant white, historical narratives of colonial conquest, race, and power (Gutierrez, 2008). The children’s multimodal “Gami” videos, created with the iPad application, Tellagami, enabled the students to imagine hybrid, digital social identities and perspectives of Australian history that were tied to their Indigenous cultural heritage (Kamberelis, 2001).

Relevance: 90.00%

Publisher:

Abstract:

This paper proposes a linear quantile regression analysis method for longitudinal data that combines between- and within-subject estimating functions, thereby incorporating the correlations between repeated measurements. The proposed method therefore yields more efficient parameter estimates than estimating functions based on an independence working model. To reduce the computational burden, the induced smoothing method is introduced to obtain parameter estimates and their variances. Under some regularity conditions, the estimators derived by the induced smoothing method are consistent and asymptotically normal. A number of simulation studies are carried out to evaluate the performance of the proposed method. The results indicate that the efficiency gain is substantial, especially when strong within-subject correlations exist. Finally, a dataset from audiology growth research is used to illustrate the proposed methodology.
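The computational appeal of induced smoothing is that the non-smooth indicator in the quantile estimating function, Σᵢ xᵢ(τ − I(yᵢ ≤ xᵢᵀβ)) = 0, is replaced by a normal CDF, making Newton-type iteration possible. A minimal sketch on simulated data follows; it illustrates only the smoothing idea on an ordinary (cross-sectional) quantile regression, omitting the paper's between/within-subject weighting, and the smoothing scale h is a fixed assumption rather than the paper's data-driven choice.

```python
import math
import numpy as np

rng = np.random.default_rng(3)

# Simulated data with symmetric errors, so the median line equals the mean line
n = 2000
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)      # true coefficients (1, 2)
X = np.column_stack([np.ones(n), x])
tau, h = 0.5, 0.3                            # quantile level, smoothing scale

Phi = np.vectorize(lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0))))

b = np.linalg.lstsq(X, y, rcond=None)[0]     # OLS starting value
for _ in range(25):                          # Newton iterations
    z = (X @ b - y) / h
    G = X.T @ (tau - Phi(z))                 # smoothed estimating function
    phi = np.exp(-z**2 / 2) / math.sqrt(2 * math.pi)
    J = -(X.T * (phi / h)) @ X               # Jacobian of G (smooth in b)
    b = b - np.linalg.solve(J, G)
```

Because the smoothed estimating function is differentiable, the same Jacobian also feeds directly into plug-in variance estimation, which is where the method's computational savings come from.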

Relevance: 90.00%

Publisher:

Abstract:

For clustered survival data, the traditional Gehan-type estimator is asymptotically equivalent to using only the between-cluster ranks; the within-cluster ranks are ignored. The contribution of this paper is twofold: (i) incorporating within-cluster ranks in censored data analysis, and (ii) applying the induced smoothing of Brown and Wang (2005, Biometrika) for computational convenience. Asymptotic properties of the resulting estimating functions are given. We also carry out numerical studies to assess the performance of the proposed approach and conclude that it can lead to much improved estimators when strong clustering effects exist. A dataset from a litter-matched tumorigenesis experiment is used for illustration.
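To make the Gehan-type rank idea concrete, the sketch below fits an accelerated failure time slope by smoothing the pairwise rank indicator with a normal CDF, in the spirit of Brown and Wang's induced smoothing. It is only an illustration on simulated, uncensored, unclustered data: censoring weights, the paper's within/between-cluster rank combination, and the fixed smoothing scale h are all simplifying assumptions. (Rank estimating functions do not identify an intercept, so only the slope is estimated.)

```python
import math
import numpy as np

rng = np.random.default_rng(4)

n = 120
x = rng.normal(size=n)
logT = 0.5 + 1.5 * x + rng.gumbel(size=n)    # log failure times, true slope 1.5

Phi = np.vectorize(lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0))))
h = 0.5                                       # smoothing scale (assumed fixed)

def smoothed_gehan_grad(b1):
    """Gradient of the smoothed Gehan loss at slope b1."""
    e = logT - b1 * x                         # residuals e_i(b)
    de = e[:, None] - e[None, :]              # e_i - e_j for all pairs
    dx = x[:, None] - x[None, :]              # x_i - x_j for all pairs
    return (Phi(-de / h) * dx).sum() / n**2   # Phi smooths I(e_i < e_j)

# The Gehan loss is convex, so plain gradient descent suffices here
b1 = 0.0
for _ in range(200):
    b1 -= 1.0 * smoothed_gehan_grad(b1)
```

Smoothing makes the pairwise estimating function differentiable, which is exactly what enables the convenient variance estimation the abstract refers to.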

Relevance: 90.00%

Publisher:

Abstract:

Statistical methods are often used to analyse commercial catch and effort data to provide standardised fishing effort and/or a relative index of fish abundance for input into stock assessment models. Achieving reliable results has proved difficult in Australia's Northern Prawn Fishery (NPF), due to a combination of such factors as the biological characteristics of the animals, some aspects of the fleet dynamics, and the changes in fishing technology. For this set of data, we compared four modelling approaches (linear models, mixed models, generalised estimating equations, and generalised linear models) with respect to the outcomes of the standardised fishing effort or the relative index of abundance. We also varied the number and form of vessel covariates in the models. Within a subset of data from this fishery, modelling correlation structures did not alter the conclusions from simpler statistical models. The random-effects models also yielded similar results. This is because the estimators are all consistent even if the correlation structure is mis-specified, and the data set is very large. However, the standard errors from different models differed, suggesting that different methods have different statistical efficiency. We suggest that there is value in modelling the variance function and the correlation structure, to make valid and efficient statistical inferences and gain insight into the data. We found that fishing power was separable from the indices of prawn abundance only when we offset the impact of vessel characteristics at assumed values from external sources. This may be due to the large degree of confounding within the data, and the extreme temporal changes in certain aspects of individual vessels, the fleet and the fleet dynamics.
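The abstract's observation that point estimates agreed across methods while standard errors differed can be illustrated with a small simulation: with a shared vessel effect in the data, naive OLS standard errors understate the uncertainty of cluster-constant quantities, while a cluster-robust (sandwich) variance does not. Everything below is synthetic; the "vessels", effect sizes, and sample sizes are illustrative assumptions, not NPF data.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated clustered catch-effort-style data: 40 "vessels", 25 records each,
# with a vessel-level random effect that the naive OLS variance ignores
G, m = 40, 25
vessel = np.repeat(np.arange(G), m)
x = rng.normal(size=G * m)
u = rng.normal(scale=1.0, size=G)[vessel]     # shared vessel effect
y = 2.0 + 0.8 * x + u + rng.normal(size=G * m)

X = np.column_stack([np.ones(G * m), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta
XtX_inv = np.linalg.inv(X.T @ X)

# Naive OLS variance (i.i.d. assumption)
naive_var = resid @ resid / (len(y) - 2) * XtX_inv

# Cluster-robust sandwich variance: aggregate score contributions per vessel
meat = np.zeros((2, 2))
for g in range(G):
    idx = vessel == g
    sg = X[idx].T @ resid[idx]
    meat += np.outer(sg, sg)
cluster_var = XtX_inv @ meat @ XtX_inv

naive_se = np.sqrt(np.diag(naive_var))
robust_se = np.sqrt(np.diag(cluster_var))
```

Both variance estimators are built around the same consistent point estimate, matching the abstract's point that consistency survives correlation mis-specification while efficiency and standard errors do not.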

Relevance: 90.00%

Publisher:

Abstract:

The article describes a generalized estimating equations approach that was used to investigate the impact of technology on vessel performance in a trawl fishery during 1988-96, while accounting for spatial and temporal correlations in the catch-effort data. Robust estimation of parameters in the presence of several levels of clustering depended more on the choice of cluster definition than on the choice of correlation structure within the cluster. Models with smaller cluster sizes produced stable results, while models with larger cluster sizes, that may have had complex within-cluster correlation structures and that had within-cluster covariates, produced estimates sensitive to the correlation structure. The preferred model arising from this dataset assumed that catches from a vessel were correlated in the same years and the same areas, but independent in different years and areas. The model that assumed catches from a vessel were correlated in all years and areas, equivalent to a random effects term for vessel, produced spurious results. This was an unexpected finding that highlighted the need to adopt a systematic strategy for modelling. The article proposes a modelling strategy of selecting the best cluster definition first, and the working correlation structure (within clusters) second. The article discusses the selection and interpretation of the model in the light of background knowledge of the data and utility of the model, and the potential for this modelling approach to apply in similar statistical situations.

Relevance: 90.00%

Publisher:

Abstract:

Many educational researchers conducting studies in non-English-speaking settings attempt to report on their projects in English to boost their scholarly impact. This requires preparing and presenting translations of data collected from interviews and observations. This paper discusses the process and ethical considerations involved in this invisible methodological phase. The process includes activities, undertaken by the bilingual researcher as translator prior to data analysis and to its presentation, that convey participants' original meanings and that establish and fulfil translation ethics. The paper offers strategies to address these issues, identifies the most appropriate translation methods for qualitative studies, and suggests approaches for handling political issues when presenting such data.