911 results for Movement Data Analysis


Relevance:

100.00%

Publisher:

Abstract:

During the past few decades, the construction industry has experienced a series of changes, including the innovation of construction technologies and the enhancement of management strategies. These improvements should have had a considerable effect on industrial efficiency and productivity, but research is needed to establish whether the capital productivity levels of the construction industry have in fact improved correspondingly. This paper aims to develop an analysis procedure to measure capital productivity changes and to reasonably quantify the factors affecting productivity levels in the construction industry. Based on the data envelopment analysis method, this research develops a novel model for measuring capital productivity and applies it to the Australian construction industry. The numerical results indicate that average annual capital productivity levels in the construction industry are growing slowly in all Australian states and territories except Queensland and Western Australia. In addition, construction technologies are shown to have a close relationship with changes in capital productivity, according to temporal-spatial comparisons of productivity indices. The research findings are expected to inform policy and strategic decisions aimed at improving capital productivity performance.
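The abstract does not specify the paper's productivity model; as a minimal sketch of the underlying data envelopment analysis (DEA) idea, the classic input-oriented CCR model can be solved as one linear program per decision-making unit (a state or territory, say). The data and variable names below are purely illustrative, not taken from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_efficiency(inputs, outputs):
    """Input-oriented CCR DEA: efficiency score in (0, 1] for each DMU.

    inputs:  (n_dmu, n_in) array, e.g. capital stock per region
    outputs: (n_dmu, n_out) array, e.g. construction value added
    """
    n, m = inputs.shape
    _, s = outputs.shape
    scores = []
    for o in range(n):
        # variables: [theta, lambda_1 .. lambda_n]; minimize theta
        c = np.r_[1.0, np.zeros(n)]
        # sum_j lambda_j * x_ij <= theta * x_io   (inputs scaled down)
        A_in = np.hstack([-inputs[o][:, None], inputs.T])
        # sum_j lambda_j * y_rj >= y_ro           (outputs maintained)
        A_out = np.hstack([np.zeros((s, 1)), -outputs.T])
        res = linprog(c,
                      A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[np.zeros(m), -outputs[o]],
                      bounds=[(None, None)] + [(0, None)] * n)
        scores.append(res.fun)
    return np.array(scores)

# hypothetical data: 4 regions, 1 input (capital), 1 output (work done)
x = np.array([[10.0], [20.0], [30.0], [15.0]])
y = np.array([[5.0], [12.0], [12.0], [9.0]])
eff = dea_ccr_efficiency(x, y)  # regions with the best output/input ratio score 1.0
```

A Malmquist-style productivity change index, as used in studies of this kind, compares such scores across periods; the sketch above covers only the single-period efficiency step.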


Analysis and fusion of social measurements is important for understanding what shapes public opinion and the sustainability of global development. However, modeling data collected from social responses is challenging, as the data are typically complex and heterogeneous, taking the form of stated facts, subjective assessments, choices, preferences or any combination thereof. Model-wise, these responses are a mixture of data types including binary, categorical, multicategorical, continuous, ordinal, count and rank data. The challenge is therefore to effectively handle mixed data in a unified fusion framework in order to perform inference and analysis. To that end, this paper introduces eRBM (Embedded Restricted Boltzmann Machine) – a probabilistic latent variable model that can represent mixed data using a layer of hidden variables transparent across different types of data. The proposed model can comfortably support large-scale data analysis tasks, including distribution modelling, data completion, prediction and visualisation. We demonstrate these versatile features on several moderate and large-scale publicly available social survey datasets.
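The eRBM architecture and its type-specific visible units are not detailed in the abstract; the following is a minimal sketch of the standard binary restricted Boltzmann machine the model extends, trained with one-step contrastive divergence (CD-1) using mean-field probabilities. The toy "survey" data and all hyperparameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(V, n_hidden=2, lr=0.1, epochs=2000):
    """CD-1 training of a binary RBM (mean-field, no Gibbs sampling)."""
    n, d = V.shape
    W = 0.01 * rng.standard_normal((d, n_hidden))
    b = np.zeros(d)          # visible bias
    c = np.zeros(n_hidden)   # hidden bias
    for _ in range(epochs):
        h0 = sigmoid(V @ W + c)       # positive phase: hidden given data
        v1 = sigmoid(h0 @ W.T + b)    # one-step reconstruction
        h1 = sigmoid(v1 @ W + c)      # negative phase
        W += lr * (V.T @ h0 - v1.T @ h1) / n
        b += lr * (V - v1).mean(axis=0)
        c += lr * (h0 - h1).mean(axis=0)
    return W, b, c

def reconstruct(V, W, b, c):
    return sigmoid(sigmoid(V @ W + c) @ W.T + b)

# toy binary "survey responses" with two latent answering modes
V = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 1, 1], [0, 0, 1, 1]], float)
W, b, c = train_rbm(V)
err = np.mean((V - reconstruct(V, W, b, c)) ** 2)  # well below the 0.25 chance level
```

The eRBM generalises the visible layer so that each unit uses the likelihood appropriate to its type (Gaussian, multinomial, ordinal, etc.) while sharing the same hidden layer; that extension is not shown here.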


This article brings together the disparate worlds of dance practice, motion capture and statistical analysis. Digital technologies such as motion capture offer dance artists new processes for recording and studying dance movement. Statistical analysis of these data can reveal hidden patterns in movement in ways that are semantically ‘blind’, and are hence able to challenge accepted culturo-physical ‘grammars’ of dance creation. The potential benefit to dance artists is to open up new ways of understanding choreographic movement. However, quantitative analysis does not allow for the uncertainty inherent in emergent, artistic practices such as dance. This article uses motion capture and principal component analysis (PCA), a common statistical technique in human movement recognition studies, to examine contemporary dance movement, and explores how this analysis might be interpreted in an artistic context to generate a new way of looking at the nature and role of movement patterning in dance creation.
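The article's motion-capture pipeline is not given in the abstract; as a minimal sketch of PCA applied to pose data (one row of joint coordinates per frame), centering followed by SVD yields the principal movement components and their explained variance. The synthetic "dance" signal below is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def pca(frames, n_components=2):
    """PCA via SVD: rows are mocap frames, columns are joint coordinates."""
    X = frames - frames.mean(axis=0)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    scores = X @ Vt[:n_components].T        # per-frame component scores
    explained = s**2 / np.sum(s**2)         # variance ratio per component
    return scores, Vt[:n_components], explained[:n_components]

# synthetic motion: 200 frames, 6 coordinates driven mostly by one oscillation
t = np.linspace(0, 4 * np.pi, 200)
base = np.sin(t)[:, None] * np.array([1.0, 0.8, 0.6, 0.4, 0.2, 0.1])
frames = base + 0.05 * rng.standard_normal((200, 6))
scores, components, explained = pca(frames)  # first component dominates
```

In a choreographic reading, each component is a coordinated whole-body movement pattern, and the score time series shows when that pattern is active.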


This thesis addresses the problem of the academic identity of the area traditionally referred to as physical education. The study is a critical examination of the arguments for the justification of this area as an autonomous branch of knowledge. The investigation concentrates on a selected number of arguments. The data collection comprised articles, books and conference proceedings. The preliminary assessment of these materials resulted in a classification of the arguments into three groups. The first group comprises the arguments in favour of physical education as an academic discipline. The second includes the arguments supporting a science of sport. The third consists of the arguments in favour of a field of human movement study. The examination of these arguments produced the following results. (a) The area of physical education does not satisfy the conditions presupposed by the definition of an academic discipline, because the area does not form an integrated system of scientific theories. (b) The same difficulty emerges from the examination of the arguments for sport science: there is no science of sport because there is no integrated system of scientific theories related to sport. (c) The arguments in favour of a field of study yielded more productive results. However, difficulties arise from the definition of human movement. The analysis of this concept showed that its limits are not well demarcated, which makes it problematic to take human movement as the focus of a field of studies. These aspects led to the conclusion that such things as an academic discipline of physical education, a sport science and a field of human movement studies do not exist, at least not in the sense of autonomous branches of knowledge. This does not imply that a more integrated inquiry based on several disciplines is not possible and desirable. Such an inquiry would enable someone entering physical education to find a more organised structure of knowledge, with some generally accepted problem situations, procedures and theories on which to base professional practice.


Dimensionality reduction is employed in visual data analysis as a way of obtaining reduced spaces for high-dimensional data or of mapping data directly into 2D or 3D spaces. Although techniques have evolved to improve data segregation in reduced or visual spaces, they have limited capabilities for adjusting the results according to the user's knowledge. In this paper, we propose a novel approach to handling both dimensionality reduction and visualization of high-dimensional data that takes the user's input into account. It employs Partial Least Squares (PLS), a statistical tool that retrieves latent spaces focused on the discriminability of the data. The method uses a training set to build a highly precise model that can then be applied very effectively to a much larger data set. The reduced data set can be exhibited using various existing visualization techniques. The training data is important for encoding the user's knowledge into the loop. However, this work also devises a strategy for calculating PLS reduced spaces when no training data is available. The approach produces increasingly precise visual mappings as the user feeds back his or her knowledge, and is capable of working with small and unbalanced training sets.
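The paper's exact PLS formulation is not given in the abstract; as an illustrative sketch of the idea, the PLS-SVD variant builds discriminative projection directions from the cross-covariance between the (centered) features and a one-hot encoding of user-supplied labels. The training set and feature layout below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

def pls_svd_projection(X, labels, n_components=2):
    """Project X onto a discriminative low-dimensional space (PLS-SVD sketch).

    Weight vectors are the left singular vectors of X_c^T Y_c, where Y is a
    one-hot encoding of the class labels (the "user knowledge").
    """
    classes = np.unique(labels)
    Y = (labels[:, None] == classes[None, :]).astype(float)
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    U, _, _ = np.linalg.svd(Xc.T @ Yc, full_matrices=False)
    W = U[:, :n_components]
    return Xc @ W, W

# hypothetical labelled training set: two classes separated in 3 of 10 features
n = 100
labels = np.repeat([0, 1], n)
X = 0.5 * rng.standard_normal((2 * n, 10))
X[labels == 1, :3] += 2.0           # class 1 shifted in the first 3 features
Z, W = pls_svd_projection(X, labels)
gap = abs(Z[labels == 0, 0].mean() - Z[labels == 1, 0].mean())  # classes separate
```

The 2D scores `Z` can then be passed to any scatter-based visualization; refitting `W` after the user relabels points is what makes the mapping improve with feedback.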


Advances in biomedical signal acquisition systems for motion analysis have led to low-cost and ubiquitous wearable sensors which can be used to record movement data in different settings. This implies the potential availability of large amounts of quantitative data. It is then crucial to identify and extract the information of clinical relevance from the large amount of available data. This quantitative and objective information can be an important aid for clinical decision making. Data mining is the process of discovering such information in databases through data processing, selection of informative data, and identification of relevant patterns. The databases considered in this thesis store motion data from wearable sensors (specifically accelerometers) and clinical information (clinical data, scores, tests). The main goal of this thesis is to develop data mining tools which can provide quantitative information to the clinician in the field of movement disorders. This thesis focuses on motor impairment in Parkinson's disease (PD). Different databases related to Parkinson's subjects in different stages of the disease were considered; each database is characterized by the data recorded during a specific motor task performed by different groups of subjects. The data mining techniques used in this thesis are feature selection (used to find relevant information and to discard useless or redundant data), classification, clustering, and regression. The aims were to identify subjects at high risk for PD, characterize the differences between early PD subjects and healthy ones, characterize PD subtypes, and automatically assess the severity of symptoms in the home setting.
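The thesis's actual feature sets and classifiers are not specified in the abstract; as a sketch of the kind of pipeline described, the following extracts simple time-domain features from accelerometer windows and classifies them with a nearest-centroid rule. The sampling rate, signal model and tremor amplitude are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
fs = 100.0  # sampling rate in Hz (assumed)

def features(window):
    """Simple time-domain features from one accelerometer window."""
    return np.array([window.mean(),
                     window.std(),
                     np.mean(np.abs(np.diff(window)))])

def make_signal(tremor, n=500):
    """Synthetic accelerometer trace; PD rest tremor is typically ~4-6 Hz."""
    t = np.arange(n) / fs
    sig = 0.1 * rng.standard_normal(n)
    if tremor:
        sig += 0.8 * np.sin(2 * np.pi * 5.0 * t)
    return sig

# labelled feature vectors: 20 "healthy" windows, 20 "tremor" windows
X = np.array([features(make_signal(tremor=i >= 20)) for i in range(40)])
y = np.array([0] * 20 + [1] * 20)

# nearest-centroid classification of a new tremor window
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
test_feat = features(make_signal(tremor=True))
pred = np.argmin(np.linalg.norm(centroids - test_feat, axis=1))
```

In practice a feature-selection step (as the thesis describes) would first rank many such features before classification, clustering or regression against clinical scores.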


A time series is a sequence of observations made over time. Examples in public health include daily ozone concentrations, weekly admissions to an emergency department or annual expenditures on health care in the United States. Time series models are used to describe the dependence of the response at each time on predictor variables including covariates and possibly previous values in the series. Time series methods are necessary to account for the correlation among repeated responses over time. This paper gives an overview of time series ideas and methods used in public health research.
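As a concrete sketch of the serial correlation the paper discusses, an AR(1) model y_t = φ·y_{t-1} + ε_t can be fit by least squares, regressing each observation on its predecessor. The series below is simulated; the "weekly ED admissions" framing is only an analogy to the paper's examples.

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_ar1(phi, n, sigma=1.0):
    """Simulate an AR(1) series: y_t = phi * y_{t-1} + noise."""
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = phi * y[t - 1] + sigma * rng.standard_normal()
    return y

def fit_ar1(y):
    """Least-squares estimate of phi: regress y_t on y_{t-1}."""
    y_prev, y_curr = y[:-1], y[1:]
    return np.sum(y_prev * y_curr) / np.sum(y_prev * y_prev)

# e.g. a detrended weekly admissions series often shows this kind of persistence
y = simulate_ar1(phi=0.6, n=5000)
phi_hat = fit_ar1(y)  # close to the true 0.6
```

Ignoring this correlation (e.g. using ordinary regression standard errors) understates uncertainty, which is why time series methods are needed.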


Nitrogen and water are essential for plant growth and development. In this study, we designed experiments to produce gene expression data of poplar roots under nitrogen starvation and water deprivation conditions. We found that a low concentration of nitrogen led first to increased root elongation, followed by lateral root proliferation and eventually increased root biomass. To identify genes regulating root growth and development under nitrogen starvation and water deprivation, we designed a series of data analysis procedures, through which we successfully identified biologically important genes. Differentially Expressed Genes (DEGs) analysis identified the genes that are differentially expressed under nitrogen starvation or drought. Protein domain enrichment analysis identified enriched themes (in the same domains) that are highly interactive during the treatment. Gene Ontology (GO) enrichment analysis allowed us to identify biological processes changed during nitrogen starvation. Based on the above analyses, we examined the local Gene Regulatory Network (GRN) and identified a number of transcription factors. After testing, one of them proved to be a high hierarchically ranked transcription factor that affects root growth under nitrogen starvation. Because analyzing gene expression data manually is tedious and time-consuming, we also automated a computational pipeline that can now be used for identification of DEGs and protein domain analysis in a single run. It is implemented in Perl and R scripts.
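The study's pipeline is in Perl and R and its statistical cutoffs are not stated; as a language-neutral sketch of the DEG-calling step, the following flags genes whose Welch t statistic and log2 fold change between conditions exceed illustrative thresholds. The simulated expression matrix and cutoffs are assumptions, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(5)

def welch_t(a, b):
    """Welch's t statistic between two groups of expression replicates."""
    va, vb = a.var(ddof=1), b.var(ddof=1)
    return (a.mean() - b.mean()) / np.sqrt(va / len(a) + vb / len(b))

def call_degs(control, treated, t_cut=3.0, lfc_cut=1.0):
    """Flag genes with |t| and |log2 fold change| above the cutoffs."""
    hits = []
    for g in range(control.shape[0]):
        t = welch_t(treated[g], control[g])
        lfc = np.log2(treated[g].mean() / control[g].mean())
        if abs(t) > t_cut and abs(lfc) > lfc_cut:
            hits.append(g)
    return hits

# simulated expression: 50 genes x 5 replicates; gene 0 induced ~4-fold
control = rng.normal(100, 5, size=(50, 5))
treated = rng.normal(100, 5, size=(50, 5))
treated[0] = rng.normal(400, 20, size=5)
degs = call_degs(control, treated)  # only the planted gene is flagged
```

A production pipeline would add multiple-testing correction and count-based models (as RNA-seq tools do) before the domain and GO enrichment steps.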


Cloud computing provides a promising solution to the genomics data deluge problem resulting from the advent of next-generation sequencing (NGS) technology. Based on the concepts of “resources-on-demand” and “pay-as-you-go”, scientists with no or limited infrastructure can have access to scalable and cost-effective computational resources. However, the large size of NGS data causes a significant data transfer latency from the client’s site to the cloud, which presents a bottleneck for using cloud computing services. In this paper, we provide a streaming-based scheme to overcome this problem, where the NGS data is processed while being transferred to the cloud. Our scheme targets the wide class of NGS data analysis tasks, where the NGS sequences can be processed independently from one another. We also provide the elastream package that supports the use of this scheme with individual analysis programs or with workflow systems. Experiments presented in this paper show that our solution mitigates the effect of data transfer latency and saves both time and cost of computation.
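elastream's API is not shown in the abstract, but the scheme itself, processing reads as they arrive instead of waiting for the full transfer, can be sketched with a generator, since the target class of analyses treats each read independently. The function names and toy reads below are illustrative only.

```python
def incoming_reads(chunks):
    """Simulate NGS reads arriving over the network, chunk by chunk."""
    for chunk in chunks:
        yield from chunk          # reads become available incrementally

def gc_content(read):
    """Fraction of G/C bases in one read."""
    return sum(base in "GC" for base in read) / len(read)

def streaming_analysis(read_stream):
    """Consume each read as it arrives; no staging of the full dataset."""
    total, n = 0.0, 0
    for read in read_stream:
        total += gc_content(read)
        n += 1
    return total / n

# two "network chunks" of reads; the result equals the batch computation
chunks = [["ACGT", "GGCC"], ["ATAT", "GCGC"]]
mean_gc = streaming_analysis(incoming_reads(chunks))
```

Because computation overlaps with transfer, total wall-clock time approaches max(transfer, compute) rather than their sum, which is the latency saving the paper reports.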


Quantitative real-time polymerase chain reaction (qPCR) is a sensitive gene quantitation method that has been widely used in the biological and biomedical fields. The currently used methods for PCR data analysis, including the threshold cycle (CT) method and linear and non-linear model fitting methods, all require subtracting background fluorescence. However, the removal of background fluorescence is usually inaccurate and can therefore distort results. Here, we propose a new method, the taking-difference linear regression method, to overcome this limitation. Briefly, for each two consecutive PCR cycles, we subtracted the fluorescence in the former cycle from that in the latter cycle, transforming the n-cycle raw data into (n-1)-cycle data. Then linear regression was applied to the natural logarithm of the transformed data. Finally, amplification efficiencies and the initial DNA molecule numbers were calculated for each PCR run. To evaluate this new method, we compared it in terms of accuracy and precision with the original linear regression method under three background corrections, namely the mean of cycles 1-3, the mean of cycles 3-7, and the minimum. Three criteria, including threshold identification, max R2, and max slope, were employed to search for target data points. Considering that PCR data are time series data, we also applied linear mixed models. Collectively, when the threshold identification criterion was applied and the linear mixed model was adopted, the taking-difference linear regression method was superior, as it gave an accurate estimation of the initial DNA amount and a reasonable estimation of PCR amplification efficiencies. When the criteria of max R2 and max slope were used, the original linear regression method gave an accurate estimation of the initial DNA amount. Overall, the taking-difference linear regression method avoids the error in subtracting an unknown background and is thus theoretically more accurate and reliable. This method is easy to perform, and the taking-difference strategy can be extended to all current methods for qPCR data analysis.
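The taking-difference step described above can be sketched directly: since F_n = B + F0·E^n in the exponential phase, the consecutive difference d_n = F_{n+1} − F_n = F0·(E−1)·E^n no longer contains the background B, and ln(d_n) is linear in n. The simulated, noiseless run below assumes only exponential-phase cycles are passed in; constants are illustrative.

```python
import numpy as np

def taking_difference_fit(fluorescence):
    """Taking-difference linear regression for qPCR (exponential phase only).

    With F_n = B + F0 * E**n, the difference d_n = F_{n+1} - F_n equals
    F0 * (E - 1) * E**n: the unknown background B cancels, and the natural
    log of d_n is linear in the cycle number n.
    """
    d = np.diff(fluorescence)
    n = np.arange(len(d))
    slope, intercept = np.polyfit(n, np.log(d), 1)
    E = np.exp(slope)                   # amplification efficiency (1 < E <= 2)
    F0 = np.exp(intercept) / (E - 1)    # initial fluorescence ~ initial DNA
    return E, F0

# simulated exponential-phase data: background 50, F0 = 1e-3, E = 1.9
cycles = np.arange(15)
F = 50.0 + 1e-3 * 1.9 ** cycles
E_hat, F0_hat = taking_difference_fit(F)  # recovers E and F0 without knowing B
```

This makes the paper's central point concrete: no background estimate is ever subtracted, so the usual background-correction error cannot propagate into E or F0.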