307 results for Data analysis


Relevance:

70.00%

Publisher:

Abstract:

Background: The evaluation of hand function is an essential element of clinical practice. The usual assessments focus on the ability to perform activities of daily living; the inclusion of instruments that measure kinematic variables provides a new approach to assessment, and inertial sensors adapted to the hand could be used as a complement to the traditional assessment. Material: Clinimetric assessments (Upper Limb Functional Index, QuickDASH), anthropometric variables (height and weight) and dynamometry (palmar pressure) were recorded. Functional analysis was performed with the Acceleglove system on the right hand and a computer system. The glove has six acceleration sensors, one on each finger and one on the back of the palm. Method: Analytic, cross-sectional design. Ten healthy subjects performed six tasks on an evaluation table (tripod pinch, lateral pinch, tip pinch, extension grip, spherical grip and power grip). Each task was performed and measured three times, and the second repetition was analysed for the results section. A Matlab script was created to analyse each movement and to detect phases based on the modulus of the acceleration vector. Results: The modulus of the acceleration vector offers useful information about hand function. Analysis of the data obtained during the performance of the functional gestures identifies five different phases within each movement, three static and two dynamic, and each modulus profile was linked to one task. Conclusion: Modulus-vector variables could be used to analyse the different tasks performed by the hand, and inertial sensors could be used as a complement to traditional assessment systems.
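A rough illustration of modulus-based phase detection as described above. The study used a Matlab script; this Python sketch, with hypothetical sensor values, resting modulus and threshold, only shows the general idea:

```python
import numpy as np

def acceleration_modulus(ax, ay, az):
    """Modulus of the 3-axis acceleration vector for one sensor."""
    return np.sqrt(ax**2 + ay**2 + az**2)

def label_phases(modulus, rest_value=1.0, threshold=0.15):
    """Label each sample as 'static' or 'dynamic'.

    A sample is treated as static when the modulus stays close to the
    resting value (gravity only, in g units); the threshold is an assumed
    tuning parameter, not a value taken from the study.
    """
    return np.where(np.abs(modulus - rest_value) < threshold, "static", "dynamic")

# Hypothetical 3-axis signal from one finger sensor, in g units
ax = np.array([0.00, 0.10, 0.60])
ay = np.array([0.00, 0.20, 0.50])
az = np.array([1.00, 0.95, 0.40])
m = acceleration_modulus(ax, ay, az)
print(label_phases(m))
```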

Relevance:

70.00%

Publisher:

Abstract:

Big data analysis in the healthcare sector is still in its early stages compared with other business sectors, for numerous reasons. Key challenges include accommodating the volume, velocity and variety of healthcare data, and identifying platforms that can examine data from multiple sources, such as clinical records, genomic data, financial systems, and administrative systems. The Electronic Health Record (EHR) is a key information resource for big data analysis and is also composed of varied co-created values. Successful integration and crossing of different subfields of healthcare data, such as biomedical informatics and health informatics, could lead to huge improvements for the end users of the healthcare system, i.e. the patients.

Relevance:

70.00%

Publisher:

Abstract:

The concept of big data has already outperformed traditional data management efforts in almost all industries. In other instances it has succeeded in obtaining promising results that provide value from large-scale integration and analysis of heterogeneous data sources, for example genomic and proteomic information. Big data analytics has become increasingly important for describing the data sets and analytical techniques used in software applications that are so large and complex, owing to its significant advantages, including better business decisions, cost reduction and the delivery of new products and services [1]. In a similar context, the health community has experienced not only more complex and larger data content, but also information systems that contain a large number of data sources with interrelated and interconnected data attributes. This has resulted in challenging and highly dynamic environments, leading to the creation of big data with its many complexities, for instance the sharing of information under the security requirements expected by stakeholders. Compared with other sectors, big data analysis in the health sector is still in its early stages. Key challenges include accommodating the volume, velocity and variety of healthcare data in the current deluge of exponential growth. Given the complexity of big data, it is understood that while data storage and accessibility are technically manageable, applying Information Accountability measures to healthcare big data might be a practical solution in support of information security, privacy and traceability. Transparency is one important measure that can demonstrate integrity, which is a vital factor in healthcare services. Clarity about performance expectations is another Information Accountability measure, necessary to avoid data ambiguity and controversy about interpretation and, finally, liability [2]. According to current studies [3], Electronic Health Records (EHR) are a key information resource for big data analysis and are also composed of varied co-created values. Common healthcare information originates from and is used by different actors and groups, which facilitates understanding of its relationship to other data sources; consequently, healthcare services often operate as an integrated service bundle. Although it is a critical requirement in healthcare services and analytics, it is difficult to find a comprehensive set of guidelines for adopting EHRs to fulfil big data analysis requirements. As a remedy, this research work therefore focuses on a systematic approach containing comprehensive guidelines on the data that must be provided to apply and evaluate big data analysis until the necessary decision-making requirements are fulfilled, in order to improve the quality of healthcare services. Hence, we believe that this approach would subsequently improve quality of life.

Relevance:

70.00%

Publisher:

Abstract:

This review is focused on the impact of chemometrics for resolving data sets collected from investigations of the interactions of small molecules with biopolymers. These samples have been analyzed with various instrumental techniques, such as fluorescence, ultraviolet–visible spectroscopy, and voltammetry. The impact of two powerful and demonstrably useful multivariate methods for the resolution of complex data, namely multivariate curve resolution–alternating least squares (MCR–ALS) and parallel factor analysis (PARAFAC), is highlighted through analysis of applications involving the interactions of small molecules with the biopolymers serum albumin and deoxyribonucleic acid. The outcomes illustrate that significant information extracted by the chemometric methods was unattainable by simple, univariate data analysis. In addition, although the techniques used to collect data were confined to ultraviolet–visible spectroscopy, fluorescence spectroscopy, circular dichroism, and voltammetry, data profiles produced by other techniques may also be processed. Topics considered include binding sites and modes, cooperative and competitive small-molecule binding, the kinetics and thermodynamics of ligand binding, and the folding and unfolding of biopolymers. The reviewed applications of the MCR–ALS and PARAFAC methods were primarily published between 2008 and 2013.
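For orientation, the two methods rest on standard factor models; in conventional notation (these are textbook forms, not formulas taken from the review):

```latex
% MCR-ALS: the measured data matrix D is resolved into concentration
% profiles C and pure-component spectra S, plus a residual matrix E.
\mathbf{D} = \mathbf{C}\,\mathbf{S}^{\mathsf{T}} + \mathbf{E}

% PARAFAC: a trilinear decomposition of a three-way data array X
% into F factors with loadings a, b and c in each mode.
x_{ijk} = \sum_{f=1}^{F} a_{if}\, b_{jf}\, c_{kf} + e_{ijk}
```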

Relevance:

70.00%

Publisher:

Abstract:

Increasingly large-scale applications are generating an unprecedented amount of data. However, the growing gap between computation and I/O capacity on High End Computing (HEC) machines creates a severe bottleneck for data analysis. Instead of moving data from its source to the output storage, in-situ analytics processes output data while simulations are running. However, in-situ data analysis incurs much more contention for computing resources with the simulations, and such contention severely damages simulation performance on HEC machines. Since different data processing strategies have different impacts on performance and cost, there is a consequent need for flexibility in the placement of data analytics. In this paper, we explore and analyze several potential data-analytics placement strategies along the I/O path. To find the best strategy for reducing data movement in a given situation, we propose a flexible data analytics (FlexAnalytics) framework. Based on this framework, a FlexAnalytics prototype system was developed for analytics placement. The FlexAnalytics system enhances the scalability and flexibility of the current I/O stack on HEC platforms and is useful for data pre-processing, runtime data analysis and visualization, as well as for large-scale data transfer. Two use cases – scientific data compression and remote visualization – were applied in the study to verify the performance of FlexAnalytics. Experimental results demonstrate that the FlexAnalytics framework increases data transfer bandwidth and improves application end-to-end transfer performance.
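To make the data-movement trade-off concrete, here is an illustrative sketch only (not FlexAnalytics code; the compressor, array shape and values are assumptions) comparing the bytes that cross the I/O path with and without compressing the output in situ at its source:

```python
import zlib
import numpy as np

def bytes_moved_raw(simulation_output: np.ndarray) -> int:
    """Bytes that would cross the I/O path if the output is shipped as-is."""
    return simulation_output.nbytes

def bytes_moved_in_situ(simulation_output: np.ndarray) -> int:
    """Bytes crossing the I/O path if the output is compressed at the source first."""
    return len(zlib.compress(simulation_output.tobytes()))

# A simple synthetic field stands in for simulation output.
field = np.linspace(0.0, 1.0, 1_000_000).reshape(1000, 1000)
print(bytes_moved_raw(field), "bytes raw vs",
      bytes_moved_in_situ(field), "bytes after in-situ compression")
```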

Relevance:

70.00%

Publisher:

Abstract:

Drawing on multimodal texts produced by an Indigenous school community in Australia, I apply critical race theory and multimodal analysis (Jewitt, 2011) to decolonise digital heritage practices for Indigenous students. This study focuses on the particular ways in which students’ counter-narratives about race were embedded in multimodal and digital design in the development of a digital cultural heritage (Giaccardi, 2012). Data analysis involved applying multimodal analysis to the students’ Gamis, following social semiotic categories and principles theorised by Kress and Bezemer (2008), and Jewitt (2006, 2011). This includes attending to the following semiotic elements: visual design, movement and gesture, gaze, and recorded speech, and their interrelationships. The analysis also draws on critical race theory to interpret the students’ representations of race. In particular, the multimodal texts were analysed as a site for students’ views of Indigenous oppression in relation to the colonial powers and ownership of the land in Australian history (Ladson-Billings, 2009). Pedagogies that explore counter-narratives of cultural heritage in the official curriculum can encourage students to reframe their own racial identity, while challenging dominant white, historical narratives of colonial conquest, race, and power (Gutierrez, 2008). The children’s multimodal “Gami” videos, created with the iPad application, Tellagami, enabled the students to imagine hybrid, digital social identities and perspectives of Australian history that were tied to their Indigenous cultural heritage (Kamberelis, 2001).

Relevance:

70.00%

Publisher:

Abstract:

This paper proposes a linear quantile regression analysis method for longitudinal data that combines between- and within-subject estimating functions, thereby incorporating the correlations between repeated measurements. As a result, the proposed method yields more efficient parameter estimation than estimating functions based on an independence working model. To reduce the computational burden, the induced smoothing method is introduced to obtain parameter estimates and their variances. Under some regularity conditions, the estimators derived by the induced smoothing method are consistent and asymptotically normally distributed. A number of simulation studies are carried out to evaluate the performance of the proposed method. The results indicate that the efficiency gain of the proposed method is substantial, especially when strong within-subject correlations exist. Finally, a dataset from an audiology growth study is used to illustrate the proposed methodology.
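As background, in conventional notation (a standard formulation, not quoted from the paper), the working-independence estimating function for quantile level τ and its induced-smoothed counterpart take roughly the following form:

```latex
% Non-smooth estimating function under a working independence model,
% for subjects i = 1..n with m_i repeated measurements each.
U_n(\beta) = \sum_{i=1}^{n} \sum_{j=1}^{m_i} x_{ij}\,\bigl\{\tau - I\!\left(y_{ij} \le x_{ij}^{\mathsf{T}}\beta\right)\bigr\}

% Induced smoothing: the indicator is replaced by a normal CDF, using a
% bandwidth-like matrix H (e.g. proportional to n^{-1} I), so the
% estimating function becomes smooth in beta and variances are easier to obtain.
\tilde{U}_n(\beta) = \sum_{i=1}^{n} \sum_{j=1}^{m_i} x_{ij}\,\biggl\{\tau - \Phi\!\biggl(\frac{x_{ij}^{\mathsf{T}}\beta - y_{ij}}{\sqrt{x_{ij}^{\mathsf{T}} H\, x_{ij}}}\biggr)\biggr\}
```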

Relevance:

70.00%

Publisher:

Abstract:

For clustered survival data, the traditional Gehan-type estimator is asymptotically equivalent to using only the between-cluster ranks, so the within-cluster ranks are ignored. The contribution of this paper is twofold: (i) incorporating within-cluster ranks in censored data analysis, and (ii) applying the induced smoothing of Brown and Wang (2005, Biometrika) for computational convenience. Asymptotic properties of the resulting estimating functions are given. We also carry out numerical studies to assess the performance of the proposed approach, and conclude that it can lead to much improved estimators when strong clustering effects exist. A dataset from a litter-matched tumorigenesis experiment is used for illustration.
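For reference, the classical (unclustered) Gehan-type estimating function for an accelerated failure time model, with residuals e_i(β) = log T_i − x_iᵀβ and censoring indicators δ_i, is conventionally written as follows (standard form, not verbatim from the paper); the paper's contribution concerns how within-cluster ranks enter such a function and how it is smoothed:

```latex
% Gehan-type rank estimating function: pairwise comparisons of residuals,
% weighted by covariate differences; delta_i is the censoring indicator.
U_G(\beta) = \sum_{i=1}^{n} \sum_{j=1}^{n} \delta_i \,(x_i - x_j)\, I\!\left\{ e_i(\beta) \le e_j(\beta) \right\}
```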

Relevance:

70.00%

Publisher:

Abstract:

Statistical methods are often used to analyse commercial catch and effort data to provide standardised fishing effort and/or a relative index of fish abundance for input into stock assessment models. Achieving reliable results has proved difficult in Australia's Northern Prawn Fishery (NPF), due to a combination of factors such as the biological characteristics of the animals, aspects of the fleet dynamics, and changes in fishing technology. For this data set, we compared four modelling approaches (linear models, mixed models, generalised estimating equations, and generalised linear models) with respect to the resulting standardised fishing effort or relative index of abundance. We also varied the number and form of vessel covariates in the models. Within a subset of data from this fishery, modelling correlation structures did not alter the conclusions drawn from simpler statistical models. The random-effects models also yielded similar results. This is because the estimators are all consistent even if the correlation structure is mis-specified, and the data set is very large. However, the standard errors from the different models differed, suggesting that the methods have different statistical efficiencies. We suggest that there is value in modelling the variance function and the correlation structure, to make valid and efficient statistical inferences and to gain insight into the data. We found that fishing power was separable from the indices of prawn abundance only when the impact of vessel characteristics was offset at values assumed from external sources. This may be due to the large degree of confounding within the data, and the extreme temporal changes in certain aspects of individual vessels, the fleet and the fleet dynamics.

Relevance:

70.00%

Publisher:

Abstract:

The article describes a generalized estimating equations (GEE) approach that was used to investigate the impact of technology on vessel performance in a trawl fishery during 1988-96, while accounting for spatial and temporal correlations in the catch-effort data. Robust estimation of parameters in the presence of several levels of clustering depended more on the choice of cluster definition than on the choice of correlation structure within the cluster. Models with smaller cluster sizes produced stable results, while models with larger cluster sizes, which may have had complex within-cluster correlation structures and which had within-cluster covariates, produced estimates that were sensitive to the correlation structure. The preferred model arising from this dataset assumed that catches from a vessel were correlated within the same year and area, but independent across different years and areas. The model that assumed catches from a vessel were correlated across all years and areas, equivalent to a random-effects term for vessel, produced spurious results. This was an unexpected finding that highlighted the need to adopt a systematic modelling strategy. The article proposes a strategy of selecting the best cluster definition first, and the working correlation structure (within clusters) second. The article discusses the selection and interpretation of the model in the light of background knowledge of the data and the utility of the model, and the potential for this modelling approach to apply in similar statistical situations.
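A minimal sketch of this kind of GEE fit (illustrative only; the toy data frame, covariates and exchangeable working correlation are assumptions, not the article's specification), following the proposed strategy of fixing the cluster definition first and the working correlation second:

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical catch-effort records: one row per vessel-year-area observation.
catch = pd.DataFrame({
    "log_catch": [2.1, 2.3, 1.8, 2.0, 2.5, 2.4, 2.2, 1.9],
    "effort":    [10, 12, 8, 9, 14, 13, 11, 7],
    "year":      [1990, 1990, 1990, 1990, 1991, 1991, 1991, 1991],
    "vessel":    ["A", "A", "B", "B", "A", "A", "B", "B"],
    "area":      ["N", "S", "N", "S", "N", "S", "N", "S"],
})

# Step 1: choose the cluster definition (here, a vessel's catches within one year
# are treated as correlated, and independent across years).
catch["cluster"] = catch["vessel"] + "-" + catch["year"].astype(str)

# Step 2: choose the working correlation structure within clusters.
model = smf.gee(
    "log_catch ~ effort + C(year)",   # covariates are placeholders
    groups="cluster",
    data=catch,
    cov_struct=sm.cov_struct.Exchangeable(),
    family=sm.families.Gaussian(),
)
result = model.fit()
print(result.summary())
```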

Relevance:

70.00%

Publisher:

Abstract:

Many educational researchers conducting studies in non-English-speaking settings attempt to report on their projects in English to boost their scholarly impact. This requires preparing and presenting translations of data collected from interviews and observations. This paper discusses the process and the ethical considerations involved in this invisible methodological phase. The process includes activities, prior to data analysis and to its presentation, to be undertaken by the bilingual researcher as translator in order to convey participants' original meanings as well as to establish and fulfil translation ethics. This paper offers strategies to address such issues, identifies the most appropriate translation method for qualitative studies, and suggests approaches for addressing political issues when presenting such data.

Relevance:

70.00%

Publisher:

Abstract:

Compositional data analysis usually deals with relative information between parts where the total (abundance, mass, amount, etc.) is unknown or uninformative. This article addresses the question of what to do when the total is known and is of interest. Tools used in this case are reviewed and analysed, in particular the relationship between the positive orthant of D-dimensional real space, the product space of the real line times the D-part simplex, and their Euclidean space structures. The first alternative corresponds to data analysis taking logarithms on each component, and the second to treating a log-transformed total jointly with a composition describing the distribution of the component amounts. Real data on total abundances of phytoplankton in an Australian river motivated the present study and are used for illustration.
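In conventional compositional-data notation (standard definitions, not quoted from the article), a vector of amounts in the positive orthant can be split into a total and a composition, which is the pairing underlying the second alternative:

```latex
% A vector of amounts x in the positive orthant of R^D, its total t,
% and the closure operation C(x) mapping it onto the D-part simplex S^D.
t = \sum_{i=1}^{D} x_i, \qquad
\mathcal{C}(\mathbf{x}) = \left( \frac{x_1}{t}, \ldots, \frac{x_D}{t} \right) \in \mathcal{S}^D

% Second alternative: analyse the pair (log t, C(x)) in R x S^D, rather than
% simply taking logarithms component by component (the first alternative).
\mathbf{x} \longmapsto \bigl( \log t,\; \mathcal{C}(\mathbf{x}) \bigr)
```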

Relevance:

70.00%

Publisher:

Abstract:

This article reports on a 6-year study that examined the association between pre-admission variables and field placement performance in an Australian bachelor of social work program (N=463). Very few of the pre-admission variables were found to be significantly associated with performance. These findings and the role of the admissions process are discussed. In addition to the usual academic criteria, the authors urge schools to include a focus on nonacademic criteria during the admissions process and the ongoing educational program.

Relevance:

60.00%

Publisher:

Abstract:

Participatory evaluation and participatory action research (PAR) are increasingly used in community-based programs and initiatives, and there is a growing acknowledgement of their value. These methodologies focus more on knowledge generated and constructed through lived experience than through social science (Vanderplaat 1995). The scientific ideal of objectivity is usually rejected in favour of a holistic approach that acknowledges and takes into account the diverse perspectives, values and interpretations of participants and evaluation professionals. However, evaluation rigour need not be lost in this approach. Increasing the rigour and trustworthiness of participatory evaluations and PAR increases the likelihood that results are seen as credible and are used to continually improve programs and policies.

Drawing on learnings and critical reflections about the use of feminist and participatory forms of evaluation and PAR over a 10-year period, significant sources of rigour identified include:
• participation and communication methods that develop relations of mutual trust and open communication
• using multiple theories and methodologies, multiple sources of data, and multiple methods of data collection
• ongoing meta-evaluation and critical reflection
• critically assessing the intended and unintended impacts of evaluations, using relevant theoretical models
• using rigorous data analysis and reporting processes
• participant reviews of evaluation case studies, impact assessments and reports.