144 results for Multivariate data analysis


Relevance: 90.00%

Abstract:

Urban road dust comprises a range of potentially toxic metal elements and plays a critical role in degrading urban receiving-water quality. Hence, assessing the metal composition and concentration of urban road dust is a high priority. This study investigated the variability of metal composition and concentrations in road dust across four different urban land uses in Gold Coast, Australia. Samples from 16 road sites were collected and tested for 12 selected metal species. The data set was analyzed using both univariate and multivariate techniques. The analysis revealed that metal concentrations in road dust differ considerably within and between land uses. Iron, aluminum, magnesium and zinc are the most abundant metals across the urban land uses, while metal species such as titanium, nickel, copper and zinc have their highest concentrations in industrial land use. The study outcomes identified soil- and traffic-related sources as the key sources of metals deposited on road surfaces.
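As an illustrative sketch of the multivariate technique such studies typically apply, principal component analysis can summarize site-by-metal concentration data after standardization. The dataset below is entirely hypothetical, not the study's measurements.

```python
import numpy as np

# Hypothetical road-dust dataset: rows = road sites, columns = metal
# concentrations (e.g. Fe, Al, Mg, Zn); values are illustrative only.
X = np.array([
    [12.1, 8.3, 3.2, 1.1],
    [14.5, 9.1, 2.9, 1.4],
    [30.2, 15.7, 4.8, 6.3],   # e.g. an industrial site with elevated Zn
    [11.8, 7.9, 3.0, 0.9],
    [29.7, 16.1, 5.1, 5.8],
])

# Standardize each metal (z-scores) so concentration scale does not dominate.
Z = (X - X.mean(axis=0)) / X.std(axis=0)

# Principal component analysis via SVD of the standardized matrix.
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
scores = U * s                     # site coordinates on the components
explained = s**2 / np.sum(s**2)   # fraction of variance per component

print(explained.round(3))
```

A dominant first component, with loadings shared across soil-derived and traffic-derived metals, is the kind of pattern used to attribute sources.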

Relevance: 90.00%

Abstract:

A spatial process observed over a lattice or a set of irregular regions is usually modeled using a conditionally autoregressive (CAR) model. The neighborhoods within a CAR model are generally formed deterministically using the inter-distances or boundaries between the regions. An extension of the CAR model, in which the selection of the neighborhood depends on unknown parameter(s), is proposed in this article. This extension is called a Stochastic Neighborhood CAR (SNCAR) model. The resulting model is flexible enough to accurately estimate covariance structures for data generated from a variety of spatial covariance models. Specific examples are illustrated using data generated from some common spatial covariance functions, as well as real data on radioactive contamination of the soil in Switzerland after the Chernobyl accident.
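A minimal sketch of the CAR construction may help: a proper CAR prior is defined through a precision matrix built from a neighborhood (adjacency) matrix, Q = τ(D − ρW). The 4-region lattice and parameter values below are assumptions for illustration; in an SNCAR-style extension, W itself would depend on unknown parameters rather than being fixed.

```python
import numpy as np

# Minimal sketch of a conditionally autoregressive (CAR) prior on a
# hypothetical 4-region lattice. W is a binary adjacency ("neighborhood")
# matrix fixed in advance; an SNCAR-style extension would make W depend
# on unknown parameters instead.
W = np.array([
    [0, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

D = np.diag(W.sum(axis=1))   # number of neighbors per region
tau, rho = 1.0, 0.9          # precision and spatial-dependence parameters

# Proper CAR precision matrix: Q = tau * (D - rho * W).
Q = tau * (D - rho * W)

# For |rho| < 1 this precision matrix is positive definite, so the
# implied covariance (its inverse) is valid.
eigvals = np.linalg.eigvalsh(Q)
cov = np.linalg.inv(Q)
print(eigvals.min() > 0)
```

The covariance structure the model can represent is governed by how W is chosen, which is exactly the choice the SNCAR model treats as stochastic.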

Relevance: 90.00%

Abstract:

Environmental monitoring is becoming critical as human activity and climate change place greater pressures on biodiversity, leading to an increasing need for data to make informed decisions. Acoustic sensors can collect data across large areas for extended periods, making them attractive for environmental monitoring. However, managing and analysing large volumes of environmental acoustic data is a great challenge and consequently hinders effective utilization of the large datasets collected. This paper presents an overview of our current techniques for collecting, storing and analysing large volumes of acoustic data efficiently, accurately, and cost-effectively.

Relevance: 90.00%

Abstract:

This project identified the lack of data analysis and travel time prediction on arterials as the main gap in the current literature. It first investigated the reliability of data gathered by Bluetooth technology, a new cost-effective method for data collection on arterial roads. Then, exploiting the similarity among daily travel time patterns on different arterial routes, it created a seasonal ARIMA (SARIMA) model to predict future travel time values. Based on the research outcomes, the model can be applied to online short-term travel time prediction.
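The core idea a SARIMA model exploits, a repeating seasonal (here weekly) pattern plus short-term autocorrelation, can be sketched with seasonal differencing and a simple least-squares AR(1) fit. This is an illustration of the seasonal structure only, not the study's fitted model; the synthetic travel times below are assumptions.

```python
import numpy as np

# Synthetic daily travel times (seconds) with a weekly (period-7) pattern
# plus noise -- illustrative only, not the study's Bluetooth data.
rng = np.random.default_rng(0)
season = np.tile([300, 310, 315, 320, 318, 280, 270], 8)  # 8 weeks
y = season + rng.normal(0, 5, size=season.size)

s = 7
dy = y[s:] - y[:-s]          # seasonal differencing removes the weekly cycle

# Fit an AR(1) to the differenced series by least squares: dy_t ~ phi*dy_{t-1}.
phi = np.dot(dy[1:], dy[:-1]) / np.dot(dy[:-1], dy[:-1])

# One-step-ahead forecast: last week's value plus the AR-predicted change.
forecast = y[-s] + phi * dy[-1]
print(round(float(phi), 3), round(float(forecast), 1))
```

A full SARIMA fit would additionally estimate moving-average and non-seasonal differencing terms, but the differencing step above is what makes the weekly arterial pattern stationary.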

Relevance: 90.00%

Abstract:

Purpose: Are eccentric hamstring strength and between-limb imbalance in eccentric strength, measured during the Nordic hamstring exercise, risk factors for hamstring strain injury (HSI)? Methods: Elite Australian footballers (n=210) from five teams participated. Eccentric hamstring strength during the Nordic exercise was measured at the commencement and conclusion of preseason training and during the season. Injury history and demographic data were also collected. Reports on prospectively occurring HSIs were completed by team medical staff. Relative risk (RR) was determined for univariate data, and logistic regression was employed for multivariate data. Results: Twenty-eight HSIs were recorded. Eccentric hamstring strength below 256 N at the start of preseason and below 279 N at the end of preseason increased the risk of future HSI 2.7-fold (RR, 2.7; 95% confidence interval, 1.3 to 5.5; p = 0.006) and 4.3-fold (RR, 4.3; 95% confidence interval, 1.7 to 11.0; p = 0.002), respectively. A between-limb imbalance in strength of greater than 10% did not increase the risk of future HSI. Univariate analysis did not reveal a significantly greater relative risk of future HSI in athletes who had sustained a lower limb injury of any kind within the previous 12 months. Logistic regression revealed interactions of both athlete age and history of HSI with eccentric hamstring strength, whereby the likelihood of future HSI in older athletes, or athletes with a history of HSI, was reduced if the athlete had high levels of eccentric strength. Conclusion: Low levels of eccentric hamstring strength increased the risk of future HSI. Interaction effects suggest that the additional risk of future HSI associated with advancing age or previous injury was mitigated by higher levels of eccentric hamstring strength.
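The univariate measure reported above, relative risk with a 95% confidence interval, follows from a 2x2 exposure-outcome table via the standard log-RR standard error. The counts below are hypothetical, not the study's data.

```python
import math

# Relative risk with a 95% CI for a 2x2 table. Counts are illustrative:
# a/b = injured/uninjured among "weak" athletes; c/d among "strong".
def relative_risk(a, b, c, d):
    rr = (a / (a + b)) / (c / (c + d))
    # Standard error of log(RR), then a Wald-type 95% interval.
    se = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

rr, lo, hi = relative_risk(10, 40, 5, 155)
print(round(rr, 2), round(lo, 2), round(hi, 2))
```

An interval excluding 1.0, as in the study's 2.7-fold and 4.3-fold results, is what marks the exposure as a statistically significant risk factor.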

Relevance: 90.00%

Abstract:

In recent years, increasing focus has been placed on making good business decisions using the products of data analysis. With the advent of the Big Data phenomenon, this is more apparent than ever before. But how can organizations trust decisions made on the basis of results obtained from the analysis of untrusted data? Assurances are needed that the data and datasets informing these decisions have not been tainted by an outside agency. This study proposes enabling the authentication of datasets, specifically by extending the RESTful architectural scheme to include authentication parameters, while operating within a larger holistic security framework architecture or model compliant with legislation.
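One common way to realize authentication parameters on a RESTful request is an HMAC signature over the request and a dataset digest; a minimal sketch follows. The parameter names, paths, and shared key are hypothetical, not the study's scheme.

```python
import hashlib
import hmac

# Sketch: sign a RESTful dataset request so a consumer can verify the
# dataset has not been tampered with. Key and names are hypothetical.
SECRET_KEY = b"shared-secret"   # in practice, provisioned out of band

def sign_request(method, path, dataset_digest):
    msg = f"{method}\n{path}\n{dataset_digest}".encode()
    return hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()

def verify_request(method, path, dataset_digest, sig):
    expected = sign_request(method, path, dataset_digest)
    return hmac.compare_digest(expected, sig)   # constant-time comparison

digest = hashlib.sha256(b"...dataset bytes...").hexdigest()
sig = sign_request("GET", "/datasets/42", digest)
print(verify_request("GET", "/datasets/42", digest, sig))      # True
print(verify_request("GET", "/datasets/42", "tampered", sig))  # False
```

The signature and digest would travel as extra parameters or headers on the REST call, leaving the resource-oriented interface itself unchanged.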

Relevance: 90.00%

Abstract:

PURPOSE The purpose of this study was to demonstrate the potential of near-infrared (NIR) spectroscopy for characterizing the health and degenerative state of articular cartilage based on the components of the Mankin score. METHODS Three models of osteoarthritic degeneration induced in laboratory rats by anterior cruciate ligament (ACL) transection, meniscectomy (MSX), and intra-articular injection of monoiodoacetate (1 mg) (MIA) were used in this study. Degeneration was induced in the right knee joint; each model group consisted of 12 rats (N = 36). After 8 weeks, the animals were euthanized and the knee joints were collected. A custom-made diffuse-reflectance NIR probe of 5-mm diameter was placed on the tibial and femoral surfaces, and spectral data were acquired from each specimen in the wavenumber range of 4,000 to 12,500 cm⁻¹. After spectral data acquisition, the specimens were fixed and safranin O staining (SOS) was performed to assess disease severity based on the Mankin scoring system. Using multivariate statistical analysis with spectral preprocessing and wavelength selection techniques, the spectral data were then correlated to the structural integrity (SI), cellularity (CEL), and matrix staining (SOS) components of the Mankin score for all samples tested. RESULTS ACL models showed mild cartilage degeneration, MSX models moderate degeneration, and MIA models severe degenerative changes, both morphologically and histologically. Our results reveal significant linear correlations between the NIR absorption spectra and the SI (R² = 94.78%), CEL (R² = 88.03%), and SOS (R² = 96.39%) parameters of all samples in the models. In addition, clustering of the samples according to their level of degeneration with respect to the Mankin components was also observed.
CONCLUSIONS NIR spectroscopic probing of articular cartilage can potentially provide critical information about the health of articular cartilage matrix in early and advanced stages of osteoarthritis (OA). CLINICAL RELEVANCE This rapid nondestructive method can facilitate clinical appraisal of articular cartilage integrity during arthroscopic surgery.
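The calibration step, regressing a Mankin component on preprocessed spectral intensities and reporting R², can be sketched with ordinary least squares. The spectra and scores below are synthetic stand-ins, not the study's data, and wavelength selection is assumed to have already reduced the feature set.

```python
import numpy as np

# Synthetic calibration example: regress a histological score onto a few
# selected spectral intensities and report R^2. Illustrative data only.
rng = np.random.default_rng(1)
n_samples, n_wavelengths = 30, 5   # after wavelength selection
X = rng.normal(size=(n_samples, n_wavelengths))
true_coef = np.array([2.0, -1.0, 0.5, 0.0, 0.0])
y = X @ true_coef + rng.normal(0, 0.2, size=n_samples)  # e.g. an SI score

# Ordinary least squares with an intercept term.
A = np.column_stack([np.ones(n_samples), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ coef

ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(round(float(r2), 3))
```

Multivariate spectroscopic calibration in practice often uses partial least squares rather than plain OLS, since full spectra have many more wavelengths than samples; the R² reporting is the same.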

Relevance: 90.00%

Abstract:

This thesis proposes three novel models which extend the statistical methodology for motor unit number estimation, a clinical neurology technique. Motor unit number estimation is important in the treatment of degenerative muscular diseases and, potentially, spinal injury. Additionally, a recent and untested statistic to enable statistical model choice is found to be a practical alternative for larger datasets. The existing methods for dose finding in dual-agent clinical trials are found to be suitable only for designs of modest dimensions. The model choice case-study is the first of its kind containing interesting results using so-called unit information prior distributions.

Relevance: 90.00%

Abstract:

The Galilee and Eromanga basins are sub-basins of the Great Artesian Basin (GAB). In this study, a multivariate statistical approach (hierarchical cluster analysis, principal component analysis and factor analysis) is used to identify hydrochemical patterns and assess the processes that control hydrochemical evolution within key aquifers of the GAB in these basins. The results of the hydrochemical assessment are integrated into a previously developed 3D geological model to support the analysis of spatial patterns of hydrochemistry, and to identify the hydrochemical and hydrological processes that control hydrochemical variability. In this area of the GAB, the hydrochemical evolution of groundwater is dominated by evapotranspiration near the recharge area, resulting in a dominance of Na–Cl water types. This is shown conceptually using two selected cross-sections that represent discrete groundwater flow paths from the recharge areas to the deeper parts of the basins. With increasing distance from the recharge area, a shift towards carbonate dominance (e.g. the Na–HCO3 water type) is observed. The assessment of hydrochemical changes along groundwater flow paths highlights how aquifers are separated in some areas, and how mixing between groundwater from different aquifers occurs elsewhere, controlled by geological structures, including mixing between GAB aquifers and coal-bearing strata of the Galilee Basin. The results of this study suggest that distinct hydrochemical differences can be observed within the previously defined Early Cretaceous–Jurassic aquifer sequence of the GAB. A revision of the two previously recognised hydrochemical sequences is proposed, resulting in three hydrochemical sequences based on systematic differences in hydrochemistry, salinity and dominant hydrochemical processes. The integrated approach presented in this study, which combines complementary multivariate statistical techniques with a detailed assessment of the geological framework of these sedimentary basins, can be adopted in other complex multi-aquifer systems to assess hydrochemical evolution and its geological controls.
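The first of the multivariate techniques named above, hierarchical cluster analysis, can be sketched in a few lines: standardize the ion concentrations, compute pairwise distances, and agglomerate the closest clusters. The four samples and their mg/L values below are invented for illustration, echoing the Na–Cl versus Na–HCO3 water types the study describes.

```python
import numpy as np

# Illustrative hydrochemical samples (mg/L), not the study's data.
samples = np.array([
    # Na     Cl    HCO3
    [950., 1400., 150.],   # Na-Cl type (near recharge, evapotranspiration)
    [900., 1350., 160.],
    [400.,  200., 900.],   # Na-HCO3 type (down-gradient)
    [420.,  210., 950.],
])

# Standardize each ion, then compute the Euclidean distance matrix.
Z = (samples - samples.mean(axis=0)) / samples.std(axis=0)
dist = np.sqrt(((Z[:, None, :] - Z[None, :, :]) ** 2).sum(axis=2))

# Single-linkage agglomeration: repeatedly merge the two closest clusters.
clusters = [{i} for i in range(len(samples))]
merges = []
while len(clusters) > 1:
    best = None
    for i in range(len(clusters)):
        for j in range(i + 1, len(clusters)):
            d = min(dist[a, b] for a in clusters[i] for b in clusters[j])
            if best is None or d < best[0]:
                best = (d, i, j)
    d, i, j = best
    merges.append((sorted(clusters[i]), sorted(clusters[j])))
    clusters[i] |= clusters[j]
    del clusters[j]

print(merges)
```

The merge order recovers the two water types first, mirroring how cluster membership is then mapped back onto flow paths in the 3D geological model.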

Relevance: 90.00%

Abstract:

The importance of a thorough and systematic literature review has long been recognised across academic domains as critical to the foundation of new knowledge and theory evolution. Driven by an exponentially growing body of knowledge in the IS discipline, there has been a recent influx of guidance on how to conduct a literature review. As literature reviews emerge as a standalone research method, these method-focused guidelines are of increasing interest and have gained acceptance at top-tier IS publication outlets. Nevertheless, the finer details that justify the selected content, and the effective presentation of supporting data, have not been widely discussed in these method papers to date. This paper addresses this gap by exploring the concept of 'literature profiling', arguing that it is a key aspect of a comprehensive literature review. The study establishes the importance of profiling for managing aspects such as quality assurance, transparency and the mitigation of selection bias, and then discusses how profiling can provide a valid basis for data analysis based on the attributes of the selected literature. In essence, this study conducted an archival analysis of literature (predominantly from the IS domain) to present its main argument, the value of literature profiling, with supporting exemplary illustrations.

Relevance: 90.00%

Abstract:

Background: The evaluation of hand function is an essential element of clinical practice. The usual assessments focus on the ability to perform activities of daily living. The inclusion of instruments that measure kinematic variables provides a new approach to assessment, and inertial sensors adapted to the hand could be used as a complement to the traditional instruments. Material: Clinimetric assessments (Upper Limb Functional Index, QuickDASH), anthropometric variables (height and weight) and dynamometry (palmar pressure) were taken. Functional analysis was performed with the AcceleGlove system on the right hand and a computer system. The glove has six acceleration sensors, one on each finger and another on the back of the palm. Method: Analytic, cross-sectional design. Ten healthy subjects performed six tasks on an evaluation table (tripod pinch, lateral pinch, tip pinch, extension grip, spherical grip and power grip). Each task was performed and measured three times; the second repetition was analyzed for the results section. A Matlab script was created for the analysis of each movement and for phase detection based on the module of the acceleration vector. Results: The module of the acceleration vector offers useful information about hand function. Analysis of the data obtained during the performance of functional gestures allows five different phases to be identified within each movement, three static and two dynamic, and each module-vector profile was linked to one task. Conclusion: Module-vector variables can be used to analyse the different tasks performed by the hand, and inertial sensors could be used as a complement to traditional assessment systems.
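The "module vector" analysis reduces each three-axis accelerometer sample to its magnitude, |a| = sqrt(ax² + ay² + az²), and labels samples static or dynamic by their deviation from 1 g. The samples and threshold below are illustrative assumptions, not the glove's calibration.

```python
import math

# Hypothetical three-axis samples (in g) from one finger sensor during a
# reach-and-grasp gesture; values and threshold are illustrative only.
samples = [
    (0.02, 0.01, 0.98),   # at rest: |a| ~ 1 g (gravity only)
    (0.01, 0.03, 1.01),
    (0.45, 0.30, 1.20),   # reaching: movement adds acceleration
    (0.60, 0.55, 1.40),
    (0.03, 0.02, 1.00),   # grasp held: static again
]

def module(ax, ay, az):
    """Magnitude ("module") of the acceleration vector."""
    return math.sqrt(ax**2 + ay**2 + az**2)

THRESHOLD = 0.15   # deviation from 1 g that counts as "dynamic"
phases = ["dynamic" if abs(module(*s) - 1.0) > THRESHOLD else "static"
          for s in samples]
print(phases)
```

Segmenting a full gesture this way is how alternating static and dynamic phases, like the three static and two dynamic phases reported, fall out of the magnitude signal.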

Relevance: 90.00%

Abstract:

Big data analysis in the healthcare sector is still in its early stages compared with other business sectors, for numerous reasons, including:
- accommodating the volume, velocity and variety of healthcare data;
- identifying platforms that can examine data from multiple sources, such as clinical records, genomic data, financial systems, and administrative systems.
The Electronic Health Record (EHR) is a key information resource for big data analysis and is also composed of varied co-created values. Successful integration and crossing of different subfields of healthcare data, such as biomedical informatics and health informatics, could lead to huge improvements for the end users of the health care system, i.e. the patients.

Relevance: 90.00%

Abstract:

The concept of big data has already outperformed traditional data management efforts in almost all industries. In other instances it has succeeded in obtaining promising results that provide value from large-scale integration and analysis of heterogeneous data sources, for example genomic and proteomic information. Big data analytics has become increasingly important for describing the very large and complex data sets and analytical techniques in software applications, owing to its significant advantages, including better business decisions, cost reduction and the delivery of new products and services [1]. In a similar context, the health community has experienced not only more complex and larger data content, but also information systems that contain a large number of data sources with interrelated and interconnected data attributes. These have resulted in challenging and highly dynamic environments, leading to the creation of big data with its many complexities, for instance the sharing of information under the security requirements expected by stakeholders. Compared with other sectors, big data analysis in the health sector is still in its early stages. Key challenges include accommodating the volume, velocity and variety of healthcare data in the current deluge of exponential growth. Given the complexity of big data, it is understood that while data storage and accessibility are technically manageable, applying Information Accountability measures to healthcare big data might be a practical solution in support of information security, privacy and traceability. Transparency is one important measure that can demonstrate integrity, a vital factor in healthcare services. Clarity about performance expectations is another Information Accountability measure, necessary to avoid data ambiguity and controversy about interpretation and, finally, liability [2].
According to current studies [3], Electronic Health Records (EHRs) are key information resources for big data analysis and are also composed of varied co-created values. Common healthcare information originates from, and is used by, different actors and groups, which facilitates understanding of its relationship to other data sources. Consequently, healthcare services often operate as an integrated service bundle. Although it is a critical requirement in healthcare services and analytics, it is difficult to find a comprehensive set of guidelines for adopting EHRs to fulfil big data analysis requirements. As a remedy, this research work focuses on a systematic approach containing comprehensive guidelines on the data that must be provided to apply and evaluate big data analysis until the necessary decision-making requirements are fulfilled, in order to improve the quality of healthcare services. We believe that this approach would subsequently improve quality of life.

Relevance: 90.00%

Abstract:

Increasingly large-scale applications are generating unprecedented amounts of data. However, the growing gap between computation and I/O capacity on high-end computing (HEC) machines creates a severe bottleneck for data analysis. Instead of moving data from its source to output storage, in-situ analytics processes output data while simulations are running. However, in-situ data analysis incurs much greater computing-resource contention with the simulations, and such contention severely degrades simulation performance on HEC platforms. Since different data processing strategies have different impacts on performance and cost, there is a consequent need for flexibility in the placement of data analytics. In this paper, we explore and analyze several potential data-analytics placement strategies along the I/O path. To find the best strategy for reducing data movement in a given situation, we propose a flexible data analytics (FlexAnalytics) framework. Based on this framework, a FlexAnalytics prototype system was developed for analytics placement. The FlexAnalytics system enhances the scalability and flexibility of the current I/O stack on HEC platforms and is useful for data pre-processing, runtime data analysis and visualization, as well as large-scale data transfer. Two use cases, scientific data compression and remote visualization, were applied in the study to verify the performance of FlexAnalytics. Experimental results demonstrate that the FlexAnalytics framework increases data transfer bandwidth and improves end-to-end application transfer performance.
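The data-compression use case illustrates the core trade the paper describes: spend compute near the data source to shrink what crosses the I/O path. A minimal stdlib sketch, with a synthetic payload standing in for simulation output, shows the idea (this is not the FlexAnalytics implementation).

```python
import zlib

# Sketch: compress simulation output in situ before it crosses the I/O
# path, trading compute for reduced data movement. Synthetic payload.
payload = b"temperature=300.0;" * 10_000   # repetitive scientific output

compressed = zlib.compress(payload, level=6)
ratio = len(payload) / len(compressed)

# Receiver side (e.g. a remote visualization node) restores the data.
restored = zlib.decompress(compressed)
print(restored == payload, round(ratio, 1))
```

Whether compressing on the compute node, a staging node, or the storage side is best depends on the contention and bandwidth at each stage, which is exactly the placement decision the framework makes flexible.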

Relevance: 90.00%

Abstract:

Background: Studies have examined the effects of temperature on mortality in a single city, country, or region. However, less evidence is available on how temperature-mortality associations vary across multiple countries analyzed simultaneously. Methods: We obtained daily data on temperature and mortality in 306 communities from 12 countries/regions (Australia, Brazil, Thailand, China, Taiwan, Korea, Japan, Italy, Spain, United Kingdom, United States, and Canada). Two-stage analyses were used to assess the nonlinear and delayed relation between temperature and mortality. In the first stage, a Poisson regression allowing overdispersion, with a distributed lag nonlinear model, was used to estimate the community-specific temperature-mortality relation. In the second stage, a multivariate meta-analysis was used to pool the nonlinear and delayed effects of ambient temperature at the national level in each country. Results: The temperatures associated with the lowest mortality were around the 75th percentile of temperature in all countries/regions, ranging from the 66th (Taiwan) to the 80th (UK) percentile. The estimated effects of cold and hot temperatures on mortality varied by community and country. The meta-analysis results show that both cold and hot temperatures increased the risk of mortality in all countries/regions. Cold effects were delayed and lasted for many days, whereas heat effects appeared quickly and did not last long. Conclusions: People have some ability to adapt to their local climate, but both cold and hot temperatures are still associated with an increased risk of mortality. Public health strategies to alleviate the impact of ambient temperatures are important, particularly in the context of climate change.
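The second-stage pooling idea can be sketched with inverse-variance weighting of community-specific estimates. This is a fixed-effect, univariate simplification of the multivariate meta-analysis used in the study, and the log relative risks below are made-up values for illustration.

```python
import math

# Hypothetical community-level log relative risks for heat effects,
# with their standard errors -- illustrative values only.
log_rr = [0.10, 0.14, 0.08, 0.12]
se = [0.04, 0.05, 0.03, 0.06]

# Inverse-variance (fixed-effect) pooling to a national estimate.
weights = [1 / s**2 for s in se]
pooled = sum(w * b for w, b in zip(weights, log_rr)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

rr = math.exp(pooled)
ci = (math.exp(pooled - 1.96 * pooled_se),
      math.exp(pooled + 1.96 * pooled_se))
print(round(rr, 3), [round(x, 3) for x in ci])
```

A multivariate meta-analysis generalizes this by pooling whole vectors of spline coefficients (with their covariance matrices), which is what lets the nonlinear and lagged temperature curves be combined across communities.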