540 results for predictive models
Abstract:
Organizations from every industry sector seek to enhance their business performance and competitiveness through the deployment of contemporary information systems (IS), such as Enterprise Systems (ERP). Investments in ERP are complex and costly, attracting scrutiny and pressure to justify their cost. Thus, IS researchers highlight the need for systematic evaluation of information system success, or impact, which has resulted in the introduction of varied models for evaluating information systems. One of these systematic measurement approaches is the IS-Impact Model introduced by a team of researchers at Queensland University of Technology (QUT) (Gable, Sedera, & Chan, 2008). The IS-Impact Model is conceptualized as a formative, multidimensional index that consists of four dimensions. Gable et al. (2008) define IS-Impact as "a measure at a point in time, of the stream of net benefits from the IS, to date and anticipated, as perceived by all key-user-groups" (p. 381). The IT Evaluation Research Program (ITE-Program) at QUT has grown the IS-Impact Research Track with the central goal of conducting further studies to enhance and extend the IS-Impact Model. The overall goal of the IS-Impact research track at QUT is "to develop the most widely employed model for benchmarking information systems in organizations for the joint benefit of both research and practice" (Gable, 2009). To achieve that, the IS-Impact research track advocates programmatic research guided by the principles of tenacity, holism, and generalizability through extension research strategies. This study was conducted within the IS-Impact Research Track to further generalize the IS-Impact Model by extending it to the Saudi Arabian context. According to Hofstede (2012), the national culture of Saudi Arabia is significantly different from the Australian national culture, making Saudi Arabia an interesting context for testing the external validity of the IS-Impact Model.
The study re-visits the IS-Impact Model from the ground up. Rather than assume the existing instrument is valid in the new context, or simply assess its validity through quantitative data collection, the study takes a qualitative, inductive approach to re-assessing the necessity and completeness of the existing dimensions and measures. This is done in two phases: an Exploratory Phase and a Confirmatory Phase. The Exploratory Phase addresses the first research question of the study, "Is the IS-Impact Model complete and able to capture the impact of information systems in Saudi Arabian organizations?". The content analysis used to analyze the Identification Survey data indicated that 2 of the 37 measures of the IS-Impact Model are not applicable to the Saudi Arabian context. Moreover, no new measures or dimensions were identified, evidencing the completeness and content validity of the IS-Impact Model. In addition, the Identification Survey data suggested several concepts related to IS-Impact, the most prominent of which was "Computer Network Quality" (CNQ). The literature supported the existence of a theoretical link between IS-Impact and CNQ (CNQ is viewed as an antecedent of IS-Impact). With the primary goal of validating the IS-Impact Model within its extended nomological network, CNQ was introduced to the research model. The Confirmatory Phase addresses the second research question of the study, "Is the extended IS-Impact Model valid as a hierarchical multidimensional formative measurement model?". The objective of the Confirmatory Phase was to test the validity of the IS-Impact Model and the CNQ Model. To achieve that, IS-Impact, CNQ, and IS-Satisfaction were operationalized in a survey instrument, and the research model was then assessed employing the Partial Least Squares (PLS) approach. The CNQ Model was validated as a formative model. Similarly, the IS-Impact Model was validated as a hierarchical multidimensional formative construct.
However, the analysis indicated that one of the IS-Impact Model indicators was insignificant and could be removed from the model. Thus, the resulting extended IS-Impact Model consists of 4 dimensions and 34 measures. Finally, the structural model was also assessed on two aspects: explanatory and predictive power. The analysis revealed that the path coefficient between CNQ and IS-Impact is significant (t-value = 4.826) and relatively strong (β = 0.426), with CNQ explaining 18% of the variance in IS-Impact. These results supported the hypothesis that CNQ is an antecedent of IS-Impact. The study demonstrates that the quality of the computer network affects the quality of the Enterprise System (ERP) and consequently the impacts of the system. Therefore, practitioners should pay attention to computer network quality. Similarly, the path coefficient between IS-Impact and IS-Satisfaction was significant (t-value = 17.79) and strong (β = 0.744), with IS-Impact alone explaining 55% of the variance in IS-Satisfaction, consistent with the results of the original IS-Impact study (Gable et al., 2008). The research contributions include: (a) supporting the completeness and validity of the IS-Impact Model as a hierarchical multidimensional formative measurement model in the Saudi Arabian context, (b) operationalizing Computer Network Quality as conceptualized in the ITU-T Recommendation E.800 (ITU-T, 1993), (c) validating CNQ as a formative measurement model and as an antecedent of IS-Impact, and (d) conceptualizing and validating IS-Satisfaction as a reflective measurement model and as an immediate consequence of IS-Impact. The CNQ Model provides a framework for perceptually measuring Computer Network Quality from multiple perspectives, featuring an easy-to-understand, easy-to-use, and economical survey instrument.
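As a minimal sketch of the kind of structural-model assessment this abstract reports (not the study's actual PLS analysis), the following computes a standardized path coefficient, its R², and a bootstrap t-value on synthetic data; the variable names (cnq, is_impact) and all values are illustrative only:

```python
import numpy as np

# Synthetic composite scores; a true path of about 0.4 plus noise.
rng = np.random.default_rng(0)
n = 200
cnq = rng.normal(size=n)
is_impact = 0.4 * cnq + rng.normal(size=n)

def path_coefficient(x, y):
    """Standardized regression coefficient of y on x (single predictor)."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return float(x @ y / len(x))

beta = path_coefficient(cnq, is_impact)
r_squared = beta ** 2  # with one predictor, R^2 is the squared path

# Bootstrap the coefficient to obtain a t-value (estimate / std. error),
# mirroring how PLS analyses judge path significance.
boot = []
for _ in range(500):
    idx = rng.integers(0, n, size=n)
    boot.append(path_coefficient(cnq[idx], is_impact[idx]))
t_value = beta / np.std(boot)
```

A t-value above roughly 1.96 is the conventional threshold for a significant path at the 5% level.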
Abstract:
Determining the properties and integrity of subchondral bone in the developmental stages of osteoarthritis, especially in a form that can facilitate real-time characterization for diagnostic and decision-making purposes, is still a matter for research and development. This paper presents relationships between near infrared absorption spectra and properties of subchondral bone obtained from 3 models of osteoarthritic degeneration induced in laboratory rats via: (i) meniscectomy (MSX); (ii) anterior cruciate ligament transection (ACL); and (iii) intra-articular injection of mono-iodoacetate (1 mg) (MIA), in the right knee joint, with 12 rats per model group (N = 36). After 8 weeks, the animals were sacrificed and knee joints were collected. A custom-made diffuse reflectance NIR probe of diameter 5 mm was placed on the tibial surface and spectral data were acquired from each specimen in the wavenumber range 4000–12 500 cm−1. After spectral acquisition, micro computed tomography (micro-CT) was performed on the samples and subchondral bone parameters, namely bone volume (BV) and bone mineral density (BMD), were extracted from the micro-CT data. Statistical correlation was then conducted between these parameters and regions of the near infrared spectra using multivariate techniques including principal component analysis (PCA), discriminant analysis (DA), and partial least squares (PLS) regression. Statistically significant linear correlations were found between the near infrared absorption spectra and subchondral bone BMD (R2 = 98.84%) and BV (R2 = 97.87%). In conclusion, near infrared spectroscopic probing can be used to detect, qualify and quantify changes in the composition of subchondral bone, and could potentially assist in distinguishing healthy from OA bone, as demonstrated with our laboratory rat models.
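A hedged sketch of correlating spectra with a bone property, using principal component regression rather than the paper's full PCA/DA/PLS pipeline: the synthetic "spectra" below are a single absorption band scaled by a BMD-like value plus noise, mirroring the idea that absorption varies with mineral content. Everything here is simulated, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_wavenumbers = 36, 300

# One Gaussian absorption band; each sample's spectrum scales with its BMD.
band = np.exp(-np.linspace(-3, 3, n_wavenumbers) ** 2)
bmd = rng.uniform(0.5, 1.5, n_samples)          # target property
spectra = np.outer(bmd, band) + 0.01 * rng.normal(size=(n_samples, n_wavenumbers))

# PCA via SVD on mean-centred spectra; keep leading components as scores.
X = spectra - spectra.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
scores = U[:, :3] * S[:3]

# Ordinary least squares of BMD on the PCA scores, then R^2.
design = np.column_stack([np.ones(n_samples), scores])
coef, *_ = np.linalg.lstsq(design, bmd, rcond=None)
pred = design @ coef
r2 = 1 - np.sum((bmd - pred) ** 2) / np.sum((bmd - bmd.mean()) ** 2)
```

With a clean simulated signal the fit is nearly perfect, which is qualitatively what very high reported R² values (here, above 97%) look like.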
Abstract:
Three-dimensional cellular models that mimic disease are being increasingly investigated and have opened an exciting new research area into understanding pathomechanisms. The advantage of 3D in vitro disease models is that they allow systematic and in-depth studies of physiological and pathophysiological processes at lower cost and with fewer of the ethical concerns that arise with animal models. The purpose of the 3D approach is to allow crosstalk between cells and microenvironment; with cues from the microenvironment, cells can assemble their niche similar to in vivo conditions. The use of 3D models for mimicking disease processes such as cancer and osteoarthritis is only now emerging and allows multidisciplinary teams of tissue engineers, biologists, biomaterial scientists and clinicians to work closely together. While in vitro systems require rigorous testing before they can be considered replicates of the in vivo model, major steps have been made, suggesting that they will become powerful tools for studying physiological and pathophysiological processes. This paper aims to summarize some of the existing 3D models and proposes a novel 3D model of the eye structures that are involved in the most common cause of blindness in the Western world, namely age-related macular degeneration (AMD).
Abstract:
Identifying the design features that impact construction is essential to developing cost effective and constructible designs. The similarity of building components is a critical design feature that affects method selection, productivity, and ultimately construction cost and schedule performance. However, there is limited understanding of what constitutes similarity in the design of building components and limited computer-based support to identify this feature in a building product model. This paper contributes a feature-based framework for representing and reasoning about component similarity that builds on ontological modelling, model-based reasoning and cluster analysis techniques. It describes the ontology we developed to characterize component similarity in terms of the component attributes, the direction, and the degree of variation. It also describes the generic reasoning process we formalized to identify component similarity in a standard product model based on practitioners' varied preferences. The generic reasoning process evaluates the geometric, topological, and symbolic similarities between components, creates groupings of similar components, and quantifies the degree of similarity. We implemented this reasoning process in a prototype cost estimating application, which creates and maintains cost estimates based on a building product model. Validation studies of the prototype system provide evidence that the framework is general and enables a more accurate and efficient cost estimating process.
Abstract:
Confusion exists as to the age of the Abor Volcanics of NE India. Some consider the unit to have been emplaced in the Early Permian, others the Early Eocene, a difference of ∼230 million years. The divergence in opinion is significant because fundamentally different models explaining the geotectonic evolution of India depend on the age designation of the unit. Paleomagnetic data reported here from several exposures in the type locality of the formation in the lower Siang Valley indicate that steep dipping primary magnetizations (mean = 72.7 ± 6.2°, equating to a paleo-latitude of 58.1°) are recorded in the formation. These are only consistent with the unit being of Permian age, possibly Artinskian based on a magnetostratigraphic argument. Plate tectonic models for this time consistently show the NE corner of the sub-continent >50°S; in the Early Eocene it was just north of the equator, which would have resulted in the unit recording shallow directions. The mean declination is counter-clockwise rotated by ∼94°, around half of which can be related to the motion of the Indian block; the remainder is likely due to local Himalayan-age thrusting in the Eastern Syntaxis. Several workers have correlated the Abor Volcanics with broadly coeval mafic volcanic suites in Oman, NE Pakistan–NW India and southern Tibet–Nepal, which developed in response to the Cimmerian block peeling off eastern Gondwana in the Early–Middle Permian, but we believe there are problems with this model. Instead, we suggest that the Abor basalts relate to India–Antarctica/India–Australia extension that was happening at about the same time. Such an explanation best accommodates the relevant stratigraphical and structural data (present-day position within the Himalayan thrust stack), as well as the plate tectonic model for Permian eastern Gondwana.
Abstract:
Particulate matter is common in our environment and has been linked to human health problems particularly in the ultrafine size range. A range of chemical species have been associated with particulate matter and of special concern are the hazardous chemicals that can accentuate health problems. If the sources of such particles can be identified then strategies can be developed for the reduction of air pollution and consequently, the improvement of the quality of life. In this investigation, particle number size distribution data and the concentrations of chemical species were obtained at two sites in Brisbane, Australia. Source apportionment was used to determine the sources (or factors) responsible for the particle size distribution data. The apportionment was performed by Positive Matrix Factorisation (PMF) and Principal Component Analysis/Absolute Principal Component Scores (PCA/APCS), and the results were compared with information from the gaseous chemical composition analysis. Although PCA/APCS resolved more sources, the results of the PMF analysis appear to be more reliable. Six common sources identified by both methods include: traffic 1, traffic 2, local traffic, biomass burning, and two unassigned factors. Thus motor vehicle related activities had the most impact on the data with the average contribution from nearly all sources to the measured concentrations higher during peak traffic hours and weekdays. Further analyses incorporated the meteorological measurements into the PMF results to determine the direction of the sources relative to the measurement sites, and this indicated that traffic on the nearby road and intersection was responsible for most of the factors. The described methodology which utilised a combination of three types of data related to particulate matter to determine the sources could assist future development of particle emission control and reduction strategies.
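As a hedged illustration of the factorisation at the heart of PMF (not the study's implementation, which additionally weights each entry by its measurement uncertainty), the toy below factorises a nonnegative samples × species matrix into source contributions and source profiles using plain multiplicative-update NMF on synthetic two-source data:

```python
import numpy as np

# Synthetic data: 100 samples, 8 chemical species, mixed from 2 sources.
rng = np.random.default_rng(2)
true_G = rng.uniform(0, 1, (100, 2))   # source contributions per sample
true_F = rng.uniform(0, 1, (2, 8))     # chemical profile of each source
X = true_G @ true_F + 0.01 * rng.uniform(size=(100, 8))

# Lee-Seung multiplicative updates keep G and F nonnegative throughout,
# which is the defining constraint shared with PMF.
k = 2
G = rng.uniform(0.1, 1, (100, k))
F = rng.uniform(0.1, 1, (k, 8))
for _ in range(300):
    F *= (G.T @ X) / (G.T @ G @ F + 1e-9)
    G *= (X @ F.T) / (G @ F @ F.T + 1e-9)

residual = np.linalg.norm(X - G @ F) / np.linalg.norm(X)
```

In a real apportionment study the recovered profiles in F would then be inspected and matched to physical sources (e.g. traffic, biomass burning).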
Abstract:
The need for a house rental model in Townsville, Australia is addressed. Models developed for predicting house rental levels are described. An analytical model is built upon a priori selected variables and parameters of rental levels. Regression models are generated to provide a comparison to the analytical model. Issues in model development and performance evaluation are discussed. A comparison of the models indicates that the analytical model performs better than the regression models.
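The regression approach mentioned above can be sketched as a simple hedonic rent model; the predictors, coefficients and data below are invented purely to illustrate the model form, not drawn from the Townsville study:

```python
import numpy as np

# Synthetic listings: weekly rent driven by bedrooms and distance to CBD.
rng = np.random.default_rng(3)
n = 120
bedrooms = rng.integers(1, 5, n).astype(float)
distance_km = rng.uniform(1, 20, n)
rent = 150 + 80 * bedrooms - 4 * distance_km + rng.normal(0, 15, n)

# Ordinary least squares: rent = b0 + b1*bedrooms + b2*distance.
X = np.column_stack([np.ones(n), bedrooms, distance_km])
coef, *_ = np.linalg.lstsq(X, rent, rcond=None)  # [intercept, per-bedroom, per-km]
pred = X @ coef
r2 = 1 - np.sum((rent - pred) ** 2) / np.sum((rent - rent.mean()) ** 2)
```

Comparing such a fitted model's R² and out-of-sample error against an a priori analytical model is the kind of evaluation the abstract describes.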
Abstract:
This report is the second deliverable of the Real Time and Predictive Traveller Information project and the first deliverable of the Freeway Travel Time Information sub-project in the Integrated Traveller Information Research Domain of the Smart Transport Research Centre. The primary objective of the Freeway Travel Time Information sub-project is to develop algorithms for real-time travel time estimation and prediction models for freeway traffic. The objective of this report is to review the literature pertaining to travel time estimation and prediction models for freeway traffic.
Abstract:
This report is the fourth deliverable of the Real Time and Predictive Traveller Information project and the first deliverable of the Arterial Travel Time Information sub-project in the Integrated Traveller Information Research Domain of the Smart Transport Research Centre. The primary objective of the Arterial Travel Time Information sub-project is to develop algorithms for real-time travel time estimation and prediction models for arterial traffic. The objective of this report is to review the literature pertaining to travel time estimation and prediction models for arterial traffic.
Abstract:
This report is the eighth deliverable of the Real Time and Predictive Traveller Information project and the third deliverable of the Arterial Travel Time Information sub-project in the Integrated Traveller Information Research Domain of the Smart Transport Research Centre. The primary objective of the Arterial Travel Time Information sub-project is to develop algorithms for real-time travel time estimation and prediction models for arterial traffic. The Brisbane arterial network is densely equipped with Bluetooth MAC Scanners, which can provide travel time information, yet the literature offers limited knowledge of the Bluetooth-protocol-based data acquisition process and of the accuracy and reliability of analyses performed using the data. This report expands the body of knowledge surrounding the use of data from Bluetooth MAC Scanners (BMS) as a complementary traffic data source. A multi-layer simulation model named Traffic and Communication Simulation (TCS) is developed. TCS is utilised to model the theoretical properties of the BMS data and to analyse the accuracy and reliability of travel time estimation using the BMS data.
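The core of BMS-based travel time estimation can be sketched as matching a device's MAC address between two scanner sites and differencing the detection timestamps; the site names, MAC values and times below are illustrative only, not drawn from the Brisbane network:

```python
from statistics import median

# Detection logs at two scanner sites: MAC address -> detection time (s).
upstream = {"aa:01": 100.0, "aa:02": 105.0, "aa:03": 111.0}
downstream = {"aa:01": 190.0, "aa:03": 215.0, "bb:09": 300.0}

def travel_times(up, down):
    """Travel times for MACs seen at both sites; unmatched devices ignored."""
    return {mac: down[mac] - up[mac] for mac in up.keys() & down.keys()}

times = travel_times(upstream, downstream)
estimate = median(times.values())  # robust link travel time estimate
```

Real BMS processing must additionally handle repeated detections within a scanner's range and outliers (e.g. vehicles that stop between sites), which is where accuracy and reliability questions arise.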
Abstract:
Automated process discovery techniques aim at extracting models from information system logs in order to shed light on the business processes supported by these systems. Existing techniques in this space are effective when applied to relatively small or regular logs, but otherwise generate large and spaghetti-like models. In previous work, trace clustering has been applied in an attempt to reduce the size and complexity of automatically discovered process models. The idea is to split the log into clusters and to discover one model per cluster. The result is a collection of process models -- each one representing a variant of the business process -- as opposed to an all-encompassing model. Still, models produced in this way may exhibit unacceptably high complexity. In this setting, this paper presents a two-way divide-and-conquer process discovery technique, wherein the discovered process models are split on the one hand by variants and on the other hand hierarchically by means of subprocess extraction. The proposed technique allows users to set a desired bound for the complexity of the produced models. Experiments on real-life logs show that the technique produces collections of models that are up to 64% smaller than those extracted under the same complexity bounds by applying existing trace clustering techniques.
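A toy stand-in for the variant-splitting idea (far simpler than the paper's technique): cluster traces by the set of activities they use, then discover one directly-follows graph per cluster. The log and activity names are invented for illustration:

```python
from collections import defaultdict

# A tiny event log: each trace is an ordered list of activities.
log = [
    ["register", "check", "approve"],
    ["register", "check", "approve"],
    ["register", "check", "reject"],
    ["register", "escalate", "reject"],
]

# "Cluster" traces by the set of activities they contain -- a crude
# proxy for trace clustering by variant.
clusters = defaultdict(list)
for trace in log:
    clusters[frozenset(trace)].append(trace)

def directly_follows(traces):
    """Edges a->b whenever b immediately follows a in some trace."""
    edges = set()
    for t in traces:
        edges.update(zip(t, t[1:]))
    return edges

# One small model per cluster instead of one all-encompassing model.
models = {key: directly_follows(traces) for key, traces in clusters.items()}
```

Each per-cluster model is smaller and more readable than a single graph built from the whole log, which is the motivation the abstract gives for splitting by variant.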
Abstract:
Self-reported driving behaviour in the occupational driving context has typically been measured through scales adapted from the general driving population (i.e. the Manchester Driver Behaviour Questionnaire (DBQ)). However, research suggests that occupational driving is influenced by unique factors operating within the workplace environment, and thus a behavioural scale should reflect those behaviours prevalent and unique within that driving context. To overcome this limitation, researchers developed the Occupational Driver Behaviour Questionnaire (ODBQ), which utilises a relevant theoretical model to assess the impact of the broader workplace context on driving behaviour. Although the theoretical argument has been established, research is yet to examine whether the ODBQ or the DBQ is the more sensitive measure of the workplace context. As such, this paper identifies selected organisational factors (i.e. safety climate and role overload) as predictors of the DBQ and the ODBQ and compares their relative predictive value in both models. In undertaking this task, 248 occupational drivers were recruited from a community-oriented nursing population. As predicted, hierarchical regression analyses revealed that the organisational factors accounted for a significantly greater proportion of variance in the ODBQ than in the DBQ. These findings offer a number of practical and theoretical applications for occupational driving practice and future research.
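The hierarchical regression logic (fit a baseline model, add the organisational predictors, compare the gain in R²) can be sketched on synthetic data; the control variable, effect sizes and sample below are invented, not the study's:

```python
import numpy as np

# Synthetic sample of 248 drivers, matching the abstract's N only.
rng = np.random.default_rng(4)
n = 248
age = rng.normal(40, 10, n)                  # hypothetical step-1 control
safety_climate = rng.normal(size=n)          # organisational predictors
role_overload = rng.normal(size=n)
outcome = (0.1 * (age - 40) / 10
           + 0.5 * safety_climate
           + 0.3 * role_overload
           + rng.normal(size=n))

def r_squared(predictors, y):
    """R^2 of an OLS fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), predictors])
    pred = X @ np.linalg.lstsq(X, y, rcond=None)[0]
    return 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)

r2_step1 = r_squared(age.reshape(-1, 1), outcome)
r2_step2 = r_squared(np.column_stack([age, safety_climate, role_overload]), outcome)
delta_r2 = r2_step2 - r2_step1   # variance added by organisational factors
```

A larger ΔR² for one scale than another is what "accounted for a significantly greater proportion of variance" amounts to operationally.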
Abstract:
The current study examined the structure of the volunteer functions inventory within a sample of older individuals (N = 187). The career items were replaced with items examining the concept of continuity of work, a potentially more useful and relevant concept for this population. Factor analysis supported a four-factor solution, with values, social and continuity emerging as single factors and enhancement and protective items loading together on a single factor. Understanding items did not load highly on any factor. The values and continuity functions were the only dimensions to emerge as predictors of intention to volunteer. This research has important implications for understanding the motivation of older adults to engage in contemporary volunteering settings.