Abstract:
This study is conducted within the IS-Impact Research Track at Queensland University of Technology (QUT). The goal of the IS-Impact Track is "to develop the most widely employed model for benchmarking information systems in organizations for the joint benefit of both research and practice" (Gable et al., 2006). IS-Impact is defined as "a measure at a point in time, of the stream of net benefits from the IS [Information System], to date and anticipated, as perceived by all key-user-groups" (Gable, Sedera and Chan, 2008). Track efforts have yielded the bicameral IS-Impact measurement model; the "impact" half includes the Organizational-Impact and Individual-Impact dimensions; the "quality" half includes the System-Quality and Information-Quality dimensions. The IS-Impact model, by design, is intended to be robust, simple and generalisable, to yield results that are comparable across time, stakeholders, different systems and system contexts. The model and measurement approach employ perceptual measures and an instrument that is relevant to key stakeholder groups, thereby enabling the combination or comparison of stakeholder perspectives. Such a validated and widely accepted IS-Impact measurement model has both academic and practical value. It facilitates systematic operationalisation of a main dependent variable in research (IS-Impact), which can also serve as an important independent variable. For IS management practice it provides a means to benchmark and track the performance of information systems in use. From examination of the literature, the study proposes that IS-Impact is an Analytic Theory. Gregor (2006) defines Analytic Theory simply as theory that ‘says what is’, base theory that is foundational to all other types of theory. The overarching research question thus is "Does IS-Impact positively manifest the attributes of Analytic Theory?" In order to address this question, we must first answer the question "What are the attributes of Analytic Theory?"
The study identifies the main attributes of analytic theory as: (1) Completeness, (2) Mutual Exclusivity, (3) Parsimony, (4) Appropriate Hierarchy, (5) Utility, and (6) Intuitiveness. The value of empirical research in Information Systems is often assessed along two main dimensions - rigor and relevance. Those Analytic Theory attributes associated with the ‘rigor’ of the IS-Impact model - namely, completeness, mutual exclusivity, parsimony and appropriate hierarchy - have been addressed in prior research (e.g. Gable et al., 2008). Though common tests of rigor are widely accepted and relatively uniformly applied (particularly in relation to positivist, quantitative research), relevance has seldom been given the same systematic attention. This study assumes a mainly practice perspective, and emphasises the methodical evaluation of the Analytic Theory ‘relevance’ attributes represented by the Utility and Intuitiveness of the IS-Impact model. Thus, related research questions are: "Is the IS-Impact model intuitive to practitioners?" and "Is the IS-Impact model useful to practitioners?" March and Smith (1995) identify four outputs of Design Science: constructs, models, methods and instantiations (Design Science research may involve one or more of these). IS-Impact can be viewed as a design science model, composed of Design Science constructs (the four IS-Impact dimensions and the two model halves), and instantiations in the form of management information (IS-Impact data organised and presented for management decision making). In addition to methodically evaluating the Utility and Intuitiveness of the IS-Impact model and its constituent constructs, the study aims also to evaluate the derived management information. Thus, further research questions are: "Is the IS-Impact derived management information intuitive to practitioners?" and "Is the IS-Impact derived management information useful to practitioners?"
The study employs a longitudinal design entailing three surveys over four years (the first involving secondary data) of the Oracle Financials application at QUT, interspersed with focus groups involving senior financial managers. The study also entails a survey of Financials at four other Australian universities. The three focus groups respectively emphasise: (1) the IS-Impact model, (2) the second survey at QUT (descriptive), and (3) comparison across surveys within QUT, and between QUT and the group of universities. Aligned with the track goal of producing IS-Impact scores that are highly comparable, the study also addresses the more specific utility-related questions: "Is IS-Impact derived management information a useful comparator across time?" and "Is IS-Impact derived management information a useful comparator across universities?" The main contribution of the study is evidence of the utility and intuitiveness of IS-Impact to practice, thereby further substantiating the practical value of the IS-Impact approach, and also thereby motivating continuing and further research on the validity of IS-Impact, and research employing the IS-Impact constructs in descriptive, predictive and explanatory studies. The study also has value methodologically as an example of relatively rigorous attention to relevance. A further key contribution is the clarification and instantiation of the full set of analytic theory attributes.
Abstract:
This thesis applies Monte Carlo techniques to the study of X-ray absorptiometric methods of bone mineral measurement. These studies seek to obtain information that can be used in efforts to improve the accuracy of the bone mineral measurements. A Monte Carlo computer code for X-ray photon transport at diagnostic energies has been developed from first principles. This development was undertaken as there was no readily available code which included electron binding energy corrections for incoherent scattering and one of the objectives of the project was to study the effects of inclusion of these corrections in Monte Carlo models. The code includes the main Monte Carlo program plus utilities for dealing with input data. A number of geometrical subroutines which can be used to construct complex geometries have also been written. The accuracy of the Monte Carlo code has been evaluated against the predictions of theory and the results of experiments. The results show a high correlation with theoretical predictions. In comparisons of model results with those of direct experimental measurements, agreement to within the model and experimental variances is obtained. The code is an accurate and valid modelling tool. A study of the significance of inclusion of electron binding energy corrections for incoherent scatter in the Monte Carlo code has been made. The results show this significance to be very dependent upon the type of application. The most significant effect is a reduction of low angle scatter flux for high atomic number scatterers. To effectively apply the Monte Carlo code to the study of bone mineral density measurement by photon absorptiometry the results must be considered in the context of a theoretical framework for the extraction of energy dependent information from planar X-ray beams. Such a theoretical framework is developed and the two-dimensional nature of tissue decomposition based on attenuation measurements alone is explained. 
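The core of any such photon-transport code is inverse-transform sampling of the exponential attenuation law. The following is a minimal illustrative sketch (a uniform slab and an assumed attenuation coefficient, not the thesis code):

```python
import math
import random

def simulate_slab_transmission(n_photons, mu_total, thickness, seed=1):
    """Estimate the fraction of photons crossing a uniform slab without
    interacting, by sampling free-path lengths from the exponential
    attenuation law p(x) = mu * exp(-mu * x)."""
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_photons):
        # Inverse-transform sampling of the distance to first interaction;
        # 1 - rng.random() avoids log(0).
        path = -math.log(1.0 - rng.random()) / mu_total
        if path >= thickness:
            transmitted += 1
    return transmitted / n_photons

# Analytic check: the uncollided-beam transmission is exp(-mu * t).
estimate = simulate_slab_transmission(100_000, mu_total=0.2, thickness=5.0)
```

A real code of the kind described adds interaction-type sampling (photoelectric, coherent, incoherent with binding corrections) and geometry handling on top of this loop.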
This theoretical framework forms the basis for analytical models of bone mineral measurement by dual energy X-ray photon absorptiometry techniques. Monte Carlo models of dual energy X-ray absorptiometry (DEXA) have been established. These models have been used to study the contribution of scattered radiation to the measurements. It has been demonstrated that the measurement geometry has a significant effect upon the scatter contribution to the detected signal. For the geometry of the models studied in this work the scatter has no significant effect upon the results of the measurements. The model has also been used to study a proposed technique which involves dual energy X-ray transmission measurements plus a linear measurement of the distance along the ray path. This is designated as the DPA(+) technique. The addition of the linear measurement enables the tissue decomposition to be extended to three components. Bone mineral, fat and lean soft tissue are the components considered here. The results of the model demonstrate that the measurement of bone mineral using this technique is stable over a wide range of soft tissue compositions and hence would indicate the potential to overcome a major problem of the two component DEXA technique. However, the results also show that the accuracy of the DPA(+) technique is highly dependent upon the composition of the non-mineral components of bone and has poorer precision (approximately twice the coefficient of variation) than the standard DEXA measurements. These factors may limit the usefulness of the technique. These studies illustrate the value of Monte Carlo computer modelling of quantitative X-ray measurement techniques. The Monte Carlo models of bone densitometry measurement have: (1) demonstrated the significant effects of the measurement geometry upon the contribution of scattered radiation to the measurements; (2) demonstrated that the statistical precision of the proposed DPA(+) three tissue component technique is poorer than that of the standard DEXA two tissue component technique; (3) demonstrated that the proposed DPA(+) technique has difficulty providing accurate simultaneous measurement of body composition in terms of a three component model of fat, lean soft tissue and bone mineral; and (4) provided a knowledge base for input to decisions about development (or otherwise) of a physical prototype DPA(+) imaging system. The Monte Carlo computer code, data, utilities and associated models represent a set of significant, accurate and valid modelling tools for quantitative studies of physical problems in the fields of diagnostic radiology and radiography.
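The two-component tissue decomposition underlying DEXA can be written as a 2x2 linear system in the log-attenuations measured at the two energies. A minimal sketch, with purely illustrative mass attenuation coefficients (not values from the thesis):

```python
def dexa_decompose(logI_low, logI_high, mu_bone, mu_soft):
    """Solve the 2x2 system
        logI_low  = mu_bone[0]*t_bone + mu_soft[0]*t_soft
        logI_high = mu_bone[1]*t_bone + mu_soft[1]*t_soft
    for the areal densities (t_bone, t_soft) by Cramer's rule."""
    det = mu_bone[0] * mu_soft[1] - mu_soft[0] * mu_bone[1]
    t_bone = (logI_low * mu_soft[1] - mu_soft[0] * logI_high) / det
    t_soft = (mu_bone[0] * logI_high - logI_low * mu_bone[1]) / det
    return t_bone, t_soft

# Illustrative coefficients (cm^2/g) at a low and a high energy;
# bone mineral attenuates much more strongly at the low energy.
mu_bone = (3.0, 0.5)
mu_soft = (0.25, 0.18)

# Forward-project a known composition, then recover it.
tb, ts = 1.2, 20.0
low = mu_bone[0] * tb + mu_soft[0] * ts
high = mu_bone[1] * tb + mu_soft[1] * ts
recovered = dexa_decompose(low, high, mu_bone, mu_soft)
```

The DPA(+) extension described above adds a path-length measurement as a third equation, allowing a third component to be solved for.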
Abstract:
Nature Refuges encompass the second largest extent of protected area estate in Queensland. Major problems exist in the data capture, map presentation, data quality and integrity of these boundaries. The spatial accuracies/inaccuracies of the Nature Refuge administrative boundaries directly influence the ability to preserve valuable ecosystems by challenging negative environmental impacts on these properties. This research work is about supporting the Nature Refuge Program's efforts to secure Queensland's natural and cultural values on private land by utilising GIS and its advanced functionalities. The research design organizes and enters Queensland's Nature Refuge boundaries into a spatial environment. Survey quality data collection techniques such as the Global Positioning System (GPS) are investigated to capture Nature Refuge boundary information. Using the concepts of map communication, GIS cartography is utilised for the protected area plan design. New spatial datasets are generated, facilitating the effectiveness of investigative data analysis. The geodatabase model developed by this study adds rich GIS behaviour, providing the capability to store, query, and manipulate geographic information. It provides the ability to leverage data relationships and enforces topological integrity, creating savings in customization and productivity. The final phase of the research design incorporates the advanced functions of ArcGIS. These functions facilitate building spatial system models. The geodatabase and process models developed by this research can be easily modified, and the data relating to mining can be replaced by other negative environmental impacts affecting the Nature Refuges. Results of the research are presented as graphs and maps providing visual evidence supporting the usefulness of GIS as a means for capturing, visualising and enhancing spatial quality and integrity of Nature Refuge boundaries.
Abstract:
In order to estimate the safety impact of roadway interventions, engineers need to collect, analyze, and interpret the results of carefully implemented data collection efforts. The intent of these studies is to develop Accident Modification Factors (AMFs), which are used to predict the safety impact of various road safety features at other locations or in future enhancements. Models are typically estimated for total crashes, but can and should be estimated for crash outcomes as well. This paper first describes data collected with the intent to estimate AMFs for rural intersections in the state of Georgia within the United States. Modeling results of crash prediction models for the crash outcomes angle, head-on, rear-end, sideswipe (same direction and opposite direction) and pedestrian-involved crashes are then presented and discussed. The analysis reveals that factors such as the Annual Average Daily Traffic (AADT), the presence of turning lanes, and the number of driveways have a positive association with each type of crash, while the median width and the presence of lighting are negatively associated with crashes. The model covariates are related to crash outcomes in different ways, suggesting that crash outcomes are associated with different pre-crash conditions.
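In log-linear crash prediction models of the kind described, an AMF for a binary feature reduces to the exponential of its coefficient. A brief sketch with hypothetical coefficients (not the paper's estimates):

```python
import math

def expected_crashes(beta0, beta_aadt, log_aadt, beta_feature, has_feature):
    """Expected crash frequency from a log-linear (Poisson-family) model:
    mu = exp(beta0 + beta_aadt*ln(AADT) + beta_feature*feature)."""
    feature = 1.0 if has_feature else 0.0
    return math.exp(beta0 + beta_aadt * log_aadt + beta_feature * feature)

# Hypothetical coefficients only, for illustration.
beta0, beta_aadt, beta_turn = -5.0, 0.6, 0.25
log_aadt = math.log(8000)
with_lane = expected_crashes(beta0, beta_aadt, log_aadt, beta_turn, True)
without_lane = expected_crashes(beta0, beta_aadt, log_aadt, beta_turn, False)

# The AMF is the ratio of expected crashes with and without the feature,
# which for a binary covariate reduces to exp(beta_turn).
amf = with_lane / without_lane
```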
Abstract:
Persistent use of safety restraints prevents deaths and reduces the severity and number of injuries resulting from motor vehicle crashes. However, safety-restraint use rates in the United States have been below those of other nations with safety-restraint enforcement laws. With a better understanding of the relationship between safety-restraint law enforcement and safety-restraint use, programs can be implemented to decrease the number of deaths and injuries resulting from motor vehicle crashes. Does safety-restraint use increase as enforcement increases? Do motorists increase their safety-restraint use in response to the general presence of law enforcement or to targeted law enforcement efforts? Does a relationship between enforcement and restraint use exist at the countywide level? A logistic regression model was estimated by using county-level safety-restraint use data and traffic citation statistics collected in 13 counties within the state of Florida in 1997. The model results suggest that safety-restraint use is positively correlated with enforcement intensity, is negatively correlated with safety-restraint enforcement coverage (in lane-miles of enforcement coverage), and is greater in urban than rural areas. The quantification of these relationships may assist Florida and other law enforcement agencies in raising safety-restraint use rates by allocating limited funds more efficiently, either by allocating additional time for enforcement activities of the existing force or by increasing enforcement staff. In addition, the research supports a commonsense notion that enforcement activities do result in behavioral response.
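A county-level logistic model of the kind estimated here maps enforcement covariates to a predicted restraint-use probability. A minimal sketch with hypothetical coefficients whose signs match those reported (positive for intensity, negative for coverage, positive for urban):

```python
import math

def restraint_use_probability(intensity, coverage_lane_miles, urban, beta):
    """Predicted probability of safety-restraint use from a logistic model:
    p = 1 / (1 + exp(-(b0 + b1*intensity + b2*coverage + b3*urban)))."""
    b0, b1, b2, b3 = beta
    z = b0 + b1 * intensity + b2 * coverage_lane_miles + b3 * (1.0 if urban else 0.0)
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical coefficients, signs as in the abstract's findings.
beta = (0.1, 0.02, -0.001, 0.4)
p_urban = restraint_use_probability(50, 200, True, beta)
p_rural = restraint_use_probability(50, 200, False, beta)
```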
Abstract:
Purpose – The paper aims to explore the key competitiveness indicators (KCIs) that provide the guidelines for helping new real estate developers (REDs) achieve competitiveness during their inception stage, in which the organisations start their business. Design/methodology/approach – The research was conducted using a combination of methods. A literature review was undertaken to provide a proper theoretical understanding of organisational competitiveness within REDs' activities and to develop a framework of competitiveness indicators (CIs) for REDs. The Delphi forecasting method was employed to investigate a group of 20 experts' perceptions of the relative importance of the CIs. Findings – The results show that the KCIs of new REDs are capital operation capability, entrepreneurship, land reserve capability, high sales revenue from the first real estate development project, and innovation capability. Originality/value – The five KCIs of new REDs are new. In practical terms, the examination of these KCIs would help the business managers of new REDs to effectively plan their business by focusing their efforts on these key indicators. The KCIs can also help REDs provide theoretical constructs of the knowledge base on organisational competitiveness from a dynamic perspective, and assist in providing valuable experiences and in formulating feasible strategies for survival and growth.
Abstract:
Key resource areas (KRAs), defined as dry season foraging zones for herbivores, were studied relative to the more extensive outlying rangeland areas (non-KRAs) in Kenya. Field surveys with pastoralists, ranchers, scientists and government officials delineated KRAs on the ground. Identified KRAs were mapped based on global positioning and local experts' information on KRA accessibility and ecological attributes. Using the map of known KRAs and non-KRAs, we examined characteristics of soils, climate, topography, and land use/cover attributes at KRAs relative to non-KRAs. How and why do some areas (KRAs) support herbivores during droughts when forage is scarce in other areas of the landscape? We hypothesized that KRAs have fundamental ecological and socially determined attributes that enable them to provide forage during critical times, and we sought to characterize some of those attributes in this study. At the landscape level, KRAs took different forms based on forage availability during the dry season, but generally occurred in locations of the landscape with aseasonal water availability and/or areas that are difficult to access during wet season forage abundance. Greenness trends for KRAs versus non-KRAs were evaluated with a 22-year dataset of Normalized Difference Vegetation Index (NDVI). Field surveys of KRAs provided qualitative information on KRAs as dry season foraging zones. At the scale of the study, soil attributes did not significantly differ for KRAs compared to non-KRAs. Slopes of KRAs were generally steeper compared to non-KRAs, and elevation was higher at KRAs. Field survey respondents indicated that animals and humans generally avoid difficult-to-access hilly areas, using them only when all other easily accessible rangeland is depleted of forage during droughts.
Understanding the nature of KRAs will support identification, protection and restoration of critical forage hotspots for herbivores by strengthening rangeland inventory, monitoring, policy formulation, and conservation efforts to improve habitats and human welfare.
Abstract:
The design and implementation of a high-power (2 MW peak) vector control drive is described. The inverter switching frequency is low, resulting in high-harmonic-content current waveforms. A block diagram of the physical system is given, and each component is described in some detail. The problem of commanded slip noise sensitivity, inherent in high-power vector control drives, is discussed, and a solution is proposed. Results are given which demonstrate the successful functioning of the system.
Abstract:
Forces of demand and supply are changing the dynamics of the higher education market. Transformation of institutions of higher learning into competitive enterprises is underway. Higher education institutions are seemingly under intense pressure to create value and focus their efforts and scarce funds on activities that drive up value for their respective customers and other stakeholders. Porter's generic ‘value chain' model for creating value requires that the activities of an organization be segregated into discrete components for value chain analysis to be performed. Recent trends in higher education make such segregation possible. Therefore, it is proposed that the academic process can be unbundled into discrete components which have well developed measures. A reconfigured value chain for higher education, with its own value drivers and critical internal linkages, is also proposed in this paper.
Abstract:
The Government of the Hong Kong SAR sponsored a report investigating the Hong Kong construction industry and published the investigating committee's findings in 2001 (HK CIRC 2001). Since then, the Provisional Construction Industry Coordination Board (PCICB) and its successor, the Construction Industry Council (CIC), also set up by the Government, have made progress with the necessary reforms. Now that seven years have passed, it is time for an independent evaluation of the impact of the CIRC initiative in order to assist the CIC and the Government decision-makers in refining the efforts to improve the industry's performance. This paper reports on the interim results of a study that seeks to provide such an evaluation.
Abstract:
The proposals arising from the agreement reached between the Rudd government and the States and Territories (except Western Australia) in April 2010 represent the most fundamental realignment of health responsibilities since the creation of Medicare in 1984. They will change the health system, and the structures that will craft its future direction and design. These proposals will have a significant impact on Emergency Medicine; an impact from not only the system-wide effects of the proposals but also those that derive from the specific recommendations to create an activity-based funding mechanism for EDs, to implement the four hour rule and to develop a performance indicator framework for EDs. The present paper will examine the potential impact of the proposals on Emergency Medicine to inform those who work within the system and to help guide further developments. More work is required to better evaluate the proposals and to guide the design and development of specific reform instruments. Any such efforts should be based upon a proper analysis of the available evidence, and a structured approach to research and development so as to deliver on improved services to the community, and on improved quality and safety of emergency medical care.
Abstract:
Predicting safety on roadways is standard practice for road safety professionals and has a corresponding extensive literature. The majority of safety prediction models are estimated using roadway segment and intersection (microscale) data, while more recently efforts have been undertaken to predict safety at the planning level (macroscale). Safety prediction models typically include roadway, operations, and exposure variables—factors known to affect safety in fundamental ways. Environmental variables, in particular variables attempting to capture the effect of rain on road safety, are difficult to obtain and have rarely been considered. In the few cases where weather variables have been included, historical averages rather than actual weather conditions during which crashes are observed have been used. Without the inclusion of weather-related variables, researchers have had difficulty explaining regional differences in the safety performance of various entities (e.g. intersections, road segments, highways, etc.). As part of the NCHRP 8-44 research effort, researchers developed PLANSAFE, or planning level safety prediction models. These models make use of socio-economic, demographic, and roadway variables for predicting planning level safety. Accounting for regional differences - similar to the experience for microscale safety models - has been problematic during the development of planning level safety prediction models. More specifically, without weather-related variables there is an insufficient set of variables for explaining safety differences across regions and states. Furthermore, omitted variable bias resulting from excluding these important variables may adversely impact the coefficients of included variables, thus contributing to difficulty in model interpretation and accuracy.
This paper summarizes the results of an effort to include weather-related variables, particularly various measures of rainfall, into models of accident frequency and of the frequency of fatal and/or injury crashes. The purpose of the study was to determine whether these variables do in fact improve the overall goodness of fit of the models, whether these variables may explain some or all of the observed regional differences, and to identify the estimated effects of rainfall on safety. The models are based on Traffic Analysis Zone level datasets from Michigan, and from Pima and Maricopa Counties in Arizona. Numerous rain-related variables were found to be statistically significant, selected rain-related variables improved the overall goodness of fit, and inclusion of these variables reduced the portion of the model explained by the constant in the base models without weather variables. Rain tends to diminish safety, as expected, in fairly complex ways, depending on rain frequency and intensity.
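Whether added rainfall variables improve goodness of fit can be assessed with a likelihood-ratio test between the nested models. A minimal sketch with hypothetical log-likelihood values (not figures from the paper):

```python
def likelihood_ratio_stat(loglik_restricted, loglik_full):
    """LR statistic for nested models: 2*(LL_full - LL_restricted),
    asymptotically chi-squared with df = number of added variables."""
    return 2.0 * (loglik_full - loglik_restricted)

# Hypothetical log-likelihoods: a base crash-frequency model versus the
# same model with three added rainfall variables.
lr = likelihood_ratio_stat(-1520.4, -1506.9)

# Compare against the chi-squared critical value with 3 df (7.81 at 5%).
improves_fit = lr > 7.81
```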
Abstract:
Considerable past research has explored relationships between vehicle accidents and geometric design and operation of road sections, but relatively little research has examined factors that contribute to accidents at railway-highway crossings. Between 1998 and 2002 in Korea, about 95% of railway accidents occurred at highway-rail grade crossings, resulting in 402 accidents, of which about 20% resulted in fatalities. These statistics suggest that efforts to reduce crashes at these locations may significantly reduce crash costs. The objective of this paper is to examine factors associated with railroad crossing crashes. Various statistical models are used to examine the relationships between crossing accidents and features of crossings. The paper also compares accident models developed in the United States and the safety effects of crossing elements obtained using Korea data. Crashes were observed to increase with total traffic volume and average daily train volumes. The proximity of crossings to commercial areas and the distance of the train detector from crossings are associated with larger numbers of accidents, as is the time duration between the activation of warning signals and gates. The unique contributions of the paper are the application of the gamma probability model to deal with underdispersion and the insights obtained regarding railroad crossing related vehicle crashes.
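The gamma model is motivated by underdispersion, i.e. crash-count variance below the mean, which the Poisson model (variance equal to the mean) cannot accommodate. A simple diagnostic is the sample variance-to-mean ratio; a sketch with hypothetical counts:

```python
def dispersion_ratio(counts):
    """Sample variance-to-mean ratio of crash counts.
    A ratio < 1 suggests underdispersion (motivating e.g. a gamma count
    model); > 1 suggests overdispersion (motivating a negative binomial)."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / (n - 1)
    return var / mean

# Hypothetical crossing-level annual crash counts showing underdispersion:
# the counts cluster tightly around their mean.
counts = [2, 3, 2, 3, 2, 3, 3, 2]
ratio = dispersion_ratio(counts)
```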
Abstract:
It is important to examine the nature of the relationships between roadway, environmental, and traffic factors and motor vehicle crashes, with the aim to improve the collective understanding of the causal mechanisms involved in crashes and to better predict their occurrence. Statistical models of motor vehicle crashes are one path of inquiry often used to gain these initial insights. Recent efforts have focused on the estimation of negative binomial and Poisson regression models (and related variants) due to their relatively good fit to crash data. Of course, analysts constantly seek methods that offer greater consistency with the data generating mechanism (motor vehicle crashes in this case), provide better statistical fit, and provide insight into data structure that was previously unavailable. One such opportunity exists with some types of crash data, in particular crash-level data that are collected across roadway segments, intersections, etc. It is argued in this paper that some crash data possess hierarchical structure that has not routinely been exploited. This paper describes the application of binomial multilevel models of crash types using 548 motor vehicle crashes collected from 91 two-lane rural intersections in the state of Georgia. Crash prediction models are estimated for angle, rear-end, and sideswipe (both same direction and opposite direction) crashes. The contributions of the paper are the realization of hierarchical data structure and the application of a theoretically appealing and suitable analysis approach for multilevel data, yielding insights into intersection-related crashes by crash type.
Abstract:
This paper addresses the problem of constructing consolidated business process models out of collections of process models that share common fragments. The paper considers the construction of unions of multiple models (called merged models) as well as intersections (called digests). Merged models are intended for analysts who wish to create a model that subsumes a collection of process models - typically representing variants of the same underlying process - with the aim of replacing the variants with the merged model. Digests, on the other hand, are intended for analysts who wish to identify the most recurring fragments across a collection of process models, so that they can focus their efforts on optimizing these fragments. The paper presents an algorithm for computing merged models and an algorithm for extracting digests from a merged model. The merging and digest extraction algorithms have been implemented and tested against collections of process models taken from multiple application domains. The tests show that the merging algorithm produces compact models and scales up to process models containing hundreds of nodes. Furthermore, a case study conducted in a large insurance company has demonstrated the usefulness of the merging and digest extraction operators in a practical setting.
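At their simplest, if process variants are viewed as sets of control-flow edges, the merged model corresponds to a union and the digest to an intersection of shared fragments. A deliberately simplified sketch (the paper's algorithms additionally handle node matching, annotations and configurability):

```python
def merge_models(models):
    """Simplified 'merged model': the union of the variants' edges."""
    merged = set()
    for edges in models:
        merged |= edges
    return merged

def extract_digest(models):
    """Simplified 'digest': the edges shared by every variant."""
    digest = set(models[0])
    for edges in models[1:]:
        digest &= edges
    return digest

# Two hypothetical process variants as sets of (source, target) edges.
v1 = {("start", "check claim"), ("check claim", "approve"), ("approve", "end")}
v2 = {("start", "check claim"), ("check claim", "reject"), ("reject", "end")}

merged = merge_models([v1, v2])
digest = extract_digest([v1, v2])
```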