479 results for Topological Construct
Abstract:
Objectives: To develop a Self-Efficacy Questionnaire for Chinese Family Caregivers (SEQCFC) and to test its preliminary reliability and validity. Methods: A cross-sectional survey of 196 family caregivers (CGs) of people with dementia was conducted to determine the factor structure of the SEQCFC. Following factor analyses, preliminary testing was performed, including internal consistency, 4-week test-retest reliability, and construct and convergent validity. Results: Factor analyses with direct oblimin rotation were performed. Eight items were removed and five subscales (self-efficacy for gathering information about treatment, symptoms and health care; obtaining support; responding to behaviour disturbances; managing household, personal and medical care; and managing distress associated with caregiving) were identified. The Cronbach's alpha coefficients for the whole scale and for each subscale were all over 0.80. The 4-week test-retest reliabilities for the whole scale and for each subscale ranged from 0.64 to 0.85. The convergent validity was acceptable. Conclusions: Evidence from the preliminary testing of the SEQCFC was encouraging. A follow-up study using confirmatory factor analysis with a new sample from different recruitment centres in Shanghai will be conducted. Further psychometric testing of the questionnaire will be required for CGs from other regions of mainland China.
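The internal-consistency statistic reported above is straightforward to compute. Below is a minimal sketch of Cronbach's alpha for a block of questionnaire items; the Likert responses and item count are invented for illustration and are not the SEQCFC data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha: (k/(k-1)) * (1 - sum of item variances / total variance)."""
    items = np.asarray(items, dtype=float)       # shape: respondents x items
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the total scores
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

# Illustrative 5-point Likert responses (6 respondents x 4 items), not real data
responses = [[4, 5, 4, 4],
             [3, 3, 4, 3],
             [5, 5, 5, 4],
             [2, 3, 2, 2],
             [4, 4, 5, 4],
             [3, 4, 3, 3]]
print(round(cronbach_alpha(responses), 2))
```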
Abstract:
A complex attack is a sequence of temporally and spatially separated legal and illegal actions, each of which can be detected by various intrusion detection systems (IDSs) but which as a whole constitute a powerful attack. IDSs fall short of detecting and modeling complex attacks; therefore, new methods are required. This paper presents a formal methodology for modeling and detecting complex attacks in three phases: (1) we extend the basic attack tree (AT) approach to capture temporal dependencies between components and the expiration of an attack; (2) using the enhanced AT, we build a tree automaton that accepts a sequence of actions from input message streams from various sources if there is a traversal of the AT from leaves to root; and (3) we show how to construct an enhanced parallel automaton that has each tree automaton as a subroutine. We use simulation to test our methods, and provide a case study of representing attacks in WLANs.
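As a rough illustration of the enhanced attack-tree idea (temporal ordering of child actions plus an expiration window), here is a minimal sketch; the node structure, event format, and WLAN-style action labels are hypothetical stand-ins, not the paper's formalism:

```python
from dataclasses import dataclass, field

@dataclass
class ATNode:
    """Attack-tree node: a leaf matches an action label; an internal node is an
    ordered-AND that fires when its children fire in order within `window`."""
    label: str = ""
    children: list = field(default_factory=list)
    window: float = float("inf")   # expiration: maximum time span of the sub-attack

def fires(node, events):
    """Return the time at which `node` fires given events [(time, label), ...],
    or None if the (sub-)attack is not present in the stream."""
    if not node.children:                       # leaf: earliest matching action
        times = [t for t, lbl in events if lbl == node.label]
        return min(times) if times else None
    start = last = None
    for child in node.children:                 # temporal dependency: strict order
        t = fires(child, [e for e in events if last is None or e[0] > last])
        if t is None:
            return None
        start = t if start is None else start
        last = t
    return last if last - start <= node.window else None

# Toy complex attack: probe, then deauth, then inject, all within 10 seconds
root = ATNode(children=[ATNode("probe"), ATNode("deauth"), ATNode("inject")],
              window=10.0)
print(fires(root, [(1.0, "probe"), (3.0, "deauth"), (7.0, "inject")]))  # 7.0
```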
Abstract:
Many construction industry decision-makers believe there is a lack of off-site manufacture (OSM) adoption for non-residential construction in Australia. Identification of the construction business process was considered imperative in order to assist decision-makers to increase OSM utilisation. The premise that domain knowledge can be re-used to provide an intervention point in the construction process led a team of researchers to construct simple base-line process models for the complete construction process, segmented into six phases. Sixteen industry experts with domain knowledge were asked to review the base-line models for each construction phase to answer the question “Where in the process illustrated by this base-line model phase is an OSM task?”. Through an iterative and generative process, a number of OSM intervention points were identified and integrated into the process models. The re-use of industry expert domain knowledge provided suggestions for new ways to do basic tasks, thus facilitating changes to current practice. It is expected that implementation of the new processes will lead to systemic industry change and thus a growth in productivity due to increased adoption of OSM.
Abstract:
From human biomonitoring data that are increasingly collected in the United States, Australia, and other countries from large-scale field studies, we obtain snapshots of concentration levels of various persistent organic pollutants (POPs) within a cross-section of the population at different times. Not only can we observe the trends within this population over time, but we can also gain information going beyond the obvious time trends. By combining the biomonitoring data with pharmacokinetic modeling, we can reconstruct the time-variant exposure to individual POPs, determine their intrinsic elimination half-lives in the human body, and predict future levels of POPs in the population. Different approaches have been employed to extract information from human biomonitoring data. Pharmacokinetic (PK) models have been combined with longitudinal data [1], with single [2] or multiple [3] average concentrations of cross-sectional data (CSD), or with multiple CSD with or without empirical exposure data [4]. In the latter study, for the first time, the authors based their modeling outputs on two sets of CSD and empirical exposure data, which made it possible to further constrain the model outputs with an extensive body of empirical measurements. Here we use a PK model to analyze recent levels of PBDE concentrations measured in the Australian population. In this study, we are able to base our model results on four sets [5-7] of CSD; we focus on two PBDE congeners that have been shown [3,5,8-9] to differ in intake rates and half-lives, with BDE-47 being associated with high intake rates and a short half-life and BDE-153 with lower intake rates and a longer half-life. By fitting the model to PBDE levels measured in different age groups in different years, we determine the level of intake of BDE-47 and BDE-153, as well as the half-lives of these two chemicals in the Australian population.
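A toy version of the reconstruction described here can be sketched as a one-compartment PK model fitted to cross-sectional means by age group; the age groups, concentrations, and starting values below are invented for illustration and are not the study's data:

```python
import numpy as np
from scipy.optimize import curve_fit

LN2 = np.log(2.0)

def body_burden(age, intake, half_life):
    """One-compartment model: dB/dt = intake - k*B with B(0) = 0,
    where k = ln(2)/half_life; returns the burden at a given age (years)."""
    k = LN2 / half_life
    return (intake / k) * (1.0 - np.exp(-k * age))

# Illustrative cross-sectional means by age group (ng/g lipid), not real data
ages = np.array([5.0, 15.0, 30.0, 45.0, 60.0])
levels = np.array([8.0, 10.0, 11.0, 11.5, 11.8])

(intake, half_life), _ = curve_fit(body_burden, ages, levels, p0=(2.0, 4.0))
print(f"intake ~ {intake:.2f}/year, half-life ~ {half_life:.2f} years")
```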
Abstract:
Background: The 30-item USDI is a self-report measure that assesses depressive symptoms among university students. It consists of three correlated factors: Lethargy, Cognitive-Emotional, and Academic Motivation. The current research used confirmatory factor analysis to assess construct validity and determine whether the original factor structure would be replicated in a different sample. Psychometric properties were also examined. Method: Participants were 1148 students (mean age 22.84 years, SD = 6.85) across all faculties of a large Australian metropolitan university. Students completed a questionnaire comprising the USDI, the Depression Anxiety Stress Scale (DASS) and the Life Satisfaction Scale (LSS). Results: The correlated three-factor model was shown to be an acceptable fit to the data, indicating sound construct validity. Internal consistency of the scale was also demonstrated to be sound, with high Cronbach's alpha values. Temporal stability of the scale was shown to be strong through test-retest analysis. Finally, concurrent and discriminant validity were examined through correlations between the USDI and DASS subscales as well as the LSS, with sound results further supporting the construct validity of the scale. Cut-off points were also developed to aid total score interpretation. Limitations: Response rates are unclear. In addition, the representativeness of the sample could potentially be improved through targeted recruitment (i.e. reviewing the online sample statistics during data collection, examining representativeness trends, and addressing particular faculties within the university that were underrepresented). Conclusions: The USDI provides a valid and reliable method of assessing depressive symptoms among university students.
Abstract:
In recent years considerable attention has been paid to the numerical solution of stochastic ordinary differential equations (SODEs), as SODEs are often more appropriate than their deterministic counterparts in many modelling situations. However, unlike the deterministic case, numerical methods for SODEs are considerably less sophisticated, due to the difficulty in representing the (possibly large number of) random variable approximations to the stochastic integrals. Although Burrage and Burrage [High strong order explicit Runge-Kutta methods for stochastic ordinary differential equations, Applied Numerical Mathematics 22 (1996) 81-101] were able to construct strong local order 1.5 stochastic Runge-Kutta methods for certain cases, it is known that all extant stochastic Runge-Kutta methods suffer an order reduction down to strong order 0.5 if there is non-commutativity between the functions associated with the multiple Wiener processes. This order reduction, down to that of the Euler-Maruyama method, imposes severe difficulties in obtaining meaningful solutions in a reasonable time frame, and this paper attempts to circumvent these difficulties with some new techniques. An additional difficulty in solving SODEs arises even in the linear case, since it is not possible to write the solution analytically in terms of matrix exponentials unless there is a commutativity property between the functions associated with the multiple Wiener processes. Thus, in the present paper, the work of Magnus [On the exponential solution of differential equations for a linear operator, Communications on Pure and Applied Mathematics 7 (1954) 649-673] (applied to deterministic non-commutative linear problems) will first be applied to non-commutative linear SODEs, and methods of strong order 1.5 for arbitrary linear, non-commutative SODE systems will be constructed, hence giving an accurate approximation to the general linear problem. Secondly, for general nonlinear non-commutative systems with an arbitrary number (d) of Wiener processes, it is shown that strong local order 1 Runge-Kutta methods with d + 1 stages can be constructed by evaluating a set of Lie brackets as well as the standard function evaluations. A method is then constructed which can be efficiently implemented in a parallel environment for this arbitrary number of Wiener processes. Finally some numerical results are presented which illustrate the efficacy of these approaches. (C) 1999 Elsevier Science B.V. All rights reserved.
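For context on the order barrier discussed above, here is a minimal Euler-Maruyama sketch (strong order 0.5, the baseline that stochastic Runge-Kutta methods reduce to under non-commutativity) for a toy two-dimensional linear SODE with two non-commuting diffusion matrices; the matrices and step counts are arbitrary illustrations:

```python
import numpy as np

rng = np.random.default_rng(42)

def euler_maruyama(f, gs, x0, t_end, n_steps):
    """Strong order 0.5 scheme for dX = f(X)dt + sum_j g_j(X) dW_j,
    with len(gs) independent Wiener processes (the non-commutative case)."""
    dt = t_end / n_steps
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), size=len(gs))  # Wiener increments
        x = x + f(x) * dt + sum(g(x) * dWj for g, dWj in zip(gs, dW))
    return x

# Toy 2D linear SODE; B1 @ B2 != B2 @ B1, so the diffusion terms do not commute
A  = np.array([[0.0, 1.0], [-1.0, 0.0]])
B1 = np.array([[0.1, 0.0], [0.0, 0.2]])
B2 = np.array([[0.0, 0.3], [0.0, 0.0]])
x = euler_maruyama(lambda x: A @ x,
                   [lambda x: B1 @ x, lambda x: B2 @ x],
                   [1.0, 0.0], t_end=1.0, n_steps=1000)
```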
Abstract:
Organizations from every industry sector seek to enhance their business performance and competitiveness through the deployment of contemporary information systems (IS), such as Enterprise Systems (ERP). Investments in ERP are complex and costly, attracting scrutiny and pressure to justify their cost. Thus, IS researchers highlight the need for systematic evaluation of information system success, or impact, which has resulted in the introduction of varied models for evaluating information systems. One of these systematic measurement approaches is the IS-Impact Model introduced by a team of researchers at Queensland University of Technology (QUT) (Gable, Sedera, & Chan, 2008). The IS-Impact Model is conceptualized as a formative, multidimensional index that consists of four dimensions. Gable et al. (2008) define IS-Impact as "a measure at a point in time, of the stream of net benefits from the IS, to date and anticipated, as perceived by all key-user-groups" (p. 381). The IT Evaluation Research Program (ITE-Program) at QUT has grown the IS-Impact Research Track with the central goal of conducting further studies to enhance and extend the IS-Impact Model. The overall goal of the IS-Impact research track at QUT is "to develop the most widely employed model for benchmarking information systems in organizations for the joint benefit of both research and practice" (Gable, 2009). To achieve this, the IS-Impact research track advocates programmatic research guided by the principles of tenacity, holism, and generalizability through extension research strategies. This study was conducted within the IS-Impact Research Track to further generalize the IS-Impact Model by extending it to the Saudi Arabian context. According to Hofstede (2012), the national culture of Saudi Arabia is significantly different from the Australian national culture, making the Saudi Arabian culture an interesting context for testing the external validity of the IS-Impact Model. The study revisits the IS-Impact Model from the ground up. Rather than assume the existing instrument is valid in the new context, or simply assess its validity through quantitative data collection, the study takes a qualitative, inductive approach to re-assessing the necessity and completeness of existing dimensions and measures. This is done in two phases: an exploratory phase and a confirmatory phase. The exploratory phase addresses the first research question of the study: "Is the IS-Impact Model complete and able to capture the impact of information systems in Saudi Arabian organizations?". The content analysis used to analyze the Identification Survey data indicated that 2 of the 37 measures of the IS-Impact Model are not applicable to the Saudi Arabian context. Moreover, no new measures or dimensions were identified, evidencing the completeness and content validity of the IS-Impact Model. In addition, the Identification Survey data suggested several concepts related to IS-Impact, the most prominent of which was "Computer Network Quality" (CNQ). The literature supported the existence of a theoretical link between IS-Impact and CNQ (CNQ is viewed as an antecedent of IS-Impact). With the primary goal of validating the IS-Impact Model within its extended nomological network, CNQ was introduced to the research model. The confirmatory phase addresses the second research question of the study: "Is the Extended IS-Impact Model valid as a hierarchical multidimensional formative measurement model?".
The objective of the confirmatory phase was to test the validity of the IS-Impact Model and the CNQ Model. To achieve this, IS-Impact, CNQ, and IS-Satisfaction were operationalized in a survey instrument, and the research model was then assessed by employing the Partial Least Squares (PLS) approach. The CNQ model was validated as a formative model. Similarly, the IS-Impact Model was validated as a hierarchical multidimensional formative construct. However, the analysis indicated that one of the IS-Impact Model indicators was insignificant and could be removed from the model. Thus, the resulting Extended IS-Impact Model consists of 4 dimensions and 34 measures. Finally, the structural model was assessed against two aspects: explanatory and predictive power. The analysis revealed that the path coefficient between CNQ and IS-Impact is significant (t-value = 4.826) and relatively strong (β = 0.426), with CNQ explaining 18% of the variance in IS-Impact. These results supported the hypothesis that CNQ is an antecedent of IS-Impact. The study demonstrates that the quality of the computer network affects the quality of the Enterprise System (ERP) and consequently the impacts of the system. Therefore, practitioners should pay attention to computer network quality. Similarly, the path coefficient between IS-Impact and IS-Satisfaction was significant (t-value = 17.79) and strong (β = 0.744), with IS-Impact alone explaining 55% of the variance in IS-Satisfaction, consistent with the results of the original IS-Impact study (Gable et al., 2008). The research contributions include: (a) supporting the completeness and validity of the IS-Impact Model as a hierarchical multidimensional formative measurement model in the Saudi Arabian context, (b) operationalizing Computer Network Quality as conceptualized in the ITU-T Recommendation E.800 (ITU-T, 1993), (c) validating CNQ as a formative measurement model and as an antecedent of IS-Impact, and (d) conceptualizing and validating IS-Satisfaction as a reflective measurement model and as an immediate consequence of IS-Impact. The CNQ model provides a framework to perceptually measure Computer Network Quality from multiple perspectives. The CNQ model features an easy-to-understand, easy-to-use, and economical survey instrument.
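The kind of structural-model assessment reported here (standardized path coefficients, bootstrap t-values, variance explained) can be illustrated in miniature with ordinary regression on composite scores; this is a simplified stand-in for the PLS procedure, and all scores below are simulated, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

# Simulated standardized composite scores for the CNQ -> IS-Impact path
cnq = rng.standard_normal(n)
impact = 0.4 * cnq + rng.standard_normal(n)

def std_beta(x, y):
    """Standardized regression slope (single predictor) = Pearson correlation."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return float(x @ y) / len(x)

beta = std_beta(cnq, impact)
r_squared = beta ** 2                 # variance in IS-Impact explained by CNQ

# Bootstrap standard error -> t-value for the path coefficient
idx = rng.integers(0, n, size=(2000, n))
boot = np.array([std_beta(cnq[i], impact[i]) for i in idx])
t_value = beta / boot.std(ddof=1)
```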
Abstract:
Since the first destination image studies were published in the early 1970s, the field has become one of the most popular in the tourism literature. While reviews of the destination image literature show no commonly agreed conceptualisation of the construct, researchers have predominantly used structured questionnaires for measurement. There has been criticism that the way some of these scales have been selected means a greater likelihood of attributes being irrelevant to participants. This opens up the risk of stimulating uninformed responses. The issue of uninformed responses was first raised as a source of error 60 years ago. However, there has been little, if any, discussion in relation to destination image measurement, studies of which often require participants to provide opinion-driven rather than fact-based responses. This paper reports the trial of a ‘don’t know’ (DK) non-response option for participants in two destination image questionnaires. It is suggested the use of a DK option provides participants with an alternative to i) skipping the question, ii) using the scale midpoint to denote neutrality, or iii) providing an uninformed response. High levels of DK usage by participants can then alert the marketer to the need to improve awareness of destination performance on potentially salient attributes.
Abstract:
This chapter provides a theoretically and empirically grounded discussion of participatory research methodologies with respect to investigating the dynamic and evolving phenomenon of young people constructing identities and social relations in “iScapes”. We coin the term iScapes to refer to the online/offline interconnectedness of spaces in the fabric of everyday life. Specifically, we offer a critical analysis of the participatory research methods we used in our own research project to understand the ways in which high school students use new media and information and communication technologies (ICTs) to construct identities, form social relations, and engage in creative practices as part of their everyday lives. The chapter concludes with reflexive deliberations on our approach to participatory research that may benefit other researchers who share a similar interest in youth and new media.
Abstract:
The design and construction community has shown increasing interest in adopting building information models (BIMs). The richness of information provided by BIMs has the potential to streamline the design and construction processes by enabling enhanced communication, coordination, automation and analysis. However, there are many challenges in extracting construction-specific information out of BIMs. In most cases, construction practitioners have to manually identify the required information, which is inefficient and prone to error, particularly for complex, large-scale projects. This paper describes the process and methods we have formalized to partially automate the extraction and querying of construction-specific information from a BIM. We describe methods for analyzing a BIM to query for spatial information that is relevant for construction practitioners, and that is typically represented implicitly in a BIM. Our approach integrates ifcXML data and other spatial data to develop a richer model for construction users. We employ custom 2D topological XQuery predicates to answer a variety of spatial queries. The validation results demonstrate that this approach provides a richer representation of construction-specific information compared to existing BIM tools.
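To give a flavour of the 2D topological predicates mentioned above, here is a minimal sketch over axis-aligned component footprints; it is written in Python rather than XQuery, and the component geometry is invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Footprint:
    """Axis-aligned 2D footprint of a building component (metres)."""
    xmin: float
    ymin: float
    xmax: float
    ymax: float

def overlaps(a, b):
    """Topological predicate: the interiors of a and b intersect."""
    return a.xmin < b.xmax and b.xmin < a.xmax and a.ymin < b.ymax and b.ymin < a.ymax

def _closed_intersect(a, b):
    return (a.xmin <= b.xmax and b.xmin <= a.xmax and
            a.ymin <= b.ymax and b.ymin <= a.ymax)

def touches(a, b):
    """Topological predicate: boundaries meet but interiors are disjoint."""
    return _closed_intersect(a, b) and not overlaps(a, b)

# Hypothetical wall and slab footprints sharing an edge
wall = Footprint(0.0, 0.0, 5.0, 0.2)
slab = Footprint(0.0, 0.2, 5.0, 4.0)
assert touches(wall, slab) and not overlaps(wall, slab)
```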
Abstract:
This work investigates the accuracy and efficiency tradeoffs between centralized and collective (distributed) algorithms for (i) sampling and (ii) n-way data analysis techniques in multidimensional stream data, such as Internet chatroom communications. Its contributions are threefold. First, we use the Kolmogorov-Smirnov goodness-of-fit test to show that the statistical differences between real data obtained by collective sampling in the time dimension from multiple servers and data obtained from a single server are insignificant. Second, we show using the real data that collective data analysis of 3-way data arrays (users x keywords x time), known as high-order tensors, is more efficient than centralized algorithms with respect to both space and computational cost. Furthermore, we show that this gain is obtained without loss of accuracy. Third, we examine the sensitivity of the collective construction and analysis of high-order data tensors to the choice of server selection and sampling window size. We construct 4-way tensors (users x keywords x time x servers) and analyze them to show the impact of server and window size selections on the results.
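The goodness-of-fit comparison and the tensor construction can both be sketched briefly; the exponential inter-arrival data below is synthetic, standing in for real chatroom message streams:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Synthetic message inter-arrival times: one server vs. the pooled servers
single_server = rng.exponential(1.0, 500)
all_servers = rng.exponential(1.0, 2000)

stat, p_value = ks_2samp(single_server, all_servers)
# A large p-value suggests no significant difference between sampling schemes

# A 3-way (users x keywords x time) array as a dense tensor of counts
tensor = np.zeros((50, 100, 24))
# tensor[u, k, t] += 1 for each occurrence of keyword k by user u in hour t
```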
Abstract:
Identifying the design features that impact construction is essential to developing cost-effective and constructible designs. The similarity of building components is a critical design feature that affects method selection, productivity, and ultimately construction cost and schedule performance. However, there is limited understanding of what constitutes similarity in the design of building components and limited computer-based support to identify this feature in a building product model. This paper contributes a feature-based framework for representing and reasoning about component similarity that builds on ontological modelling, model-based reasoning and cluster analysis techniques. It describes the ontology we developed to characterize component similarity in terms of the component attributes, the direction, and the degree of variation. It also describes the generic reasoning process we formalized to identify component similarity in a standard product model based on practitioners' varied preferences. The generic reasoning process evaluates the geometric, topological, and symbolic similarities between components, creates groupings of similar components, and quantifies the degree of similarity. We implemented this reasoning process in a prototype cost estimating application, which creates and maintains cost estimates based on a building product model. Validation studies of the prototype system provide evidence that the framework is general and enables a more accurate and efficient cost estimating process.
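As a toy instance of grouping similar components by attribute-space distance, the sketch below clusters hypothetical precast panel dimensions; the attribute set and threshold are illustrative and far simpler than the ontology-driven reasoning the paper describes:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# Hypothetical component attributes: [length, width, thickness] per panel (m)
components = np.array([
    [6.0, 2.4, 0.20],
    [6.0, 2.4, 0.20],
    [6.1, 2.4, 0.20],
    [4.0, 1.2, 0.15],
    [4.1, 1.2, 0.15],
])

# Average-linkage clustering; cut the tree at a distance threshold so that
# components whose attributes vary by less than ~0.5 m fall in one group
Z = linkage(components, method="average")
groups = fcluster(Z, t=0.5, criterion="distance")
# groups assigns the three large panels to one cluster and the two small
# panels to another, i.e. two sets of near-identical components
```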
Abstract:
In recent years, there has been a growing interest from the design and construction community in adopting Building Information Models (BIM). BIM provides semantically rich information models that explicitly represent both 3D geometric information (e.g., component dimensions) and non-geometric properties (e.g., material properties). While the richness of design information offered by BIM is evident, there are still tremendous challenges in getting construction-specific information out of BIM, limiting the usability of these models for construction. In this paper, we describe our approach for extracting construction-specific design conditions from a BIM model based on user-defined queries. This approach leverages an ontology of features we are developing to formalize the design conditions that affect construction. Our current implementation analyzes the component geometry and topological relationships between components in a BIM model represented using the Industry Foundation Classes (IFC) to identify construction features. We describe the reasoning process implemented to extract these construction features, and provide a critique of the IFC's ability to support the querying process. We use examples from two case studies to illustrate the construction features, the querying process, and the challenges involved in deriving construction features from an IFC model.
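A first step in querying an IFC model for construction features is grouping components by entity type. The fragment below sketches this over a tiny hand-written ifcXML-style snippet; real ifcXML files carry XML namespaces, references, and far richer structure, so the element names and layout here should be treated as assumptions:

```python
import xml.etree.ElementTree as ET

# Hypothetical, heavily simplified ifcXML-style fragment (not a real file)
doc = """<ifc>
  <IfcWallStandardCase id="w1"/>
  <IfcWallStandardCase id="w2"/>
  <IfcSlab id="s1"/>
</ifc>"""

root = ET.fromstring(doc)

# Index component ids by entity type to support type-based queries
by_type = {}
for elem in root.iter():
    by_type.setdefault(elem.tag, []).append(elem.get("id"))

# A user-defined query: all walls, as candidates for feature extraction
walls = by_type.get("IfcWallStandardCase", [])  # -> ['w1', 'w2']
```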
Abstract:
The method of lines is a standard method for advancing the solution of partial differential equations (PDEs) in time. In one sense, the method applies equally well to space-fractional PDEs as it does to integer-order PDEs. However, there is a significant challenge when solving space-fractional PDEs in this way, owing to the non-local nature of the fractional derivatives: each equation in the resulting semi-discrete system involves contributions from every spatial node in the domain. This has important consequences for the efficiency of the numerical solver, especially when the system is large. First, the Jacobian matrix of the system is dense, and hence methods that avoid the need to form and factorise this matrix are preferred. Second, since the cost of evaluating the discrete equations is high, it is essential to minimise the number of evaluations required to advance the solution in time. In this paper, we show how an effective preconditioner is essential for improving the efficiency of the method of lines for solving a quite general two-sided, nonlinear space-fractional diffusion equation. A key contribution is to show how to construct suitable banded approximations to the system Jacobian for preconditioning purposes that permit high orders and large stepsizes to be used in the temporal integration, without requiring dense matrices to be formed. The results of numerical experiments are presented that demonstrate the effectiveness of this approach.
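A minimal sketch of the banded-preconditioner idea, simplified to a one-sided, linear fractional diffusion operator rather than the two-sided nonlinear problem treated in the paper: build the dense shifted Grünwald-Letnikov matrix, take a banded cut of the implicit-step matrix, and use its factorisation to precondition a Krylov solve. Grid sizes, bandwidth, and the initial condition below are arbitrary choices:

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import LinearOperator, gmres, splu

def gl_weights(alpha, m):
    """Grunwald-Letnikov weights: g_0 = 1, g_k = g_{k-1}*(k-1-alpha)/k."""
    g = np.empty(m)
    g[0] = 1.0
    for k in range(1, m):
        g[k] = g[k - 1] * (k - 1 - alpha) / k
    return g

n, alpha = 200, 1.8
dx, dt = 1.0 / n, 1e-3
g = gl_weights(alpha, n + 1)

# Dense shifted Grunwald matrix for the left-sided fractional derivative:
# coefficient of u_j in the derivative at node i is g_{i-j+1}, j <= i+1
A = np.zeros((n, n))
for i in range(n):
    for j in range(min(i + 2, n)):
        A[i, j] = g[i - j + 1]
A /= dx ** alpha

M = np.eye(n) - dt * A                    # implicit Euler step: M u_new = u_old

# Banded approximation of M (bandwidth p), factorised once as preconditioner
p = 5
offsets = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
lu = splu(csc_matrix(np.where(offsets <= p, M, 0.0)))
prec = LinearOperator((n, n), matvec=lu.solve)

x = np.linspace(0.0, 1.0, n)
u_old = np.exp(-((x - 0.5) ** 2) / 0.01)  # Gaussian initial condition
u_new, info = gmres(M, u_old, M=prec)     # info == 0 on convergence
```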
Abstract:
This paper presents a formal methodology for attack modeling and detection for networks. Our approach has three phases. First, we extend the basic attack tree approach [1] to capture (i) the temporal dependencies between components, and (ii) the expiration of an attack. Second, using the enhanced attack trees (EAT) we build a tree automaton that accepts a sequence of actions from the input stream if there is a traversal of an attack tree from leaves to the root node. Finally, we show how to construct an enhanced parallel automaton (EPA) that has each tree automaton as a subroutine and can process the input stream by considering multiple trees simultaneously. As a case study, we show how to represent attacks in IEEE 802.11 networks and construct an EPA for them.