940 results for DRAGON (Computer system)
Abstract:
This research-in-progress paper reports preliminary findings of a study designed to identify the characteristics of an expert in the discipline of Information Systems (IS). The paper presents a formative research model that depicts the characteristics of an expert with three additive constructs, using concepts derived from psychology, knowledge management and social-behaviour research. The paper then explores the formation and application of 'expertise' using four investigative questions in the context of system evaluations. Data have been gathered from 220 respondents representing three medium-sized companies in India that use the SAP Enterprise Resource Planning system. The paper summarizes the planned data analyses for construct validation, model testing and model application. A validated construct of IS expertise will have a wide range of implications for research and practice.
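As a concrete illustration of the construct-validation step mentioned above, the sketch below computes Cronbach's alpha, a standard internal-consistency check for survey-based constructs. This is a minimal sketch assuming a respondents-by-items score matrix; the paper does not specify its validation procedure, so the data and item counts here are hypothetical.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Internal consistency of a construct's survey items.

    scores: (n_respondents, n_items) matrix of Likert-scale responses.
    """
    k = scores.shape[1]                         # number of items in the construct
    item_vars = scores.var(axis=0, ddof=1)      # per-item variance
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical example: 220 respondents, 5 items for one expertise construct.
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(220, 1))
items = np.clip(base + rng.integers(-1, 2, size=(220, 5)), 1, 5)
print(f"Cronbach's alpha: {cronbach_alpha(items.astype(float)):.2f}")
```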
Abstract:
Computational Intelligence Systems (CIS) are a class of advanced software that occupy an important position in solving single-objective, reverse/inverse and multi-objective design problems in engineering. This paper hybridises a CIS for optimisation with the concept of Nash equilibrium, used as an optimisation pre-conditioner to accelerate the optimisation process. The hybridised CIS (Hybrid Intelligence System), coupled to a Finite Element Analysis (FEA) tool and a Computer Aided Design (CAD) system, GiD, is applied to solve an inverse engineering design problem: the reconstruction of High Lift Systems (HLS). Numerical results obtained by the hybridised CIS are compared to those obtained by the original CIS. The benefits of using the concept of Nash equilibrium are clearly demonstrated in terms of solution accuracy and optimisation efficiency.
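The abstract does not detail the Nash pre-conditioning step, but a common formulation splits the design variables between two "players", each of which optimises its own subset while the other's subset is held fixed, iterating until neither can improve; the equilibrium point then seeds the global optimiser. The sketch below illustrates this idea on a toy objective; the objective function, the variable split and the solver choice are assumptions for illustration, not the paper's actual setup.

```python
import numpy as np
from scipy.optimize import minimize

def objective(x: np.ndarray) -> float:
    # Toy stand-in for the expensive FEA-based inverse-design objective.
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2 + 0.5 * x[0] * x[1]

def nash_preconditioner(x0, n_rounds=20, tol=1e-8):
    """Two players alternately optimise their own variable subset."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_rounds):
        x_prev = x.copy()
        # Player 1 optimises x[0] with x[1] frozen.
        r1 = minimize(lambda a: objective(np.array([a[0], x[1]])), [x[0]])
        x[0] = r1.x[0]
        # Player 2 optimises x[1] with x[0] frozen.
        r2 = minimize(lambda b: objective(np.array([x[0], b[0]])), [x[1]])
        x[1] = r2.x[0]
        if np.linalg.norm(x - x_prev) < tol:  # Nash equilibrium reached
            break
    return x

x_eq = nash_preconditioner([5.0, 5.0])
# Use the equilibrium as a warm start for the full (cooperative) optimisation.
result = minimize(objective, x_eq)
print("equilibrium seed:", x_eq, "-> final optimum:", result.x)
```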
Abstract:
This paper discusses the effectiveness of quality-principle and quality management system implementation and its relationship with the performance of ISO9000-certified Indonesian contractors. It also discusses the statistical relationship between quality management systems (QMSs) and key performance indicators (KPIs) amongst a large sample of Indonesian construction companies. Data were collected from questionnaire surveys of Quality Managers, Managers, and Project and Site Engineers representing 77 different companies. Results indicate that even though some contractors have not yet implemented an effective QMS, most KPIs of the respondent companies remain at a high level of performance. The statistical results show a significant relationship between the ISO9000 QMS principle variables and the contractors' KPIs. These results suggest that raising the implementation level of QMS principles can increase KPIs, although much effort is still required for Indonesian contractors to implement QMS principles fully and thus substantially improve performance against KPIs.
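The abstract reports a significant statistical relationship between QMS principle scores and KPIs without naming the test used. Below is a minimal sketch of one plausible analysis, rank correlation on per-company survey scores; the variable names and data are hypothetical, not the paper's.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_companies = 77

# Hypothetical per-company mean scores (1-5 Likert) for one QMS principle
# and one KPI, generated with a built-in positive association.
qms_score = rng.uniform(2.0, 5.0, n_companies)
kpi_score = np.clip(0.7 * qms_score + rng.normal(0, 0.5, n_companies), 1, 5)

rho, p_value = spearmanr(qms_score, kpi_score)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Relationship is statistically significant at the 5% level.")
```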
Abstract:
An automatic approach to road lane-marking extraction from high-resolution aerial images is proposed, which can automatically detect road surfaces in rural areas based on hierarchical image analysis. The procedure is facilitated by road centrelines obtained from low-resolution images. The lane markings are then extracted on the generated road surfaces with 2D Gabor filters. The proposed method is applied to aerial images of the Bruce Highway around Gympie, Queensland. Evaluation of the generated road surfaces and lane markings over four representative test fields validates the proposed method.
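As background for the extraction step, the sketch below builds a bank of oriented 2D Gabor kernels with OpenCV and keeps the maximum response across orientations, a common way to highlight bright, elongated structures such as painted markings. It illustrates the general technique only; the paper's kernel parameters are not given, so the values and the input filename here are assumptions.

```python
import cv2
import numpy as np

def gabor_lane_response(gray: np.ndarray) -> np.ndarray:
    """Max response over a bank of oriented 2D Gabor filters."""
    responses = []
    for theta in np.arange(0, np.pi, np.pi / 8):   # 8 orientations
        kernel = cv2.getGaborKernel(
            ksize=(21, 21), sigma=4.0, theta=theta,
            lambd=10.0, gamma=0.5, psi=0.0, ktype=cv2.CV_32F)
        kernel /= np.abs(kernel).sum()             # normalise response scale
        responses.append(cv2.filter2D(gray, cv2.CV_32F, kernel))
    return np.max(responses, axis=0)

# Hypothetical input tile clipped from a road-surface region.
gray = cv2.imread("road_tile.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
response = gabor_lane_response(gray)
# Simple threshold: keep pixels responding well above the mean.
markings = (response > response.mean() + 2 * response.std()).astype(np.uint8)
```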
Abstract:
Electricity has been the major source of power in most railway systems. Reliable, efficient and safe power distribution to the trains is vitally important to the overall quality of railway service. Like any large-scale engineering system, the design, operation and planning of traction power systems rely heavily on computer simulation. This paper reviews the major features of modelling and the general practices for traction power system simulation, introduces the development of the latest simulation approach, and discusses simulation results and practical applications. Remarks are also given on future challenges in traction power system simulation.
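Traction power simulators typically solve a network model at each time step, with substations as sources and trains as moving loads. The sketch below solves a heavily simplified DC feeder by nodal analysis, two substations at the ends and trains drawing fixed currents in between; all parameters and the modelling choices are illustrative assumptions, not values or methods from the paper.

```python
import numpy as np

# Simplified 1.5 kV DC feeder: substations at both ends, trains in between.
V_SUB = 1500.0        # substation no-load voltage (V), assumed
R_PER_KM = 0.03       # overhead line resistance (ohm/km), assumed
positions_km = [0.0, 2.5, 4.0, 7.0, 10.0]          # nodes: sub, 3 trains, sub
train_current = [0.0, 800.0, 1200.0, 600.0, 0.0]   # load current per node (A)

n = len(positions_km)
G = np.zeros((n, n))               # nodal conductance matrix
I = -np.array(train_current)       # loads draw current, so injection is negative

for k in range(n - 1):             # series line sections between adjacent nodes
    g = 1.0 / (R_PER_KM * (positions_km[k + 1] - positions_km[k]))
    G[k, k] += g; G[k + 1, k + 1] += g
    G[k, k + 1] -= g; G[k + 1, k] -= g

# Fix substation nodes (0 and n-1) at V_SUB, solve the interior voltages.
fixed = [0, n - 1]
free = [i for i in range(n) if i not in fixed]
rhs = I[free] - G[np.ix_(free, fixed)] @ np.full(len(fixed), V_SUB)
V = np.full(n, V_SUB)
V[free] = np.linalg.solve(G[np.ix_(free, free)], rhs)
print("node voltages (V):", np.round(V, 1))
```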
Abstract:
Existing recommendation systems often recommend products to users by capturing item-to-item and user-to-user similarity measures. These types of recommendation systems become inefficient in people-to-people networks, where people-to-people recommendation requires a two-way relationship. Also, existing recommendation methods use traditional two-dimensional models to find interrelationships between alike users and items. Two-dimensional models cannot adequately capture a people-to-people network because the latent correlations between people and their attributes are not utilized. In this paper, we propose a novel tensor decomposition-based recommendation method for recommending people to people based on users' profiles and their interactions. People-to-people network data is multi-dimensional; when modeled using vector-based methods it tends to suffer information loss, since such methods capture either the interactions or the attributes of the users but not both. This paper utilizes tensor models, which have the ability to correlate and find latent relationships between similar users based on both kinds of information, user interactions and user attributes, in order to generate recommendations. Empirical analysis is conducted on a real-life online dating dataset. As the results demonstrate, tensor modeling and decomposition enable the identification of latent correlations between people based on their attributes and interactions in the network, and quality recommendations are derived using the 'alike' users concept.
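To make the tensor idea concrete, the sketch below builds a small (user × user × interaction-type) tensor and factorises it with a CP (PARAFAC) decomposition using the tensorly library; reconstructed scores for unobserved pairs are then ranked as recommendations. The tensor layout, rank and library choice are assumptions for illustration, not the paper's exact model.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

rng = np.random.default_rng(2)
n_users, n_modes = 30, 3   # interaction types: e.g. view, message, reply

# Hypothetical sparse binary tensor: T[i, j, k] = 1 if user i directed
# interaction type k at user j.
T = (rng.random((n_users, n_users, n_modes)) > 0.9).astype(float)

# Rank-5 CP decomposition captures latent correlations across both
# the pair structure and the interaction types simultaneously.
cp = parafac(tl.tensor(T), rank=5, n_iter_max=200, init="random")
scores = tl.cp_to_tensor(cp)

# Recommend for user 0: highest aggregate predicted score over all
# interaction types, excluding already-contacted users and self.
agg = scores[0].sum(axis=1)
agg[T[0].sum(axis=1) > 0] = -np.inf
agg[0] = -np.inf
print("top-5 candidates for user 0:", np.argsort(agg)[::-1][:5])
```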
Abstract:
Many modern business environments employ software to automate the delivery of workflows; however, workflow design and generation remain a laborious technical task for domain specialists. Several different approaches have been proposed for deriving workflow models. Some rely on process data mining, whereas others derive workflow models from operational structures, domain-specific knowledge, or workflow model compositions from knowledge bases. Many approaches draw on principles from automatic planning, but remain conceptual in nature and lack mathematical justification. In this paper we present a mathematical framework for deducing tasks in workflow models from plans in mechanistic or strongly controlled work environments, with a focus on automatic plan generation. In addition, we introduce an associative composition operator that permits crisp hierarchical task compositions for workflow models through a set of mathematical deduction rules. The result is a logical framework that can be used to prove tasks in workflow hierarchies from operational information about work processes and machine configurations in controlled or mechanistic work environments.
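The abstract's central algebraic claim is the associativity of the composition operator, which is what licenses re-bracketing of hierarchical task compositions. In notation assumed here purely for illustration (the paper's own symbols are not given), with tasks t and composition operator ∘, the property reads:

```latex
% Associativity of task composition: any bracketing of a task
% sequence yields the same composite workflow task.
(t_1 \circ t_2) \circ t_3 \;=\; t_1 \circ (t_2 \circ t_3)
% Hence an n-fold composition t_1 \circ t_2 \circ \cdots \circ t_n
% is well defined without parentheses, so sub-workflows can be
% grouped into a hierarchy in any order.
```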
Abstract:
The widespread development of Decision Support Systems (DSSs) in construction indicates that the evaluation of such software has become more important than before. However, most research in the construction discipline has not attempted to assess their usability. Therefore, little is known about how to properly evaluate a DSS for a specific problem. In this paper, we present a practical framework that can guide DSS evaluation. It focuses on how to evaluate software designed specifically for the consultant selection problem. The framework features two main components, i.e. Sub-system Validation and Face Validation. Two case studies of consultant selection at the Malaysian Department of Irrigation and Drainage were integrated into this framework. Interdisciplinary areas such as Software Engineering, Human Computer Interaction (HCI) and Construction Project Management underpin the discussion of the paper. It is anticipated that this work can foster better DSS development and quality decision making that accurately meets the client's expectations and needs.
Abstract:
Trusted health care outcomes are patient centric. Requirements to ensure both the quality and the sharing of patients' health records are key to better clinical decision making. In the context of maintaining quality health care, the sharing of data and information between professionals and patients is paramount. Such information sharing is challenging and costly if patients' trust and institutional accountability are not established. The establishment of an Information Accountability Framework (IAF) is the approach taken in this paper. The concepts behind the IAF requirements are transparent responsibilities, relevance of the information being used, and the establishment and evidence of accountability, all of which lead to the desired outcome of a Trusted Health Care System. Once the IAF is in place, trust between the public and professionals can be built. Preserving the confidentiality and integrity of patients' information will lead to trusted health care outcomes.
Study of the effectiveness of outrigger systems for high-rise composite buildings in cyclonic regions
Abstract:
The demand for taller structures is becoming imperative almost everywhere in the world, in addition to the challenges of material and labor cost, project timelines, etc. This paper presents a study undertaken in view of the challenging nature of high-rise construction, for which there are no generic rules for deflection minimization and frequency control. The effects of cyclonic wind and of providing outriggers on 28-storey, 42-storey and 57-storey buildings are examined, and conclusions are drawn that pave the way for researchers to conduct further study in this particular area of civil engineering. The results show that plan dimensions have a vital impact as structural height increases: increasing the height while keeping the plan dimensions the same leads to a reduction in lateral rigidity. To achieve the required stiffness, increased bracing sizes as well as the introduction of additional lateral load-resisting systems, such as belt trusses and outriggers, are required.
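The stiffness observation above follows from simple cantilever behaviour: for a uniform lateral load w, the tip deflection of a cantilever of height H grows as Δ = wH⁴/(8EI), so adding height at fixed plan dimensions (fixed EI) erodes lateral rigidity rapidly. The sketch below evaluates this idealised formula for the three building heights; the wind load, stiffness and storey-height values are illustrative assumptions, not the paper's model data.

```python
# Idealised check: the building as a uniform cantilever under uniform
# wind load; tip deflection Delta = w * H**4 / (8 * E * I).
E = 30e9             # modulus of elasticity (Pa), assumed
I = 500.0            # second moment of area of lateral system (m^4), assumed
w = 25e3             # uniform wind load along the height (N/m), assumed
STOREY_HEIGHT = 3.5  # metres per storey, assumed

for storeys in (28, 42, 57):
    H = storeys * STOREY_HEIGHT
    delta = w * H**4 / (8 * E * I)
    print(f"{storeys:2d} storeys (H = {H:5.1f} m): "
          f"tip deflection = {delta * 1000:6.1f} mm, H/delta = {H / delta:6.0f}")
```

With these numbers the 57-storey case deflects roughly seventeen times more than the 28-storey case, which is why the taller models need belt trusses and outriggers to recover stiffness.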
Abstract:
Existing secure software development principles tend to focus on coding vulnerabilities, such as buffer or integer overflows, that apply to individual program statements, or issues associated with the run-time environment, such as component isolation. Here we instead consider software security from the perspective of potential information flow through a program’s object-oriented module structure. In particular, we define a set of quantifiable "security metrics" which allow programmers to quickly and easily assess the overall security of a given source code program or object-oriented design. Although measuring quality attributes of object-oriented programs for properties such as maintainability and performance has been well-covered in the literature, metrics which measure the quality of information security have received little attention. Moreover, existing security-relevant metrics assess a system either at a very high level, i.e., the whole system, or at a fine level of granularity, i.e., with respect to individual statements. These approaches make it hard and expensive to recognise a secure system from an early stage of development. Instead, our security metrics are based on well-established compositional properties of object-oriented programs (i.e., data encapsulation, cohesion, coupling, composition, extensibility, inheritance and design size), combined with data flow analysis principles that trace potential information flow between high- and low-security system variables. We first define a set of metrics to assess the security quality of a given object-oriented system based on its design artifacts, allowing defects to be detected at an early stage of development. We then extend these metrics to produce a second set applicable to object-oriented program source code. The resulting metrics make it easy to compare the relative security of functionally-equivalent system designs or source code programs so that, for instance, the security of two different revisions of the same system can be compared directly. This capability is further used to study the impact of specific refactoring rules on system security more generally, at both the design and code levels. By measuring the relative security of various programs refactored using different rules, we thus provide guidelines for the safe application of refactoring steps to security-critical programs. Finally, to make it easy and efficient to measure a system design or program’s security, we have also developed a stand-alone software tool which automatically analyses and measures the security of UML designs and Java program code. The tool’s capabilities are demonstrated by applying it to a number of security-critical system designs and Java programs. Notably, the validity of the metrics is demonstrated empirically through measurements that confirm our expectation that program security typically improves as bugs are fixed, but worsens as new functionality is added.
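The abstract grounds its metrics in design-level properties such as data encapsulation and coupling. As a hedged illustration only (the paper's actual metric formulas are not reproduced here), the sketch below computes one simple encapsulation-style ratio over a toy class model: the fraction of security-classified attributes that a design exposes publicly, where a lower value suggests less potential information flow out of high-security state.

```python
from dataclasses import dataclass, field

@dataclass
class Attribute:
    name: str
    public: bool      # accessible outside the declaring class
    classified: bool  # holds high-security data

@dataclass
class ClassDesign:
    name: str
    attributes: list = field(default_factory=list)

def classified_exposure(classes) -> float:
    """Share of classified attributes that are publicly accessible.

    0.0 means every classified attribute is encapsulated (better);
    1.0 means all classified state leaks through the public interface.
    """
    classified = [a for c in classes for a in c.attributes if a.classified]
    if not classified:
        return 0.0
    return sum(a.public for a in classified) / len(classified)

# Hypothetical design fragment.
design = [
    ClassDesign("Account", [Attribute("balance", False, True),
                            Attribute("owner", True, False)]),
    ClassDesign("Session", [Attribute("token", True, True)]),
]
print(f"classified attribute exposure: {classified_exposure(design):.2f}")
```

A metric defined this way can be recomputed for two functionally-equivalent designs, or before and after a refactoring step, to compare their relative security in the spirit the abstract describes.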
Abstract:
A new system is described for estimating volume from a series of multiplanar 2D ultrasound images. Ultrasound images are captured using a personal computer video digitizing card, and an electromagnetic localization system is used to record the pose of the ultrasound images. The accuracy of the system was assessed by scanning four groups of ten cadaveric kidneys on four different ultrasound machines. Scan image planes were oriented either radially, in parallel, or slanted at 30° to the vertical. The cross-sectional images of the kidneys were traced using a mouse and the outline points transformed to 3D space using the Fastrak position and orientation data. Points on adjacent region-of-interest outlines were connected to form a triangle mesh and the volume of the kidneys estimated using the ellipsoid, planimetry, tetrahedral and ray tracing methods. There was little difference between the results for the different scan techniques or volume estimation algorithms, although, perhaps as expected, the ellipsoid results were the least precise. For radial scanning and ray tracing, the mean and standard deviation of the percentage errors for the four different machines were as follows: Hitachi EUB-240, −3.0 ± 2.7%; Tosbee RM3, −0.1 ± 2.3%; Hitachi EUB-415, 0.2 ± 2.3%; Acuson, 2.7 ± 2.3%.
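For reference, a tetrahedral method of the kind mentioned above is commonly implemented by summing the signed volumes of tetrahedra formed between each mesh triangle and a fixed origin, which for a closed triangle mesh yields the enclosed volume. The sketch below shows this standard computation with a unit-cube sanity check; it is a generic illustration, not the paper's exact algorithm.

```python
import numpy as np

def mesh_volume(vertices: np.ndarray, faces: np.ndarray) -> float:
    """Volume enclosed by a closed triangle mesh.

    Sums the signed volume v0 . (v1 x v2) / 6 of the tetrahedron formed
    by each face and the origin; contributions outside the surface
    cancel, leaving the enclosed volume (winding must be consistent).
    """
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    signed = np.einsum("ij,ij->i", v0, np.cross(v1, v2)) / 6.0
    return abs(signed.sum())

# Sanity check on a unit cube (12 triangles, outward-facing winding).
verts = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)],
                 dtype=float)
faces = np.array([[0, 1, 3], [0, 3, 2], [4, 6, 7], [4, 7, 5],
                  [0, 4, 5], [0, 5, 1], [2, 3, 7], [2, 7, 6],
                  [0, 2, 6], [0, 6, 4], [1, 5, 7], [1, 7, 3]])
print(mesh_volume(verts, faces))  # -> 1.0
```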