830 results for AUTOMATED DOCKING


Relevance:

20.00%

Publisher:

Abstract:

Recommender systems are one of the recent inventions for dealing with the ever-growing information overload that accompanies the selection of goods and services in a global economy. Collaborative Filtering (CF) is one of the most popular techniques in recommender systems. CF recommends items to a target user based on the preferences of a set of similar users, known as neighbours, generated from a database of past users' preferences. With sufficient historical rating data its performance is promising, but research shows that it performs very poorly in cold-start situations where little prior rating data exists. As an alternative to ratings, trust between users can be used to choose neighbours for making recommendations. Better recommendations can be achieved using an inferred trust network that mimics real-world "friend of a friend" recommendations. Extending the neighbourhood in this way requires an effective trust inference technique. This thesis proposes a trust inference technique called Directed Series Parallel Graph (DSPG), which performs better than other popular trust inference algorithms such as TidalTrust and MoleTrust. Another problem is that reliable explicit trust data is not always available. In real life, people trust "word of mouth" recommendations made by people with similar interests, and recommender systems often rely on the same assumption. Through a survey, we confirm that interest similarity has a positive relationship with trust and can therefore be used to generate a trust network for recommendation. In this research we also propose a new method, SimTrust, for developing trust networks based on users' interest similarity in the absence of explicit trust data. To identify interest similarity we use users' personalised tagging information; however, we are interested in which resources a user chooses to tag rather than in the text of the tags applied. The commonalities among the resources tagged by different users can then be used to form the neighbourhoods used in the automated recommender system. Our experimental results show that the proposed tag-similarity-based method outperforms the traditional rating-based collaborative filtering approach.
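
As a rough illustration of this resource-overlap idea (a hypothetical sketch, not the thesis's actual SimTrust algorithm), neighbours can be ranked by the Jaccard overlap between the sets of resources users have tagged; all user names and resource identifiers below are invented.

```python
# Sketch: neighbour selection from tagged-resource overlap (a simplified,
# hypothetical stand-in for the SimTrust idea described above).

# Each user maps to the set of resources they have tagged; what matters is
# *which* resources were tagged, not the tag text itself.
tagged = {
    "alice": {"r1", "r2", "r3"},
    "bob":   {"r2", "r3", "r4"},
    "carol": {"r7", "r8"},
}

def jaccard(a: set, b: set) -> float:
    """Overlap of two users' tagged-resource sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def neighbours(user: str, k: int = 2) -> list[str]:
    """The k users whose tagged resources overlap most with `user`'s."""
    scores = {u: jaccard(tagged[user], rs)
              for u, rs in tagged.items() if u != user}
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(neighbours("alice"))  # ['bob', 'carol']
```

The target user's recommendations would then be drawn from the items preferred by these tag-similar neighbours, precisely where rating data is too sparse for classical CF.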

Relevance:

20.00%

Publisher:

Abstract:

Through the rise of cloud computing, on-demand applications, and business networks, services are increasingly being exposed and delivered on the Internet and through mobile communications. So far, services have mainly been described through technical interface descriptions; the description of business details, such as pricing, service levels, or licensing, has been neglected and is therefore hard for service consumers to process automatically. Third-party intermediaries, such as brokers, cloud providers, or channel partners, are also interested in these business details in order to extend services and their delivery and thus further monetize them. In this paper, the constructivist design of the Unified Service Description Language (USDL), aimed at describing services across the human-to-automation continuum, is presented. The proposal of USDL follows well-defined requirements which are expressed against a common service discourse and synthesized from currently available service description efforts. USDL's concepts and modules are evaluated for their support of the different requirements and use cases.
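
To make the gap concrete, the following toy sketch (hypothetical field names; not USDL's actual modules or schema) shows the kind of business details (pricing, service level, licensing) that a machine-readable service description would add on top of a plain technical endpoint.

```python
# Illustrative only: a toy service description carrying the kinds of business
# details USDL targets. Field names are invented, not USDL's schema.
from dataclasses import dataclass

@dataclass
class PricePlan:
    currency: str
    per_call: float          # price per invocation

@dataclass
class ServiceLevel:
    availability: float      # e.g. 0.999 = "three nines"
    max_latency_ms: int

@dataclass
class ServiceDescription:
    name: str
    endpoint: str            # the technical interface, as described today
    pricing: PricePlan       # business details typically left implicit
    sla: ServiceLevel
    licence: str

svc = ServiceDescription(
    name="GeocodingService",
    endpoint="https://api.example.com/geocode",
    pricing=PricePlan(currency="EUR", per_call=0.002),
    sla=ServiceLevel(availability=0.999, max_latency_ms=200),
    licence="commercial, non-transferable",
)
# A broker can now filter or compare services on business terms, e.g.:
cheap = svc.pricing.per_call < 0.01
```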

Relevance:

20.00%

Publisher:

Abstract:

With the development of enterprise informatisation, Product Lifecycle Management (PLM) systems have been widely deployed and applied in enterprises. This paper analyzes the requirement that version operations on business objects, as specified in process models, comply with the versioning policies imposed by product lifecycles. This requirement motivates the concept of versioning compliance and the compliance-checking approach we proposed in earlier work, which comprises both syntactical and behavioural compatibility checking. The paper then focuses on the tool implementation that provides automated support for versioning compliance checking. An empirical evaluation of the tool was performed with industrial partners using a well-known questionnaire-based method. The evaluation and practitioner feedback further evidence the practical significance of this research question in the PLM field and demonstrate that the proposed solution, with its automated tool support, has high application potential.
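
The core idea of syntactical compliance checking can be pictured with a minimal sketch, assuming a lifecycle that whitelists version transitions; this is an invented illustration, not the paper's algorithm.

```python
# Minimal sketch of the versioning-compliance *idea*: every version operation
# a process model performs on a business object must be allowed by the
# object's lifecycle. Policy and operations below are hypothetical.
ALLOWED = {                      # lifecycle policy (whitelisted transitions):
    ("draft", "released"),       # a draft version may be released,
    ("released", "revised"),     # a released version may be revised,
    ("revised", "released"),     # and a revision re-released.
}

process_operations = [           # version transitions the process model requests
    ("draft", "released"),
    ("released", "obsolete"),    # not permitted by the lifecycle above
]

violations = [op for op in process_operations if op not in ALLOWED]
if violations:
    print("non-compliant operations:", violations)
```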

Relevance:

20.00%

Publisher:

Abstract:

Objectives: This prospective study investigated the effects of caffeine ingestion on the extent of adenosine-induced perfusion abnormalities during myocardial perfusion imaging (MPI).
Methods: Thirty patients with inducible perfusion abnormalities on standard (caffeine-abstinent) adenosine MPI underwent repeat testing with supplementary coffee intake. Baseline and test MPIs were assessed for stress percent defect, rest percent defect, and percent defect reversibility. Plasma levels of caffeine and its metabolites were assessed on both occasions and correlated with MPI findings.
Results: Despite significant increases in caffeine [mean difference 3,106 μg/L (95% CI 2,460 to 3,752 μg/L; P < .001)] and metabolite concentrations over a wide range, there was no statistically significant change in stress percent defect or percent defect reversibility between the baseline and test scans. The increase in caffeine concentration between the baseline and test phases did not affect percent defect reversibility (average change −0.003 for every 100 μg/L increase; 95% CI −0.17 to 0.16; P = .97).
Conclusion: There was no significant relationship between the extent of adenosine-induced coronary flow heterogeneity and the serum concentration of caffeine or its principal metabolites. Hence, the stringent requirements for prolonged abstinence from caffeine before adenosine MPI, which are based on limited studies, appear ill-founded.

Relevance:

20.00%

Publisher:

Abstract:

Proving the security of cryptographic schemes, which are normally short algorithms, is known to be time-consuming and easy to get wrong. Using computers to analyse their security can help to solve this problem. This thesis focuses on methods of using computers to verify the security of such schemes in cryptographic models. The contributions of this thesis to automated security proofs of cryptographic schemes can be divided into two groups: indirect and direct techniques. Regarding indirect techniques, we propose a method to verify the security of public-key-based key exchange protocols. Security of such protocols can already be proved automatically using an existing tool, but only in a non-cryptographic model. We show that under some conditions, security in that non-cryptographic model implies security in a common cryptographic one, the Bellare-Rogaway model [11]. This implication enables one to use that existing tool, which was designed to work with a different type of model, to achieve security proofs of public-key-based key exchange protocols in a cryptographic model. For direct techniques, we have two contributions. The first is a tool to verify Diffie-Hellman-based key exchange protocols. In that work, we design a simple programming language for specifying Diffie-Hellman-based key exchange algorithms. The language has a semantics based on a cryptographic model, the Bellare-Rogaway model [11]. From the semantics, we build a Hoare-style logic which allows us to reason about the security of a key exchange algorithm, specified as a pair of initiator and responder programs. The second direct contribution concerns automated proofs of computational indistinguishability. Unlike the two other contributions, this one does not treat a fixed class of protocols. We construct a generic formalism which allows one to model the security problem of a variety of classes of cryptographic schemes as the indistinguishability between two pieces of information. We also design and implement an algorithm for solving indistinguishability problems. Compared to the two other works, this one covers significantly more types of schemes but, consequently, can verify only weaker forms of security.
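
For orientation, the class of protocols targeted by the first direct contribution can be illustrated with a toy Diffie-Hellman exchange (deliberately tiny, insecure parameters; the thesis reasons about such protocols in the Bellare-Rogaway model rather than executing them):

```python
# Toy Diffie-Hellman exchange illustrating the protocol class discussed above.
# Parameters are deliberately small and insecure; real deployments use
# standardised large groups.
import secrets

p, g = 2039, 7                    # tiny toy group (p is prime)

a = secrets.randbelow(p - 2) + 1  # initiator's ephemeral secret
b = secrets.randbelow(p - 2) + 1  # responder's ephemeral secret

A = pow(g, a, p)                  # message: initiator -> responder
B = pow(g, b, p)                  # message: responder -> initiator

k_init = pow(B, a, p)             # initiator's view of the key: g^(ab) mod p
k_resp = pow(A, b, p)             # responder's view of the key: g^(ab) mod p
assert k_init == k_resp           # both sides derive the same shared key
```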

Relevance:

20.00%

Publisher:

Abstract:

Background: Effective self-management of diabetes is essential for the reduction of diabetes-related complications as global rates of diabetes escalate.
Methods: Randomised controlled trial. Adults with type 2 diabetes (n = 120) with HbA1c ≥ 7.5% were randomly allocated (4 × 4 randomised block design) to receive an automated, interactive telephone-delivered management intervention or usual routine care. Baseline sociodemographic, behavioural and medical history data were collected by self-administered questionnaires, and biological data were obtained during hospital appointments. Health-related quality of life (HRQL) was measured using the SF-36.
Results: The mean age of participants was 57.4 years (SD 8.3), and 63% were male. There were no differences in demographic, socioeconomic and behavioural variables between the study arms at baseline. Over the six-month period from baseline, participants receiving the Australian TLC (Telephone-Linked Care) Diabetes program showed a 0.8% decrease in geometric mean HbA1c, from 8.7% to 7.9%, compared with a 0.2% reduction (8.9% to 8.7%) in the usual care arm (p = 0.002). There was also a significant improvement in mental HRQL, with a mean increase of 1.9 in the intervention arm while the usual care arm decreased by 0.8 (p = 0.007). No significant improvements in physical HRQL were observed.
Conclusions: These analyses indicate the efficacy of the Australian TLC Diabetes program, with clinically significant post-intervention improvements in both glycaemic control and mental HRQL. If supported and maintained by an ongoing program such as this, these improvements could significantly reduce diabetes-related complications in the longer term. Given its accessibility and feasibility, this kind of program has strong potential for providing effective, ongoing support to many individuals with diabetes in the future.

Relevance:

20.00%

Publisher:

Abstract:

In Australia, speeding remains a substantial contributor to road trauma, and the National Road Safety Strategy (2011-2020) highlighted the need to harness community support for current and future speed management strategies. Australia is known for intensive speed camera programs that combine automated and manual operations and employ both covert and overt methods. Recent developments in automated speed enforcement in Australia illustrate the important link between community attitudes to speed enforcement and subsequent speed camera policy. A perceived lack of community confidence in camera programs prompted 2011 reviews by the Auditors-General of New South Wales and Victoria. This paper explores automated speed camera enforcement in Australia with particular reference to the findings of these two reports as they relate to the level of public support for, and community attitudes towards, automated speed enforcement. It also comments on the evolving nature of automated speed enforcement in light of previously identified controversies and dilemmas associated with speed camera programs.

Relevance:

20.00%

Publisher:

Abstract:

A comprehensive one-dimensional meanline design approach for radial inflow turbines is described in the present work. An original code was developed in Python that takes a novel approach to the automatic selection of feasible machines based on pre-defined performance or geometry characteristics for a given application. It comprises a brute-force search algorithm that traverses the entire search space spanned by key non-dimensional parameters and rotational speed. In this study, an in-depth analysis and subsequent implementation of relevant loss models, as well as selection criteria for radial inflow turbines, is addressed. Comparisons with previously published designs and with other available codes, across both real and theoretical test cases, showed good agreement. The approach was found to be valid, and the model proved a useful tool for the preliminary design and performance estimation of radial inflow turbines, enabling its integration with other thermodynamic cycle analysis and three-dimensional blade design codes.
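
A minimal sketch of such a brute-force traversal, with an invented loss correlation and feasibility cut-off standing in for the real loss models, might look as follows:

```python
# Sketch (not the authors' code) of a brute-force meanline search: sweep
# non-dimensional parameters and rotational speed, estimate efficiency with
# a crude loss model, and keep feasible candidates. The loss correlation and
# limits here are illustrative only.
import itertools

def estimated_efficiency(psi, phi):
    """Toy loss model: peak efficiency near psi = 0.9, phi = 0.25."""
    return 0.9 - 0.3 * (psi - 0.9) ** 2 - 1.5 * (phi - 0.25) ** 2

candidates = []
for psi, phi, rpm in itertools.product(
        [x / 100 for x in range(70, 121, 5)],   # loading coefficient sweep
        [x / 100 for x in range(15, 41, 5)],    # flow coefficient sweep
        range(20000, 60001, 10000)):            # rotational speed sweep
    eta = estimated_efficiency(psi, phi)
    if eta > 0.85:                              # pre-defined performance cut-off
        candidates.append((eta, psi, phi, rpm))

best = max(candidates)
print(f"best: eta={best[0]:.3f}, psi={best[1]}, phi={best[2]}, N={best[3]} rpm")
```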

Relevance:

20.00%

Publisher:

Abstract:

This paper describes in detail our Security-Critical Program Analyser (SCPA). SCPA assesses the security of a given program based on its design or source code, using data-flow-based security metrics. Furthermore, it allows software developers to generate a UML-like class diagram of their program and annotate its confidential classes, methods and attributes. SCPA can also produce Java source code for the generated design of a given program. This source code can then be compiled, and the resulting Java bytecode can be used by the tool to assess the program's overall security against our security metrics.
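
As a hypothetical illustration of a data-flow-based metric (not SCPA's actual metric suite), one could measure what proportion of a design's attribute accesses touch members annotated as confidential:

```python
# Rough sketch of one kind of data-flow security metric (invented example):
# the proportion of a design's attribute accesses that flow through members
# annotated as confidential.
accesses = [
    # (accessing method, accessed attribute, attribute is confidential?)
    ("Account.getBalance", "Account.balance", True),
    ("Report.render",      "Account.balance", True),
    ("Report.render",      "Report.title",    False),
    ("Logger.log",         "Report.title",    False),
]

confidential_flows = sum(1 for _, _, secret in accesses if secret)
exposure = confidential_flows / len(accesses)
print(f"confidential data-flow exposure: {exposure:.2f}")  # lower is better
```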

Relevance:

20.00%

Publisher:

Abstract:

Due to the increased complexity, scale, and functionality of information and telecommunication (IT) infrastructures, new exploits and vulnerabilities are discovered every day. These vulnerabilities are mostly used by malicious people to penetrate IT infrastructures, mainly to disrupt business or steal intellectual property. Current incidents prove that it is no longer sufficient to perform manual security tests of the IT infrastructure based on sporadic security audits. Instead, networks should be continuously tested against possible attacks. In this paper we present current results and challenges towards realizing automated and scalable solutions to identify possible attack scenarios in an IT infrastructure. Namely, we define an extensible framework which uses public vulnerability databases to identify probable multi-step attacks in an IT infrastructure, and provide recommendations in the form of patching strategies, topology changes, and configuration updates.
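
The multi-step idea can be sketched as a graph search in which each edge is a vulnerability that lets an attacker hop from one host to another; the hosts and vulnerability identifiers below are invented placeholders, not the paper's framework:

```python
# Simplified sketch of multi-step attack identification: model hosts as graph
# nodes, add an edge when a vulnerability lets an attacker hop between hosts,
# then search for attack paths from an entry point to a critical asset.
from collections import deque

# edge: attacker on `src` can compromise `dst` via the named vulnerability
edges = {
    "internet":  [("webserver", "VULN-A")],
    "webserver": [("appserver", "VULN-B")],
    "appserver": [("database",  "VULN-C")],
}

def attack_paths(start, target):
    """Breadth-first enumeration of vulnerability chains from start to target."""
    queue = deque([(start, [])])
    while queue:
        host, chain = queue.popleft()
        if host == target:
            yield chain
            continue
        for nxt, vuln in edges.get(host, []):
            queue.append((nxt, chain + [(nxt, vuln)]))

for path in attack_paths("internet", "database"):
    print(" -> ".join(f"{h} ({v})" for h, v in path))
# Patching any vulnerability on the chain breaks this multi-step attack.
```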

Relevance:

20.00%

Publisher:

Abstract:

This paper proposes a concrete approach for the automatic mitigation of risks that are detected during process enactment. Given a process model exposed to risks, e.g. a financial process exposed to the risk of approval fraud, we enact this process and as soon as the likelihood of the associated risk(s) is no longer tolerable, we generate a set of possible mitigation actions to reduce the risks' likelihood, ideally annulling the risks altogether. A mitigation action is a sequence of controlled changes applied to the running process instance, taking into account a snapshot of the process resources and data, and the current status of the system in which the process is executed. These actions are proposed as recommendations to help process administrators mitigate process-related risks as soon as they arise. The approach has been implemented in the YAWL environment and its performance evaluated. The results show that it is possible to mitigate process-related risks within a few minutes.
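
A schematic sketch of this monitor-then-recommend loop, with an invented fraud-likelihood estimator and toy process state (the actual implementation operates inside YAWL), might be:

```python
# Hypothetical sketch of the mitigate-on-threshold idea described above;
# names and the likelihood estimator are invented for illustration.
RISK_TOLERANCE = 0.6

def approval_fraud_likelihood(instance):
    """Toy estimator: risk grows when one resource handles both steps."""
    same_actor = instance["request_by"] == instance["approve_by"]
    return 0.9 if same_actor else 0.1

def mitigation_actions(instance):
    """Candidate change sequences that would lower the risk likelihood."""
    others = [r for r in instance["available"] if r != instance["request_by"]]
    return [[("reassign", "approve", r)] for r in others]

instance = {"request_by": "alice", "approve_by": "alice",
            "available": ["alice", "bob", "carol"]}

if approval_fraud_likelihood(instance) > RISK_TOLERANCE:
    for action in mitigation_actions(instance):
        print("recommend:", action)   # offered to the process administrator
```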

Relevance:

20.00%

Publisher:

Abstract:

This study assessed the workday step counts of lower-active (<10,000 daily steps) university employees using an automated, web-based walking intervention (Walk@Work).
METHODS: Academic and administrative staff (n = 390; 45.6 ± 10.8 years; BMI 27.2 ± 5.5 kg/m²; 290 women) at five campuses (Australia [×2], Canada, Northern Ireland and the United States) were given a pedometer and access to the website program (2010-11) and tasked with increasing workday walking by 1000 daily steps above baseline, every two weeks, over a six-week period. Step count changes at four weeks post-intervention were evaluated relative to campus and baseline walking.
RESULTS: Across the sample, step counts significantly increased from baseline to post-intervention (1477 daily steps; p = 0.001). Variations in increases were evident between campuses (largest difference of 870 daily steps; p = 0.04) and by baseline activity status. Those least active at baseline (<5000 daily steps; n = 125) increased step counts the most (1837 daily steps; p = 0.001), whereas those most active (7500-9999 daily steps; n = 79) increased the least (929 daily steps; p = 0.001).
CONCLUSIONS: Walk@Work increased workday walking by 25% in this sample overall. Increases occurred through an automated program, at campuses in different countries, and were most evident for those most in need of intervention.

Relevance:

20.00%

Publisher:

Abstract:

Automated process discovery techniques aim at extracting models from information system logs in order to shed light on the business processes supported by these systems. Existing techniques in this space are effective when applied to relatively small or regular logs, but otherwise generate large and spaghetti-like models. In previous work, trace clustering has been applied in an attempt to reduce the size and complexity of automatically discovered process models: the log is split into clusters and one model is discovered per cluster, yielding a collection of process models, each representing a variant of the business process, as opposed to an all-encompassing model. Still, models produced in this way may exhibit unacceptably high complexity. In this setting, this paper presents a two-way divide-and-conquer process discovery technique in which the discovered process models are split both by variant and hierarchically, by means of subprocess extraction. The proposed technique allows users to set a desired bound on the complexity of the produced models. Experiments on real-life logs show that the technique produces collections of models that are up to 64% smaller than those extracted under the same complexity bounds by existing trace clustering techniques.
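
The variant-splitting half of the technique can be summarised by the following sketch, in which discovery, clustering and the complexity measure are hypothetical placeholders for real process mining components (subprocess extraction is omitted for brevity):

```python
# High-level sketch of the divide-and-conquer loop described above; the
# helpers are stand-ins for a real discovery algorithm, trace clustering,
# and a model-complexity metric such as size.
def discover(log):            # placeholder: any discovery algorithm
    return {"activities": {e for trace in log for e in trace}}

def complexity(model):        # placeholder: e.g. node count of the model
    return len(model["activities"])

def split_by_variant(log):    # placeholder: trace clustering into two halves
    return log[: len(log) // 2], log[len(log) // 2 :]

def discover_bounded(log, bound):
    """Recursively split the log until every discovered model fits the bound."""
    model = discover(log)
    if complexity(model) <= bound or len(log) < 2:
        return [model]
    left, right = split_by_variant(log)
    return discover_bounded(left, bound) + discover_bounded(right, bound)

log = [["a", "b", "c"], ["a", "c"], ["a", "b", "d"], ["a", "d"]]
models = discover_bounded(log, bound=3)
print(len(models), "model(s) within the complexity bound")
```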

Relevance:

20.00%

Publisher:

Abstract:

This article reports on the design and implementation of a Computer-Aided Die Design System (CADDS) for sheet-metal blanks. The system's design considers several factors, such as the complexity of the blank geometry, reduction of scrap material, production requirements, availability of press equipment and standard parts, punch profile complexity, and the manufacturing method of the tool elements. The interactions among these parameters, and how they affect designers' decision patterns, are described. The system is implemented by interfacing AutoCAD with the higher-level languages FORTRAN 77 and AutoLISP. A database of standard die elements is created by parametric programming, an enhanced feature of AutoCAD. The system's greatest advantage is the rapid generation of the most efficient strip and die layouts, together with information about the tool configuration.
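
One routine such a system automates, choosing the feed orientation that maximises material utilisation, can be sketched as follows (a simplified single-row layout; the article's CADDS is built on AutoCAD, FORTRAN 77 and AutoLISP rather than Python):

```python
# Sketch of a strip-layout utilisation check for a rectangular blank
# (simplified single-row layout; dimensions and bridge width are invented).
def utilisation(blank_w, blank_h, bridge):
    """Blank area divided by consumed strip area for a simple layout."""
    pitch = blank_w + bridge              # feed distance per blank
    strip_width = blank_h + 2 * bridge    # blank plus edge margins
    return (blank_w * blank_h) / (pitch * strip_width)

# Compare feeding a 60 x 40 blank along its width vs. its height:
along_w = utilisation(60, 40, bridge=3)
along_h = utilisation(40, 60, bridge=3)
best = "width-wise" if along_w > along_h else "height-wise"
print(f"width-wise {along_w:.2%}, height-wise {along_h:.2%} -> feed {best}")
```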