897 results for CAPABILITIES
Abstract:
The IEEE Subcommittee on the Application of Probability Methods (APM) published the IEEE Reliability Test System (RTS) [1] in 1979. This system provides a consistent and generally acceptable set of data that can be used both in generation capacity and in composite system reliability evaluation [2,3]. The test system provides a basis for comparing results obtained by different people using different methods. Prior to its publication, there was no general agreement on either the system or the data that should be used to demonstrate or test the various techniques developed for reliability studies. The development of reliability assessment techniques and programs is highly dependent on the intent behind the development, as the experience of one power utility with its system may be quite different from that of another. The development and utilization of a reliability program are therefore greatly influenced by the experience of a utility and the intent of the system manager, planner and designer conducting the reliability studies. The IEEE-RTS has proved extremely valuable in highlighting and comparing the capabilities (and limitations) of programs used in reliability studies, the differences in the perceptions of various power utilities, and the differences in solution techniques. The IEEE-RTS is, however, a reasonably large power network, which can make it difficult to use for initial studies in an educational environment.
Abstract:
The R statistical environment and language has demonstrated particular strengths for interactive development of statistical algorithms, as well as data modelling and visualisation. Its current implementation has an interpreter at its core, which may result in a performance penalty compared to directly executing user algorithms in the native machine code of the host CPU. In contrast, the C++ language has no built-in visualisation capabilities, handling of linear algebra, or even basic statistical algorithms; however, user programs are converted to high-performance machine code ahead of execution. A new method avoids possible speed penalties in R by using the Rcpp extension package in conjunction with the Armadillo C++ matrix library. In addition to the inherent performance advantages of compiled code, Armadillo provides an easy-to-use template-based meta-programming framework that allows several linear algebra operations to be automatically pooled into one, which in turn can lead to further speedups. With the aid of Rcpp and Armadillo, conversion of linear-algebra-centered algorithms from R to C++ becomes straightforward. The converted algorithms retain their overall structure and readability while maintaining a bidirectional link with the host R environment. Empirical timing comparisons of R and C++ implementations of a Kalman filtering algorithm indicate a speedup of several orders of magnitude.
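As a minimal sketch of the conversion pattern this abstract describes (not the paper's benchmark code; the function name kalman_step and the textbook notation F, H, Q, R are assumptions for illustration), a single Kalman prediction/update step might be written with Rcpp and Armadillo as follows:

```cpp
// Minimal sketch: one Kalman filter prediction/update step, callable from R.
// Textbook notation: state x, covariance P, transition F, observation H,
// process/measurement noise covariances Q and R, measurement z.

// [[Rcpp::depends(RcppArmadillo)]]
#include <RcppArmadillo.h>

// [[Rcpp::export]]
Rcpp::List kalman_step(const arma::vec& x, const arma::mat& P,
                       const arma::mat& F, const arma::mat& H,
                       const arma::mat& Q, const arma::mat& R,
                       const arma::vec& z) {
  // Predict. Armadillo's expression templates fuse the chained products,
  // which is the "pooling" of linear algebra operations mentioned above.
  arma::vec x_pred = F * x;
  arma::mat P_pred = F * P * F.t() + Q;

  // Update.
  arma::vec y = z - H * x_pred;                  // innovation
  arma::mat S = H * P_pred * H.t() + R;          // innovation covariance
  arma::mat K = arma::solve(S, H * P_pred).t();  // gain; relies on S, P_pred symmetric
  arma::vec x_new = x_pred + K * y;
  arma::mat P_new = (arma::eye(arma::size(P_pred)) - K * H) * P_pred;

  return Rcpp::List::create(Rcpp::Named("x") = x_new,
                            Rcpp::Named("P") = P_new);
}
```

From R, such a file can be compiled and loaded with Rcpp::sourceCpp("kalman_step.cpp"), after which kalman_step() is callable like any R function on ordinary R vectors and matrices.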
Abstract:
What is the contribution of innovation brokers in leveraging research and development (R&D) investment to enhance industry-wide capabilities? The case of the Australian Cooperative Research Centre for Construction Innovation (CRC CI) is considered in the context of motivating supply chain firms to improve their organizational capabilities in order to acquire, assimilate, transfer and exploit R&D outcomes to their advantage, and to create broader industry and national benefits. A previous audit and analysis has shown an increase in business R&D investment since 2001. The role of the CRC CI in contributing to the growth in absorptive capacity of the Australian construction industry as a whole is illustrated through two programmes: digital modelling/building information modelling (BIM) and construction site safety. Numerous positive outcomes in productivity, quality, safety and competitiveness were achieved between 2001 and 2009.
Abstract:
NeSSi (network security simulator) is a novel network simulation tool which incorporates a variety of features relevant to network security distinguishing it from general-purpose network simulators. Its capabilities such as profile-based automated attack generation, traffic analysis and support for detection algorithm plug-ins allow it to be used for security research and evaluation purposes. NeSSi has been successfully used for testing intrusion detection algorithms, conducting network security analysis and developing overlay security frameworks. NeSSi is built upon the agent framework JIAC, resulting in a distributed and extensible architecture. In this paper, we provide an overview of the NeSSi architecture as well as its distinguishing features and briefly demonstrate its application to current security research projects.
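NeSSi itself is built in Java on the JIAC agent framework; purely to illustrate the detection-algorithm plug-in idea mentioned above, a plug-in interface might look like the following sketch. All type and method names here are hypothetical, not NeSSi's actual API:

```cpp
#include <cstddef>
#include <set>
#include <string>

// Hypothetical per-packet record handed to detection plug-ins by the simulator.
struct Packet {
  std::string srcAddr;
  std::string dstAddr;
  int dstPort;
  std::size_t payloadBytes;
};

// Hypothetical plug-in interface: the simulator feeds simulated traffic to
// every registered detector and periodically queries for alerts.
class DetectionPlugin {
public:
  virtual ~DetectionPlugin() = default;
  virtual void onPacket(const Packet& p) = 0;  // called for each simulated packet
  virtual bool alertRaised() const = 0;        // polled after each simulation tick
  virtual std::string name() const = 0;
};

// Toy example: flag traffic that touches many distinct destination ports.
class PortScanDetector : public DetectionPlugin {
  std::set<int> seenPorts;
public:
  void onPacket(const Packet& p) override { seenPorts.insert(p.dstPort); }
  bool alertRaised() const override { return seenPorts.size() > 100; }
  std::string name() const override { return "port-scan-heuristic"; }
};
```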
Abstract:
The determination of performance standards and assessment practices for student work placements is an essential and important task. Inappropriate, inadequate, or excessively complex assessment tasks can influence levels of student engagement and the quality of learning outcomes. Critical to determining appropriate standards and assessment tasks is an understanding and knowledge of key elements of the learning environment and the extent to which opportunities are provided for students to engage in critical reflection and judgement of their own performance in the context of the work environment. This paper focuses on the development of essential skills and knowledge (capabilities) that provide evidence of learning in work placements, describing an approach taken in the science and technology disciplines. Assessment matrices are presented to illustrate a method of assessment for use within the context of a learning environment centred on work placements in science and technology. This study contributes to the debate on the meaning of professional capability, performance standards and assessment practices in work placement programs by providing evidence of an approach that can be adapted by other programs to achieve similar benefits. The approach may also be valuable in other learning contexts where capability and performance are being judged in situations outside a controlled teaching and learning environment, i.e., in other life-wide learning contexts.
Abstract:
Educators are faced with many challenging questions in designing an effective curriculum. What prerequisite knowledge do students have before commencing a new subject? At what level of mastery? What is the spread of capabilities between bare-passing students and the top-performing group? How does the intended learning specification compare to student performance at the end of a subject? In this paper we present a conceptual model that helps answer some of these questions. It has the following main capabilities: capturing the learning specification in terms of syllabus topics and outcomes; capturing mastery levels to model progression; capturing the minimal vs. aspirational learning design; capturing confidence and reliability metrics for each of these mappings; and, finally, comparing and reflecting on the learning specification against actual student performance. We present a web-based implementation of the model, and validate it by mapping the final exams from four programming subjects against the ACM/IEEE CS2013 topics and outcomes, using Bloom's Taxonomy as the mastery scale. We then import the itemised exam grades from 632 students across the four subjects and compare the demonstrated student performance against the expected learning for each. The key contributions of this work are the validated conceptual model for capturing and comparing expected learning vs. demonstrated performance, and a web-based implementation of this model, which is made freely available online as a community resource.
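To make the model concrete, the core mapping record might be encoded along these lines (a hypothetical sketch; the field names are mine, not the paper's schema):

```cpp
#include <string>
#include <vector>

// One mapping from an assessment item to the learning specification:
// a topic/outcome, the Bloom's level at which it is assessed, whether it
// belongs to the minimal or the aspirational design, and the mapper's
// confidence in the mapping.
struct OutcomeMapping {
  std::string topic;      // e.g. an ACM/IEEE CS2013 knowledge unit
  std::string outcome;    // intended learning outcome
  int bloomLevel;         // 1 = Remember ... 6 = Create
  bool aspirational;      // false = minimal (bare-pass) expectation
  double confidence;      // 0..1, reliability of this mapping
};

// An exam item carries one or more mappings; itemised student grades are
// later compared against the expected bloomLevel of each mapping.
struct ExamItem {
  std::string id;
  double maxMarks;
  std::vector<OutcomeMapping> mappings;
};
```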
Abstract:
The rapid growth of services available on the Internet and exploited through ever-globalizing business networks poses new challenges for service interoperability. New services, from consumer “apps” to enterprise suites and platform and infrastructure resources, are vying for demand with quickly evolving and overlapping capabilities, and shorter cycles of extending service access from user interfaces to software interfaces. Services, drawn from a wider global setting, are subject to greater change and heterogeneity, creating new requirements for structural and behavioral interface adaptation. In this paper, we analyze service interoperability scenarios in global business networks and propose new patterns for service interactions, beyond those proposed over the last ten years through the development of Web service standards and process choreography languages. In contrast to those, we reduce the assumptions of design-time knowledge required to adapt services, giving way to run-time mismatch resolution; extend the focus from bilateral to multilateral messaging interactions; and propose declarative ways in which services and interactions take part in long-running conversations via the explicit use of state.
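As a rough illustration of the explicit-state idea (names are hypothetical; the paper's actual message model is not reproduced here), each message in a long-running, multilateral conversation could carry its conversation identity and declared state, so that a service joining at run time can detect and resolve mismatches without design-time knowledge of its peers:

```cpp
#include <string>
#include <vector>

// Hypothetical message shape for a long-running, multilateral conversation.
struct ConversationMessage {
  std::string conversationId;           // identifies the long-running conversation
  std::string sender;
  std::vector<std::string> recipients;  // multilateral, not just bilateral
  std::string declaredState;            // e.g. "negotiating", "fulfilling", "closing"
  std::string payload;                  // application content
};
```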
Abstract:
Can China improve the competitiveness of its culture in world markets? Should it focus less on quantity and more on quality? How should Chinese cultural producers and distributors target audiences overseas? These are important questions facing policy makers today. In this paper I investigate how China might best deploy its soft power capabilities: for instance, should it try to demonstrate that it is a creative, innovative nation, capable of original ideas? Or should it put the emphasis on validating its credentials as an enduring culture and civilisation? In order to investigate these questions I introduce the cultural innovation timeline, a model that explains how China is adding value. There are six stages in the timeline but I will focus in particular on how the timeline facilitates cultural trade. In the second part of the paper I look at some of the challenges facing China, particularly the reception of its cultural products in international markets.
Abstract:
The dynamic capabilities view (DCV) focuses on the renewal of firms’ strategic knowledge resources so as to sustain competitive advantage within turbulent markets. Within the context of the DCV, the focus of knowledge management (KM) is to develop knowledge management capability (KMC) by deploying knowledge governance mechanisms that are conducive to facilitating knowledge processes, so as to produce superior business performance over time. The essence of KM performance evaluation is to assess how well the KMC is configured with knowledge governance mechanisms and processes that enable a firm to achieve superior performance by matching its knowledge base with market needs. However, little research has been undertaken to evaluate KM performance from the DCV perspective. This study employed a survey design and adopted hypothesis-testing approaches to develop a capability-based KM evaluation framework (CKMEF) that upholds the basic assertions of the DCV. Under the governance of the framework, a KM index (KMI) and a KM maturity model (KMMM) were derived not only to indicate the extent to which a firm’s KM implementations fulfil its strategic objectives and to identify the evolutionary phase of its KMC, but also to benchmark the KMC against the research population. The research design ensured that the evaluation framework and instruments have statistical significance and good generalizability for application in the research population, namely construction firms operating in the dynamic Hong Kong construction market. The study demonstrated the feasibility of quantitatively evaluating the development of the KMC and revealing the performance heterogeneity associated with that development.
Abstract:
Our daily lives are becoming more and more dependent on smartphones due to their increased capabilities. Smartphones are used in various ways, from payment systems to assisting the lives of elderly or disabled people. Security threats for these devices are becoming increasingly dangerous, since there is still a lack of proper security tools for protection. Android emerges as an open smartphone platform which allows modification even at the operating system level. Therefore, third-party developers have the opportunity to develop kernel-based low-level security tools, which is unusual for smartphone platforms. Android quickly gained popularity among smartphone developers and even beyond, since it is based on Java on top of an "open" Linux, in contrast to former proprietary platforms with very restrictive SDKs and corresponding APIs. Symbian OS, for example, which held the greatest market share among all smartphone OSs, closed critical APIs to common developers and introduced application certification, because it was the main target of smartphone malware in the past. In fact, more than 290 malware samples designed for Symbian OS appeared from July 2004 to July 2008. Android, in turn, promises to be completely open source. Together with the Linux-based smartphone OS OpenMoko, open smartphone platforms may attract malware writers to create malicious applications endangering critical smartphone applications and owners' privacy. In this work, we present our current results in analyzing the security of Android smartphones with a focus on its Linux side. Our results are not limited to Android; they are also applicable to Linux-based smartphones such as the OpenMoko Neo FreeRunner. Our contribution in this work is three-fold. First, we analyze the Android framework and the Linux kernel to check security functionalities. We survey well-accepted security mechanisms and tools which can increase device security. We provide descriptions of how to adopt these security tools on the Android kernel, and provide an analysis of their overhead in terms of resource usage. As open smartphones are released and may increase their market share similarly to Symbian, they may attract the attention of malware writers. Therefore, our second contribution focuses on malware detection techniques at the kernel level. We test the applicability of existing signature and intrusion detection methods in the Android environment. We focus on monitoring events in the kernel; that is, identifying critical kernel, log file, file system and network activity events, and devising efficient mechanisms to monitor them in a resource-limited environment. Our third contribution involves initial results of our malware detection mechanism based on static function call analysis. We identified approximately 105 Executable and Linking Format (ELF) executables installed on the Linux side of Android. We performed a statistical analysis of the function calls used by these applications. The results of the analysis can be compared with newly installed applications to detect significant differences. Additionally, certain function calls indicate malicious activity. Therefore, we present a simple decision tree for deciding the suspiciousness of the corresponding application. Our results present a first step towards detecting malicious applications on Android-based devices.
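As a toy illustration of the third contribution (the call names and two-hit rule below are assumptions for the sketch, not the paper's actual feature set or decision tree), static analysis of an executable's imported calls might feed a decision rule like this:

```cpp
#include <set>
#include <string>

// Toy stand-in for the paper's decision tree: count how many of the
// imported function calls (e.g. extracted from an ELF binary's symbol
// table) belong to a small "risky" set, and flag the binary if several
// co-occur unexpectedly.
bool looksSuspicious(const std::set<std::string>& importedCalls) {
  const std::set<std::string> risky = {"fork", "execve", "chmod", "dlopen"};
  int hits = 0;
  for (const std::string& call : risky)
    if (importedCalls.count(call)) ++hits;
  return hits >= 2;  // illustrative threshold, not the paper's rule
}
```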
Abstract:
Many older people have difficulties using modern consumer products due to increased product complexity, both in terms of functionality and interface design. Previous research has shown that older people have more difficulty than younger people in using complex devices intuitively. Furthermore, increased life expectancy and a falling birth rate have been catalysts for changes in world demographics over the past two decades. This trend also suggests a proportional increase of older people in the workforce. This realisation has led to research on the effective use of technology by older populations in an effort to engage them more productively and to assist them in leading independent lives. Ironically, not enough attention has been paid to the development of interaction design strategies that would actually enable older users to better exploit new technologies. Previous research suggests that if products are designed to reflect people's prior knowledge, they will appear intuitive to use. Since intuitive interfaces utilise users' domain-specific prior knowledge, they require minimal learning for effective interaction. However, older people are very diverse in their capabilities and domain-specific prior knowledge. In addition, ageing also slows down the process of acquiring new knowledge. Keeping these suggestions and limitations in view, the aim of this study was to investigate possible approaches to developing interfaces that facilitate intuitive use by older people. In this quest, two experiments were conducted that systematically investigated redundancy (the use of both text and icons) in interface design, complexity of interface structure (nested versus flat), and personal user factors such as cognitive abilities, perceived self-efficacy and technology anxiety, all of which could interfere with intuitive use. The results from the first experiment suggest that, contrary to what was hypothesised, older people (65+ years) completed the tasks faster on the text-only interface design than on the redundant interface design. The outcome of the second experiment showed that, as expected, older people took more time on a nested interface; however, they did not make significantly more errors than younger age groups. Contrary to what was expected, older age groups also did better under anxious conditions. The findings of this study also suggest that older age groups are more heterogeneous in their capabilities, and that their intuitive use of contemporary technological devices is mediated more by domain-specific prior knowledge of technology and by their cognitive abilities than by chronological age. This makes it extremely difficult to develop product interfaces that are entirely intuitive to use. However, keeping the cognitive limitations of older people in view when interfaces are developed, and using simple text-based interfaces with a flat interface structure, would help them intuitively learn and use complex technological products successfully during early encounters with a product. These findings indicate that it might be more pragmatic to design interfaces for intuitive learning rather than for intuitive use. Based on this research and the existing literature, a model of adaptable interface design is proposed as a strategy for developing intuitively learnable product interfaces.
An adaptable interface can initially use a simple text-only design to help older users learn and successfully use the new system. Over time, this can be progressively changed to a symbol-based nested interface for more efficient and intuitive use.
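A minimal sketch of such an adaptation policy (the proficiency measures and thresholds below are illustrative assumptions, not values from the study):

```cpp
// Select an interface mode from observed proficiency: start with a simple
// text-only flat menu and migrate towards a symbol-based nested layout as
// the user demonstrates successful, low-error use.
enum class UiMode { TextFlat, TextNested, IconNested };

UiMode chooseMode(int successfulSessions, double avgErrorRate) {
  if (successfulSessions < 5 || avgErrorRate > 0.2) return UiMode::TextFlat;
  if (successfulSessions < 20) return UiMode::TextNested;
  return UiMode::IconNested;
}
```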
Abstract:
Smartphones have become a critical part of our lives, as they offer advanced capabilities with PC-like functionalities. They are widely deployed and are no longer used only for classical voice-centric communication. New smartphone malware keeps emerging, and most of it still targets Symbian OS. In the case of Symbian OS, application signing seemed to be an appropriate measure for slowing down the appearance of malware. Unfortunately, recent examples showed that signing can be bypassed, resulting in new malware outbreaks. In this paper, we present a novel approach to static malware detection in resource-limited mobile environments. This approach can be used to extend currently used third-party application signing mechanisms to increase malware detection capabilities. In our work, we extract function calls from binaries in order to apply our clustering mechanism, called centroid, which is capable of detecting unknown malware. Our results are promising: the employed mechanism might find application at distribution channels, such as online application stores. Additionally, it seems suitable for direct use on smartphones for (pre-)checking installed applications.
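A minimal sketch of the centroid idea (the vector representation and threshold are assumptions for illustration; the paper's actual feature extraction is not reproduced): represent each binary as a frequency vector over a fixed list of function calls, average known-good samples into a centroid, and flag binaries that lie too far from it:

```cpp
#include <cmath>
#include <vector>

// Average a set of function-call frequency vectors (all the same length,
// at least one sample) into a centroid.
std::vector<double> centroid(const std::vector<std::vector<double>>& samples) {
  std::vector<double> c(samples.front().size(), 0.0);
  for (const auto& s : samples)
    for (std::size_t i = 0; i < s.size(); ++i) c[i] += s[i];
  for (double& v : c) v /= samples.size();
  return c;
}

// Euclidean distance between two frequency vectors.
double distance(const std::vector<double>& a, const std::vector<double>& b) {
  double d = 0.0;
  for (std::size_t i = 0; i < a.size(); ++i) d += (a[i] - b[i]) * (a[i] - b[i]);
  return std::sqrt(d);
}

// Flag a binary whose call profile is far from the known-good centroid.
bool flagged(const std::vector<double>& sample,
             const std::vector<double>& c, double threshold) {
  return distance(sample, c) > threshold;
}
```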
Abstract:
Smartphones are steadily gaining popularity, creating new application areas as their capabilities increase in terms of computational power, sensors and communication. Emerging new features of mobile devices give rise to new threats. Android is one of the newer operating systems targeting smartphones. While based on a Linux kernel, Android has unique properties and specific limitations due to its mobile nature, which make it harder to detect and react to malware attacks using conventional techniques. In this paper, we propose an Android Application Sandbox (AASandbox) which is able to perform both static and dynamic analysis on Android programs to automatically detect suspicious applications. Static analysis scans the software for malicious patterns without installing it. Dynamic analysis executes the application in a fully isolated environment, i.e. a sandbox, which intervenes in and logs low-level interactions with the system for further analysis. Both the sandbox and the detection algorithms can be deployed in the cloud, providing fast and distributed detection of suspicious software in a mobile software store akin to Google's Android Market. Additionally, AASandbox might be used to improve the efficiency of classical anti-virus applications available for the Android operating system.
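The two-phase flow might be orchestrated roughly as follows (a hedged sketch; staticScan and runInSandbox are stubs standing in for AASandbox's real components, and the execve rule is a toy placeholder for real detection logic):

```cpp
#include <string>
#include <vector>

struct Verdict { bool suspicious = false; std::string reason; };

// Stub: would pattern-match the package without installing it.
Verdict staticScan(const std::string& /*apkPath*/) { return {}; }

// Stub: would execute the app in isolation and return logged system calls.
std::vector<std::string> runInSandbox(const std::string& /*apkPath*/) {
  return {"open", "read", "execve"};
}

// Cheap static scan first; dynamic analysis only if nothing is found.
Verdict analyze(const std::string& apkPath) {
  Verdict v = staticScan(apkPath);
  if (v.suspicious) return v;  // fail fast before any execution
  for (const auto& call : runInSandbox(apkPath))
    if (call == "execve")      // toy rule standing in for real detection
      return {true, "unexpected execve during dynamic run"};
  return {false, "no findings"};
}
```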
Abstract:
Driven by the rapid development of ubiquitous and pervasive computing, personalized services and applications are being deployed to support our lives. Accordingly, the number of interfaces and devices (smartphone, tablet computer, etc.) provided to access and consume these services is growing continuously. To reduce the complexity of managing many accounts with different credentials, Single Sign-On (SSO) solutions have been introduced. However, a single password for many accounts represents a single point of failure. Furthermore, an SSO session, once initiated, poses a high risk when the workstation is left unlocked and unattended. In this paper, we present a concept for Persistent Single Sign-On (PSSO) in ubiquitous home environments that uses the capabilities of behavioural biometrics to check the identity of the user continuously in an unobtrusive manner.
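One plausible shape for such a continuous check (the smoothing factor and lock threshold below are illustrative assumptions, not values from the paper): each behavioural-biometric observation, e.g. a keystroke-timing similarity score, updates an exponentially smoothed confidence value, and the session is locked when confidence falls too low:

```cpp
// Continuous verification: smooth per-observation similarity scores into a
// running confidence that the current user is the enrolled one.
class ContinuousVerifier {
  double confidence = 1.0;  // 1.0 = fully confident it is the same user
public:
  // score in [0,1]: similarity of the latest observation to the user profile
  void observe(double score) {
    constexpr double alpha = 0.3;  // smoothing factor (illustrative)
    confidence = (1.0 - alpha) * confidence + alpha * score;
  }
  // The SSO session stays valid only while confidence is above a threshold.
  bool sessionValid() const { return confidence >= 0.6; }
};
```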
Abstract:
Knowledge has been widely recognised as a determinant of business performance. Business capabilities require effective sharing of resources and knowledge. Specifically, knowledge sharing (KS) between different companies and departments can improve manufacturing processes, since intangible knowledge plays an essential role in achieving competitive advantage. This paper presents a mixed-method research study into the impact of KS on the effectiveness of new product development (NPD) in achieving desired business performance (BP). Firstly, an empirical study utilising moderated regression analysis was conducted to test whether, and to what extent, KS has leveraging power on the relationship between the NPD and BP constructs and variables. Secondly, this empirically verified hypothesis was validated through explanatory case studies involving two Taiwanese manufacturing companies, using a qualitative interaction-term pattern-matching technique. The study provides evidence that knowledge sharing and management activities are essential for deriving competitive advantage in the manufacturing industry.
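The moderated regression referred to above takes the standard interaction form (symbols follow the abstract's constructs; the coefficient names are generic, not the paper's notation):

```latex
BP = \beta_0 + \beta_1\,\mathit{NPD} + \beta_2\,\mathit{KS}
   + \beta_3\,(\mathit{NPD} \times \mathit{KS}) + \varepsilon
```

A statistically significant interaction coefficient \beta_3 is what indicates that KS moderates, i.e. has leveraging power on, the NPD-to-BP relationship.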