198 results for Cloud computing, Software as a Service, Architectural models, Application development
Abstract:
The rapid growth of services available on the Internet and exploited through ever-globalizing business networks poses new challenges for service interoperability. New services, from consumer “apps” to enterprise suites and platform and infrastructure resources, are vying for demand with quickly evolving and overlapping capabilities and shorter cycles of extending service access from user interfaces to software interfaces. Services drawn from a wider global setting are subject to greater change and heterogeneity, creating new requirements for structural and behavioral interface adaptation. In this paper, we analyze service interoperability scenarios in global business networks and propose new patterns for service interactions, beyond those proposed over the last ten years through the development of Web service standards and process choreography languages. In contrast to those patterns, we reduce the assumptions of design-time knowledge required to adapt services in favour of run-time mismatch resolution, extend the focus from bilateral to multilateral messaging interactions, and propose declarative ways in which services and interactions take part in long-running conversations via the explicit use of state.
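To make the last idea concrete, here is a hedged Python sketch of explicit, shared conversation state against which multiple services correlate messages at run time; the class names, fields and correlation key are illustrative assumptions, not the paper's pattern definitions.

```python
# Hedged sketch: long-running conversation state made explicit, so that several
# services can declaratively correlate messages against it instead of relying on
# design-time, bilateral channel knowledge. All names/fields are illustrative.
from dataclasses import dataclass, field

@dataclass
class Conversation:
    conversation_id: str
    state: str = "open"                        # e.g. open -> quoted -> ordered -> closed
    participants: set = field(default_factory=set)
    data: dict = field(default_factory=dict)

class ConversationStore:
    def __init__(self):
        self.conversations: dict[str, Conversation] = {}

    def correlate(self, message: dict) -> Conversation:
        """Route an incoming message to its conversation by a declared correlation key."""
        cid = message["conversation_id"]
        conv = self.conversations.setdefault(cid, Conversation(cid))
        conv.participants.add(message["sender"])
        return conv

    def advance(self, cid: str, new_state: str, **updates) -> None:
        """Move the conversation to a new state and record shared data."""
        conv = self.conversations[cid]
        conv.state = new_state
        conv.data.update(updates)
```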
Abstract:
Smartphones are steadily gaining popularity, creating new application areas as their capabilities increase in terms of computational power, sensors and communication. Emerging features of mobile devices also open opportunities for new threats. Android is one of the newer operating systems targeting smartphones. While based on a Linux kernel, Android has unique properties and specific limitations due to its mobile nature, which make it harder to detect and react to malware attacks using conventional techniques. In this paper, we propose an Android Application Sandbox (AASandbox) which is able to perform both static and dynamic analysis on Android programs to automatically detect suspicious applications. Static analysis scans the software for malicious patterns without installing it. Dynamic analysis executes the application in a fully isolated environment, i.e. a sandbox, which intervenes and logs low-level interactions with the system for further analysis. Both the sandbox and the detection algorithms can be deployed in the cloud, providing fast and distributed detection of suspicious software in a mobile software store akin to Google's Android Market. Additionally, AASandbox might be used to improve the efficiency of classical anti-virus applications available for the Android operating system.
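A minimal Python sketch of the static-analysis step described above, scanning decompiled sources for suspicious API patterns; the pattern list and paths are illustrative assumptions, not AASandbox's actual rule set.

```python
# Sketch of a static pattern scan over decompiled sources. The patterns and the
# directory layout are assumptions made for illustration only.
import re
from pathlib import Path

SUSPICIOUS_PATTERNS = {
    "runtime_exec": re.compile(r"Runtime\.getRuntime\(\)\.exec"),
    "native_code": re.compile(r"System\.loadLibrary"),
    "sms_send": re.compile(r"sendTextMessage"),
}

def scan_sources(root: str) -> dict[str, list[str]]:
    """Return a mapping of source file -> names of matched suspicious patterns."""
    hits: dict[str, list[str]] = {}
    for path in Path(root).rglob("*.java"):
        text = path.read_text(errors="ignore")
        matched = [name for name, pat in SUSPICIOUS_PATTERNS.items() if pat.search(text)]
        if matched:
            hits[str(path)] = matched
    return hits

if __name__ == "__main__":
    for source_file, names in scan_sources("decompiled_app").items():
        print(source_file, "->", ", ".join(names))
```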
Abstract:
In the modern connected world, pervasive computing has become a reality. Thanks to the ubiquity of mobile computing devices and emerging cloud-based services, users stay permanently connected to their data. This introduces a slew of new security challenges, including the problem of multi-device key management and single-sign-on architectures. One solution to this problem is the use of secure side channels for authentication, including the visual channel as a proof of vicinity. However, existing approaches often assume confidentiality of the visual channel, or provide insufficient means of mitigating a man-in-the-middle attack. In this work, we introduce QR-Auth, a two-step, 2D-barcode-based authentication scheme for mobile devices which aims specifically at key management and key sharing across devices in a pervasive environment. It requires minimal user interaction and therefore provides better usability than most existing schemes, without compromising security. We show how our approach fits into existing authorization delegation and one-time-password generation schemes, and that it is resilient to man-in-the-middle attacks.
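A hedged Python sketch of one building block such a scheme could use: a server issues a short-lived one-time challenge to be encoded in a 2D barcode, and an enrolled device answers with an HMAC over it using a previously shared per-device key. This illustrates the flow only; it is not the QR-Auth protocol itself, and all names and parameters are assumptions.

```python
# Illustrative challenge/response flow around a barcode-carried one-time value.
import hmac, hashlib, secrets, time

class ChallengeServer:
    def __init__(self, device_keys: dict[str, bytes], ttl: int = 60):
        self.device_keys = device_keys        # device_id -> shared secret
        self.ttl = ttl                        # challenge lifetime in seconds
        self.pending: dict[str, float] = {}   # challenge -> issue time

    def new_challenge(self) -> str:
        c = secrets.token_urlsafe(16)         # payload to encode as a QR code
        self.pending[c] = time.time()
        return c

    def verify(self, device_id: str, challenge: str, response: str) -> bool:
        issued = self.pending.pop(challenge, None)
        if issued is None or time.time() - issued > self.ttl:
            return False                      # unknown or expired challenge
        key = self.device_keys.get(device_id)
        if key is None:
            return False
        expected = hmac.new(key, challenge.encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, response)

def device_answer(key: bytes, challenge: str) -> str:
    """What the scanning device would compute after decoding the barcode."""
    return hmac.new(key, challenge.encode(), hashlib.sha256).hexdigest()
```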
Abstract:
In the last decade, smartphones have gained widespread usage. Since the advent of online application stores, hundreds of thousands of applications have become instantly available to millions of smartphone users. Within the Android ecosystem, application security is governed by digital signatures and a list of coarse-grained permissions. However, this mechanism is not fine-grained enough to give the user sufficient control over applications' activities. The result is abuse of highly sensitive private information, such as phone numbers, without the user's notice. We show that privacy leaks are frequent even among widely popular applications. Together with the fact that the majority of users are not proficient in computer security, this presents a challenge to the engineers developing security solutions for the platform. Our contribution is twofold: first, we propose a service which is able to assess Android Market applications via static analysis and provide detailed but readable reports to the user. Second, we describe a means to mitigate security and privacy threats by automatically reverse-engineering and refactoring binary application packages according to the user's security preferences.
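A minimal Python sketch of the reporting side: turning statically extracted permissions into a short, readable privacy summary. The permission-to-risk mapping and the leak heuristic are illustrative assumptions, not the proposed service's actual rules.

```python
# Translate statically extracted permissions into a plain-language report.
RISK_NOTES = {
    "android.permission.READ_PHONE_STATE": "can read the phone number and device IDs",
    "android.permission.READ_CONTACTS": "can read the address book",
    "android.permission.INTERNET": "can send data to remote servers",
    "android.permission.ACCESS_FINE_LOCATION": "can track precise location",
}

def build_report(app_name: str, permissions: list[str]) -> str:
    lines = [f"Privacy report for {app_name}:"]
    flagged = [p for p in permissions if p in RISK_NOTES]
    if not flagged:
        lines.append("  No sensitive permissions detected.")
    for p in flagged:
        lines.append(f"  - {p.rsplit('.', 1)[-1]}: {RISK_NOTES[p]}")
    # Simple illustrative heuristic: identifiers readable AND network access present.
    if {"android.permission.READ_PHONE_STATE", "android.permission.INTERNET"} <= set(flagged):
        lines.append("  ! Potential leak: device identifiers plus network access.")
    return "\n".join(lines)

print(build_report("example.app", ["android.permission.READ_PHONE_STATE",
                                   "android.permission.INTERNET"]))
```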
Abstract:
Enterprise Systems (ES) can be understood as the de facto standard for holistic operational and managerial support within an organization. Most commonly, ES are offered as commercial off-the-shelf packages requiring customization in the user organization. This process is a complex and resource-intensive task, which often prevents small and midsize enterprises (SMEs) from undertaking configuration projects. In the SME market especially, independent software vendors provide pre-configured ES for a small customer base. The problem of ES configuration is thereby shifted from the customer to the vendor, but remains critical. We argue that the as yet unexplored link between process configuration and business document configuration must be examined more closely, as both types of configuration are closely tied to one another.
Abstract:
This article describes the architecture of a monitoring component for the YAWL system. The proposed architecture is based on sensors and is realized as a YAWL service, providing seamless integration with the YAWL system. The architecture is generic and applicable in different business process monitoring contexts. Finally, it was tested and evaluated in the context of risk monitoring for business processes.
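A generic Python sketch of the sensor idea: each sensor watches a stream of process events and fires an action when its condition holds, as in risk monitoring. The event fields and the example sensor are assumptions for illustration, not the YAWL service interface.

```python
# Generic sensor-based monitor over a stream of process events.
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class ProcessEvent:
    case_id: str
    task: str
    status: str          # e.g. "started", "completed", "overdue"
    timestamp: float

@dataclass
class Sensor:
    name: str
    condition: Callable[[ProcessEvent], bool]
    action: Callable[[ProcessEvent], None]

    def observe(self, event: ProcessEvent) -> None:
        if self.condition(event):
            self.action(event)

def run_monitor(sensors: list[Sensor], events: Iterable[ProcessEvent]) -> None:
    for event in events:
        for sensor in sensors:
            sensor.observe(event)

# Example: a risk-monitoring sensor that flags overdue approval tasks.
overdue_sensor = Sensor(
    name="overdue-approval",
    condition=lambda e: e.task == "Approve order" and e.status == "overdue",
    action=lambda e: print(f"[RISK] case {e.case_id}: '{e.task}' is overdue"),
)
```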
Abstract:
This paper describes the content and delivery of a software internationalisation subject (ITN677) that was developed for Master of Information Technology (MIT) students in the Faculty of Information Technology at Queensland University of Technology. This elective subject introduces students to the strategies, technologies, techniques and current developments associated with this growing 'software development for the world' specialty area. Students learn what is involved in planning and managing a software internationalisation project, as well as in designing, building and using a software internationalisation application. Students also learn how a software internationalisation project must fit into an overall product localisation and globalisation effort that may include culturalisation, tailored system architectures, and reliance upon industry standards. In addition, students are exposed to the different software development techniques used by organizations in this arena and the perils and pitfalls of managing software internationalisation projects.
Abstract:
The objective of this PhD research program is to investigate numerical methods for simulating variably-saturated flow and sea water intrusion in coastal aquifers in a high-performance computing environment. The work is divided into three overlapping tasks: to develop an accurate and stable finite volume discretisation and numerical solution strategy for the variably-saturated flow and salt transport equations; to implement the chosen approach in a high-performance computing environment that may have multiple GPUs or CPU cores; and to verify and test the implementation. The geological description of aquifers is often complex, with porous materials possessing highly variable properties that are best described using unstructured meshes. The finite volume method is a popular method for the solution of the conservation laws that describe sea water intrusion, and is well suited to unstructured meshes. In this work we apply a control volume-finite element (CV-FE) method to an extension of a recently proposed formulation (Kees and Miller, 2002) for variably saturated groundwater flow. The CV-FE method evaluates fluxes at points where material properties and gradients in pressure and concentration are consistently defined, making it both suitable for heterogeneous media and mass conservative. Using the method of lines, the CV-FE discretisation gives a set of differential algebraic equations (DAEs) amenable to solution using higher-order implicit solvers. Heterogeneous computer systems that use a combination of computational hardware, such as CPUs and GPUs, are attractive for scientific computing due to the potential advantages offered by GPUs for accelerating data-parallel operations. We present a C++ library that implements data-parallel methods on both CPUs and GPUs. The finite volume discretisation is expressed in terms of these data-parallel operations, which gives an efficient implementation of the nonlinear residual function. This makes the implicit solution of the DAE system possible on the GPU, because the inexact Newton-Krylov method used by the implicit time stepping scheme can approximate the action of a matrix on a vector using residual evaluations. We also propose preconditioning strategies that are amenable to GPU implementation, so that all computationally intensive aspects of the implicit time stepping scheme are implemented on the GPU. Results are presented that demonstrate the efficiency and accuracy of the proposed numerical methods and formulation. The formulation offers excellent conservation of mass, and higher-order temporal integration increases both the numerical efficiency and the accuracy of the solutions. Flux limiting produces accurate, oscillation-free solutions on coarse meshes, whereas much finer meshes are required to obtain solutions of equivalent accuracy using upstream weighting. The computational efficiency of the software is investigated using CPUs and GPUs on a high-performance workstation. The GPU version offers considerable speedup over the CPU version, with one GPU giving a speedup factor of 3 over the eight-core CPU implementation.
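As one illustration of the matrix-free idea the abstract relies on, the following Python/NumPy sketch approximates a Jacobian-vector product from two residual evaluations, which is all an inexact Newton-Krylov solver needs, so the implicit step can run wherever the residual runs. The finite-difference formula and names are generic assumptions, not the thesis's C++ library.

```python
# Matrix-free Jacobian-vector product via a forward finite difference of the
# nonlinear residual; NumPy stands in for the data-parallel arrays.
import numpy as np

def jacobian_vector_product(residual, u, v, eps=1e-7):
    """Approximate J(u) @ v using two evaluations of residual()."""
    norm_v = np.linalg.norm(v)
    if norm_v == 0.0:
        return np.zeros_like(v)
    h = eps * max(1.0, np.linalg.norm(u)) / norm_v
    return (residual(u + h * v) - residual(u)) / h

# Tiny usage example with a nonlinear residual F(u) = u**3 - 1.
F = lambda u: u**3 - 1.0
u0 = np.array([1.0, 2.0, 0.5])
print(jacobian_vector_product(F, u0, np.array([1.0, 0.0, 0.0])))  # ~ [3*u0[0]**2, 0, 0]
```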
Abstract:
Queensland University of Technology (QUT) Library offers a range of resources and services to researchers as part of its research support portfolio. This poster presents key features of two of the data management services offered by research support staff at QUT Library. The first service is QUT Research Data Finder (RDF), a product of the Australian National Data Service (ANDS) funded Metadata Stores project. RDF is a data registry (metadata repository) that aims to publicise datasets that are research outputs arising from completed QUT research projects. The second is a software and code registry, currently under development with the sole purpose of improving discovery of source code and software as QUT research outputs.

RESEARCH DATA FINDER: As an integrated metadata repository, Research Data Finder aligns with institutional sources of truth, such as QUT’s research administration system, ResearchMaster, and QUT’s Academic Profiles system, to provide high-quality data descriptions that increase awareness of, and access to, shareable research data. The repository and its workflows are designed to foster better data management practices, enhance opportunities for collaboration and research, promote cross-disciplinary research and maximise the impact of existing research data sets.

SOFTWARE AND CODE REGISTRY: The QUT Library software and code registry project stems from researchers’ concerns regarding development activities, storage, accessibility, discoverability and impact, sharing, copyright and IP ownership of software and code. As a result, the Library is developing a registry for code and software research outputs, which will use the existing Research Data Finder architecture. The underpinning software for both registries is VIVO, open source software developed by Cornell University. The registry will use the Research Data Finder service instance of VIVO and will include a searchable interface, links to code/software locations and metadata feeds to Research Data Australia. Key benefits of the project include: improving the discoverability and reuse of QUT researchers’ code and software amongst QUT and the QUT research community; increasing the profile of QUT research outputs on a national level by providing a metadata feed to Research Data Australia; and improving the metrics for access and reuse of code and software in the repository.
Abstract:
There is a song at the beginning of the musical West Side Story where the character Tony sings that “something’s coming, something good.” The song is an anthem of optimism, brimming with promise. This paper is about the long-held promise of information and communication technology (ICT) to transform teaching and learning, to modernise the learning environment of the classroom, and to create a new digital pedagogy. But much of our experience to date in the schooling sector tells more of resistance and reaction than revolution, of more of the same but with a computer in the corner, and of ICT activities as unwelcome time-fillers or time-wasters. Recently, a group of pre-service teachers in a postgraduate primary education degree at an Australian university were introduced to learning objects in an ICT immersion program. Their analyses and related responses, as recorded in online journals, have here been interpreted in terms of TPACK (Technological Pedagogical and Content Knowledge). Against contemporary observation, these students generally displayed high levels of competence and highly positive dispositions toward the integration of ICT in their future classrooms. In short, they displayed the same optimism and confidence as the fictional Tony in believing that something good was coming.
Abstract:
The fastest-growing segment of jobs in the creative sector is in those firms that provide creative services to other sectors (Hearn, Goldsmith, Bridgstock, Rodgers 2014, this volume; Cunningham 2014, this volume). There are also a large number of Creative Services workers (Architecture and Design, Advertising and Marketing, Software and Digital Content occupations) embedded in organizations in other industry sectors (Cunningham and Higgs 2009). Ben Goldsmith (2014, this volume) shows, for example, that the Financial Services sector is the largest employer of digital creative talent in Australia. But why should this be? We argue it is because ‘knowledge-based intangibles are increasingly the source of value creation and hence of sustainable competitive advantage’ (Mudambi 2008, 186). This value creation occurs primarily at the research and development (R and D) and the marketing ends of the supply chain. Both of these areas require strong creative capabilities in order to design for, and to persuade, consumers. It is no surprise that Jess Rodgers (2014, this volume), in a study of Australia’s Manufacturing sector, found designers and advertising and marketing occupations to be the most numerous creative occupations. Greg Hearn and Ruth Bridgstock (2013, forthcoming) suggest ‘the creative heart of the creative economy […] is the social and organisational routines that manage the generation of cultural novelty, both tacit and codified, internal and external, and [cultural novelty’s] combination with other knowledges […] produce and capture value’. Moreover, the main “social and organisational routine” is usually a team (for example, Grabher 2002; 2004).
Abstract:
Currently, GNSS computing modes fall into two classes: network-based data processing and user receiver-based processing. A GNSS reference receiver station essentially contributes raw measurement data, either in the RINEX file format or as real-time data streams in the RTCM format; very little computation is carried out by the reference station. Existing network-based processing modes, whether executed in real time or post-processed, are centralised or sequential. This paper describes a distributed GNSS computing framework that incorporates three GNSS modes: reference station-based, user receiver-based and network-based data processing. Raw data streams from each GNSS reference receiver station are processed in a distributed manner, i.e., either at the station itself or at a hosting data server/processor, to generate station-based solutions, or reference receiver-specific parameters. These may include the precise receiver clock, zenith tropospheric delay, differential code biases, ambiguity parameters and ionospheric delays, as well as line-of-sight information such as azimuth and elevation angles. Covariance information for the estimated parameters may also optionally be provided. In such a mode, nearby precise point positioning (PPP) or real-time kinematic (RTK) users can directly use the corrections from all or some of the stations for real-time precise positioning via a data server. At the user receiver, PPP and RTK techniques are unified under the same observation models; the distinction lies in how the user receiver software deals with corrections from the reference station solutions and with ambiguity estimation in the observation equations. Numerical tests demonstrate good convergence behaviour for differential code bias and ambiguity estimates derived individually with single reference stations. With station-based solutions from three reference stations within distances of 22–103 km, the user receiver positioning results, under various schemes, show an accuracy improvement of the proposed station-augmented PPP and ambiguity-fixed PPP solutions with respect to standard float PPP solutions without station augmentation and ambiguity resolution. Overall, the proposed reference station-based GNSS computing mode can support PPP and RTK positioning services as a simpler alternative to existing network-based RTK or regionally augmented PPP systems.
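A hedged Python sketch of the distributed mode described above: each station publishes a self-contained solution record, and a nearby PPP/RTK user combines them, here by inverse-distance weighting of the zenith tropospheric delay. The field names, values and weighting scheme are illustrative assumptions, not the paper's algorithms.

```python
# Station-based solution records and a simple distance-weighted correction.
from dataclasses import dataclass

@dataclass
class StationSolution:
    station_id: str
    distance_km: float        # distance from the user receiver
    receiver_clock: float     # seconds
    zenith_trop_delay: float  # metres
    code_bias: dict           # signal pair -> differential code bias (ns)

def interpolate_trop_delay(solutions: list[StationSolution]) -> float:
    """Inverse-distance-weighted zenith tropospheric delay at the user location."""
    weights = [1.0 / max(s.distance_km, 1e-3) for s in solutions]
    total = sum(weights)
    return sum(w * s.zenith_trop_delay for w, s in zip(weights, solutions)) / total

stations = [
    StationSolution("STA1", 22.0, 1.2e-5, 2.41, {"C1W-C2W": 3.1}),
    StationSolution("STA2", 65.0, -0.8e-5, 2.38, {"C1W-C2W": -1.4}),
    StationSolution("STA3", 103.0, 2.0e-5, 2.46, {"C1W-C2W": 0.7}),
]
print(f"Interpolated zenith tropospheric delay: {interpolate_trop_delay(stations):.3f} m")
```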
Abstract:
How do you identify “good” teaching practice in the complexity of a real classroom? How do you know that beginning teachers can recognise effective digital pedagogy when they see it? How can teacher educators see through their students’ eyes? The study in this paper arose from our interest in what pre-service teachers “see” when observing effective classroom practice and how this might reveal their own technological, pedagogical and content knowledge. We asked 104 pre-service teachers from Early Years, Primary and Secondary cohorts to watch and comment upon selected exemplary videos of teachers using ICT (information and communication technologies) in Science. The pre-service teachers recorded their observations using a simple PMI (plus, minus, interesting) matrix; these were then coded using the SOLO Taxonomy to look for evidence of their familiarity with, and judgements of, digital pedagogies. From this, we determined that the majority of pre-service teachers we surveyed were using a descriptive rather than a reflective strategy, that is, not extending beyond what was demonstrated in the teaching exemplar or differentiating between action and purpose. We also determined that this method warrants wider trialling as a means of evaluating students’ understandings of the complexity of the digital classroom.
Abstract:
This paper addresses the problem of computing the aggregate QoS of a composite service given the QoS of the services participating in the composition. Previous solutions to this problem are restricted to composite services with well-structured orchestration models. Yet, in existing languages such as WS-BPEL and BPMN, orchestration models may be unstructured. This paper lifts this limitation by providing equations to compute the aggregate QoS for general types of irreducible unstructured regions in orchestration models. In conjunction with existing algorithms for decomposing business process models into single-entry-single-exit regions, these equations allow us to cover a larger set of orchestration models than existing QoS aggregation techniques.
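For orientation, a small Python sketch of the classical aggregation rules for well-structured regions (sequence, exclusive choice, parallel) that such approaches build on; the paper's contribution, equations for irreducible unstructured regions, is not reproduced here. Modelling QoS as a pair (expected execution time, reliability) is an assumption for illustration.

```python
# Classical QoS aggregation for structured orchestration fragments.
from typing import List, Tuple

QoS = Tuple[float, float]   # (expected time, probability of successful completion)

def sequence(parts: List[QoS]) -> QoS:
    time = sum(t for t, _ in parts)
    rel = 1.0
    for _, r in parts:
        rel *= r
    return time, rel

def exclusive_choice(branches: List[Tuple[float, QoS]]) -> QoS:
    """branches: (branch probability, branch QoS); probabilities sum to 1."""
    time = sum(p * t for p, (t, _) in branches)
    rel = sum(p * r for p, (_, r) in branches)
    return time, rel

def parallel(branches: List[QoS]) -> QoS:
    time = max(t for t, _ in branches)   # all branches run concurrently
    rel = 1.0
    for _, r in branches:
        rel *= r
    return time, rel

# Example: two services in sequence, the second being a 70/30 exclusive choice.
print(sequence([(120.0, 0.99),
                exclusive_choice([(0.7, (40.0, 0.98)), (0.3, (90.0, 0.95))])]))
```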