965 results for Application Programming Interface
Abstract:
The Denial of Service Testing Framework (dosTF) being developed as part of the joint India-Australia research project for ‘Protecting Critical Infrastructure from Denial of Service Attacks’ allows for the construction, monitoring and management of emulated Distributed Denial of Service attacks using modest hardware resources. The purpose of the testbed is to study the effectiveness of different DDoS mitigation strategies and to allow for the testing of defense appliances. Experiments are saved and edited in XML as abstract descriptions of an attack/defense strategy that is only mapped to real resources at run-time. It also provides a web-application portal interface that can start, stop and monitor an attack remotely. Rather than monitoring a service under attack indirectly, by observing traffic and general system parameters, monitoring of the target application is performed directly in real time via a customised SNMP agent.
Abstract:
The YAWL Worklet Service is an effective approach to facilitating dynamic flexibility and exception handling in workflow processes. Recent additions to the Service extend its capabilities through a programming interface that provides easier access to rules storage and evaluation, and an event server that notifies listening servers and applications when exceptions are detected, which together serve to enhance the functionality and accessibility of the Service's features and expand its usability to new potential domains.
Abstract:
Scholarly research into the uses of social media has become a major area of growth in recent years, as the adoption of social media for public communication itself has continued apace. While social media platforms provide ready avenues for data access through their Application Programming Interfaces, it is increasingly important to think through exactly what these data represent, and what conclusions about the role of social media in society the research which is based on such data therefore enables. This article explores these issues especially for one of the currently leading social media platforms: Twitter.
Abstract:
Social media analytics is a rapidly developing field of research at present: new, powerful ‘big data’ research methods draw on the Application Programming Interfaces (APIs) of social media platforms. Twitter has proven to be a particularly productive space for such methods development, initially due to the explicit support and encouragement of Twitter, Inc. However, because of the growing commercialisation of Twitter data, and the increasing API restrictions imposed by Twitter, Inc., researchers are now facing a considerably less welcoming environment, and are forced to find additional funding for paid data access, or to bend or break the rules of the Twitter API. This article considers the increasingly precarious nature of ‘big data’ Twitter research, and flags the potential consequences of this shift for academic scholarship.
Abstract:
In recent years, XML has been accepted as the format of messages for several applications. Prominent examples include SOAP for Web services, XMPP for instant messaging, and RSS and Atom for content syndication. This XML usage is understandable, as the format itself is a well-accepted standard for structured data, and it has excellent support for many popular programming languages, so inventing an application-specific format no longer seems worth the effort. Simultaneously with XML's rise to prominence, there has been an upsurge in the number and capabilities of various mobile devices. These devices are connected through various wireless technologies to larger networks, and a goal of current research is to integrate them seamlessly into these networks. These two developments seem to be at odds with each other. XML as a fully text-based format takes up more processing power and network bandwidth than binary formats would, whereas the battery-powered nature of mobile devices dictates that energy, both in processing and transmitting, be utilized efficiently. This thesis presents the work we have performed to reconcile these two worlds. We present a message transfer service that we have developed to address what we have identified as the three key issues: XML processing at the application level, a more efficient XML serialization format, and the protocol used to transfer messages. Our presentation includes both a high-level architectural view of the whole message transfer service, as well as detailed descriptions of the three new components. These components consist of an API, and an associated data model, for XML processing designed for messaging applications, a binary serialization format for the data model of the API, and a message transfer protocol providing two-way messaging capability with support for client mobility. We also present relevant performance measurements for the service and its components.
As a result of this work, we do not consider XML to be inherently incompatible with mobile devices. As the fixed networking world moves toward XML for interoperable data representation, so should the wireless world, to provide a better-integrated networking infrastructure. However, the problems that XML adoption raises touch all of the higher layers of application programming, so instead of concentrating simply on the serialization format, we conclude that improvements need to be made in an integrated fashion across all of these layers.
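The size argument above can be illustrated with a toy comparison between a text XML message and a naive binary encoding of the same data. The token scheme below is invented purely for illustration; it is not the serialization format developed in the thesis.

```python
import struct
import xml.etree.ElementTree as ET

# A small XML message, as it would be sent in text form.
text_xml = '<msg><id>42</id><body>hello</body></msg>'

# A naive, hypothetical binary encoding of the same data: tokenised element
# names (1 byte each) plus length-prefixed UTF-8 values. This illustrates
# the size argument only; it is not the thesis's actual format.
tokens = {'msg': 0, 'id': 1, 'body': 2}

def encode(elem):
    out = bytearray([tokens[elem.tag]])
    value = (elem.text or '').encode('utf-8') if len(elem) == 0 else b''
    out += struct.pack('B', len(value)) + value
    for child in elem:
        out += encode(child)
    return bytes(out)

root = ET.fromstring(text_xml)
binary = encode(root)
print(len(text_xml.encode('utf-8')), len(binary))  # 40 13
```

Even this crude tokenisation roughly triples the density of the message, which is the kind of saving that matters when every transmitted byte costs battery energy.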
Abstract:
As exploration targets expand, deep target layers (especially lava reservoirs) are a potential exploration area. As is well known, reflection energy from deep layers is weak because the seismic signals are absorbed and attenuated by the overburden. Caustics and multi-valued traveltimes in the wavefield arise from the complexity of the strata. The signal-to-noise ratio is low and the fold is limited (no more than 30). All of these factors reduce the effectiveness of conventional processing methods, so a high-S/N stacked section cannot always be obtained with conventional stacking methods, even when prestack depth migration is used. It is therefore necessary to develop an alternative stacking method. For years, finite-difference solution of the wave equation was held back by the limits of computation. Kirchhoff integral methods rose to prominence in the early 1990s, but they suffer from severe problems that are difficult to resolve, so a new stacking method is required for oil and gas exploration. It is natural to consider upgrading the traditional physical basis of seismic exploration methods and improving the widely used stacking techniques. On the other hand, great progress depends on improvements in wave-equation prestack depth migration, whose wavefield-continuation algorithm is utilized here. Combining wavefield extrapolation with Fresnel zone stacking, a new stacking method is developed. It is well known that the seismic wavefield observed at the surface comes, physically, from the Fresnel zone, and not from identical reflection points only. For the more complex reflections in deep layers, it is difficult to describe the relationship between the reflective interface and the traveltime. Extrapolation is used to eliminate caustics and simplify the traveltime expression.
Thus the image quality in the target is enhanced by the Fresnel zone stack. Based on the wave equation, the high-frequency ray solution and its character are given to clarify the theoretical foundation of the method. The hyperbolic and parabolic traveltimes of reflections in layered media are presented in matrix form using paraxial ray theory. Because the reflected wavefield comes mainly from the Fresnel zone, the concept of the Fresnel zone is explained, and matrix expressions for the Fresnel zone and the projected Fresnel zone are given in sequence. Using geometrical optics, the relationship between an object point in the model and an image point in image space is built for complex subsurfaces. The traveltime formula for a reflection point in nonuniform media is deduced, together with formulas for reflective segments of zero-offset and nonzero-offset sections. For convenient application, the subsurface interface models and curved surfaces derived from the conventional stack, DMO stack and prestack depth migration are analyzed, and the problems of these methods in their use of data are pointed out. An arc is proposed to describe the subsurface, thereby enlarging the amount of data stacked within the Fresnel zone. Based on the hyperbolic traveltime formula, the implementation steps and the workflow of the Fresnel zone stack are provided. Computations on three model datasets show that the Fresnel zone stack can effectively enhance signal energy and the signal-to-noise ratio. Field data from Xui Jia Wei Zhi, an area in the Daqing oilfield, were processed with this method. The results show that its ability to increase the S/N ratio, enhance the continuity of weak events, and confirm the deep configuration of volcanic reservoirs is better than that of other methods. In deeper target layers, caustics exist, caused by the complex overburden and large velocity variations, so the traveltime of reflections cannot be described exactly by the traveltime formula.
Extrapolation is brought forward to resolve these problems. Combining a phase-shift operator with a finite-difference operator, an extrapolation operator adaptable to lateral velocity variation is provided. With this method, seismic records are extrapolated from the surface to arbitrary depths below. Wave aberration and caustics caused by the inhomogeneous overburden are eliminated, and multi-valued traveltime curves are transformed into single-valued curves. Computation on the Marmousi model shows that the approach is feasible. Wavefield continuation thus extends the applicability of the Fresnel zone stack.
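The hyperbolic traveltime mentioned above reduces, for a single horizontal reflector in a constant-velocity layer, to the textbook moveout relation (a special case for orientation only, not the abstract's full paraxial matrix expression):

```latex
t^2(x) = t_0^2 + \frac{x^2}{v^2}
```

where \(t_0\) is the zero-offset two-way traveltime, \(x\) the source-receiver offset, and \(v\) the stacking (NMO) velocity; the matrix forms referred to in the abstract generalize this relation to layered, laterally varying media.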
Abstract:
As the commoditization of sensing, actuation and communication hardware increases, so does the potential for dynamically tasked sense and respond networked systems (i.e., Sensor Networks or SNs) to replace existing disjoint and inflexible special-purpose deployments (closed-circuit security video, anti-theft sensors, etc.). While various solutions have emerged to many individual SN-centric challenges (e.g., power management, communication protocols, role assignment), perhaps the largest remaining obstacle to widespread SN deployment is that those who wish to deploy, utilize, and maintain a programmable Sensor Network lack the programming and systems expertise to do so. The contributions of this thesis center on the design, development and deployment of the SN Workbench (snBench). snBench embodies an accessible, modular programming platform coupled with a flexible and extensible run-time system that, together, support the entire life-cycle of distributed sensory services. As it is impossible to find a one-size-fits-all programming interface, this work advocates the use of tiered layers of abstraction that enable a variety of high-level, domain specific languages to be compiled to a common (thin-waist) tasking language; this common tasking language is statically verified and can be subsequently re-translated, if needed, for execution on a wide variety of hardware platforms.
snBench provides: (1) a common sensory tasking language (Instruction Set Architecture) powerful enough to express complex SN services, yet simple enough to be executed by highly constrained resources with soft, real-time constraints, (2) a prototype high-level language (and corresponding compiler) to illustrate the utility of the common tasking language and the tiered programming approach in this domain, (3) an execution environment and a run-time support infrastructure that abstract a collection of heterogeneous resources into a single virtual Sensor Network, tasked via this common tasking language, and (4) novel formal methods (i.e., static analysis techniques) that verify safety properties and infer implicit resource constraints to facilitate resource allocation for new services. This thesis presents these components in detail, as well as two specific case-studies: the use of snBench to integrate physical and wireless network security, and the use of snBench as the foundation for semester-long student projects in a graduate-level Software Engineering course.
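The tiered approach above can be sketched as a toy pipeline: a high-level rule is compiled to a minimal common tasking language, statically verified, then interpreted. All instruction and function names below are invented for illustration; they are not snBench's actual ISA.

```python
# Illustrative sketch of tiered compilation: high-level rule -> tiny common
# tasking language -> static check -> interpretation. Opcodes are hypothetical.
def compile_rule(sensor, threshold, action):
    """'High-level language' -> common tasking program (list of instructions)."""
    return [('READ', sensor), ('CMP_GT', threshold), ('IF_TRUE', action)]

VALID_OPS = {'READ', 'CMP_GT', 'IF_TRUE'}

def verify(program):
    """Static check before deployment: only known opcodes are allowed."""
    return all(op in VALID_OPS for op, _ in program)

def execute(program, readings, actions):
    """A minimal interpreter for the common tasking language."""
    value = None
    for op, arg in program:
        if op == 'READ':
            value = readings[arg]
        elif op == 'CMP_GT':
            value = value > arg
        elif op == 'IF_TRUE' and value:
            actions.append(arg)

prog = compile_rule('temp', 30, 'raise_alarm')
assert verify(prog)
fired = []
execute(prog, {'temp': 35}, fired)
print(fired)  # ['raise_alarm']
```

The point of the thin waist is visible even at this scale: many front-end languages can target the small verified instruction set, and many back-ends need only interpret it.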
Abstract:
This paper presents innovative work in the development of policy-based autonomic computing. The core of the work is a powerful and flexible policy-expression language AGILE, which facilitates run-time adaptable policy configuration of autonomic systems. AGILE also serves as an integrating platform for other self-management technologies including signal processing, automated trend analysis and utility functions. Each of these technologies has specific advantages and applicability to different types of dynamic adaptation. The AGILE platform enables seamless interoperability of the different technologies to each perform various aspects of self-management within a single application. The various technologies are implemented as object components. Self-management behaviour is specified using the policy language semantics to bind the various components together as required. Since the policy semantics support run-time re-configuration, the self-management architecture is dynamically composable. Additional benefits include the standardisation of the application programmer interface, terminology and semantics, and only a single point of embedding is required.
Abstract:
In this paper, we present a methodology for automatically implementing a complete Digital Signal Processing (DSP) system onto a heterogeneous network including Field Programmable Gate Arrays (FPGAs). The methodology aims to allow design refinement and real-time verification at the system level. The DSP application is constructed in the form of a Data Flow Graph (DFG), which provides an entry point to the methodology. The netlist for parts that are mapped onto the FPGA(s), together with the corresponding software and hardware Application Programming Interface (API), are also generated. Using a set of case studies, we demonstrate that the design and development time can be significantly reduced using the methodology developed.
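The DFG entry point described above can be illustrated with a minimal sketch: operations as graph nodes, with evaluation pulling values through the edges. The node representation below is hypothetical, for illustration only; it is not the paper's actual toolflow.

```python
# A minimal data-flow graph (DFG) sketch: nodes are operations, edges carry
# sample values. Names and structure are invented for this illustration.
class Node:
    def __init__(self, name, op, inputs=()):
        self.name, self.op, self.inputs = name, op, list(inputs)

def evaluate(node, sources):
    """Recursively pull one sample through the graph."""
    if not node.inputs:          # source node: read an external input
        return sources[node.name]
    return node.op(*(evaluate(i, sources) for i in node.inputs))

# y = (a + b) * gain : a three-node graph for a trivial DSP kernel
a = Node('a', None)
b = Node('b', None)
add = Node('add', lambda x, y: x + y, [a, b])
mul = Node('gain', lambda x: 0.5 * x, [add])

print(evaluate(mul, {'a': 3.0, 'b': 1.0}))  # 2.0
```

In a toolflow like the one the abstract describes, each such node would be mapped either to software or to an FPGA netlist, with the generated API bridging the two sides.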
Abstract:
It is no news that the prevailing paradigm is based on the Internet, in which more and more applications change their business model with respect to licensing and maintenance, in order to offer the end user an application that is more affordable in terms of licensing and maintenance costs, since the applications are distributed, eliminating the capital and operational costs inherent to a centralized architecture. With the spread of Internet-based Application Programming Interfaces (APIs), programmers can now develop applications that use functionality made available by third parties, without having to program it from scratch. In this context, the APIs of Google® applications allow the distribution of applications to a very broad market and their integration with productivity tools, representing an opportunity for the diffusion of ideas and concepts. This work describes the process of designing and implementing a platform, using the HTML5, JavaScript, PHP and MySQL technologies with Google Apps® integration, with the goal of allowing the user to prepare budgets: from the calculation of composite cost prices and the preparation of sales prices to the elaboration of the bill of quantities and its schedule.
Abstract:
DBMODELING is a relational database of annotated comparative protein structure models and their metabolic pathway characterization. It is focused on enzymes identified in the genomes of Mycobacterium tuberculosis and Xylella fastidiosa. The main goal of the present database is to provide structural models to be used in docking simulations and drug design. However, since the accuracy of structural models is highly dependent on sequence identity between template and target, it is necessary to make clear to the user that only models which show high structural quality should be used in such efforts. Molecular modeling of these genomes generated a database in which all structural models were built using alignments presenting more than 30% sequence identity, generating models with medium and high accuracy. All models in the database are publicly accessible at http://www.biocristalografia.df.ibilce.unesp.br/tools. The DBMODELING user interface provides user-friendly menus, so that all information can be accessed in one stop from any web browser. Furthermore, DBMODELING also provides a docking interface, which allows the user to carry out geometric docking simulations against the molecular models available in the database. There are three other important homology model databases: MODBASE, SWISSMODEL, and GTOP. The main applications of these databases are described in the present article. © 2007 Bentham Science Publishers Ltd.
Abstract:
This paper describes the basic concepts necessary for Java programs to invoke libraries written in the C/C++ programming language through the JNA API. We used a library developed in C/C++ called Glass [8], which offers a solution for viewing 3D graphics using graphics clusters, reducing the cost of visualization. The purpose of the work is to make the humanoid, developed in Java, which performs the movements of the LIBRAS sign language for the deaf, interact with Glass, so that users can view the information in full-size stereoscopic multi-view. ©2010 IEEE.
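The JNA idea of calling C/C++ code from a managed language by declaring the native signature can be shown with a runnable analogue: Python's ctypes, which plays the same foreign-function role for Python that JNA plays for Java. This is an analogy sketch only, not the paper's Java code, and the Glass library is not used here.

```python
import ctypes
import ctypes.util

# Locate and load the standard C library. As with a JNA interface
# declaration, we describe the native signature and then call directly.
libc = ctypes.CDLL(ctypes.util.find_library('c'))

# size_t strlen(const char *s);
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

print(libc.strlen(b'LIBRAS'))  # 6
```

In JNA the equivalent step is declaring a Java interface extending `Library` and binding it with `Native.load`; in both cases the point is that no hand-written JNI/C glue code is needed.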
Abstract:
To support development tools like debuggers, runtime systems need to provide a meta-programming interface to alter their semantics and access internal data. Reflective capabilities are typically fixed by the Virtual Machine (VM). Unanticipated reflective features must either be simulated by complex program transformations, or they require the development of a specially tailored VM. We propose a novel approach to behavioral reflection that eliminates the barrier between applications and the VM by manipulating an explicit tower of first-class interpreters. Pinocchio is a proof-of-concept implementation of our approach which enables radical changes to the interpretation of programs by explicitly instantiating subclasses of the base interpreter. We illustrate the design of Pinocchio through non-trivial examples that extend runtime semantics to support debugging, parallel debugging, and back-in-time object-flow debugging. Although performance is not yet addressed, we also discuss numerous opportunities for optimization, which we believe will lead to a practical approach to behavioral reflection.
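The core idea, obtaining reflective behaviour by instantiating a subclass of the base interpreter rather than modifying a fixed VM, can be sketched in miniature. The toy interpreter and its expression format below are invented for this sketch; they are not Pinocchio's actual classes.

```python
# Behavioural reflection via interpreter subclassing: the 'debugger' is an
# ordinary subclass that overrides evaluation, not a change to a fixed VM.
class Interpreter:
    def eval(self, expr, env):
        if isinstance(expr, int):      # literal
            return expr
        if isinstance(expr, str):      # variable lookup
            return env[expr]
        op, left, right = expr         # binary operation tuple
        l, r = self.eval(left, env), self.eval(right, env)
        return l + r if op == '+' else l * r

class TracingInterpreter(Interpreter):
    """A minimal 'debugger' that records every evaluated sub-expression."""
    def __init__(self):
        self.trace = []
    def eval(self, expr, env):
        result = super().eval(expr, env)
        self.trace.append((expr, result))
        return result

interp = TracingInterpreter()
print(interp.eval(('+', 'x', ('*', 2, 3)), {'x': 1}))  # 7
print(len(interp.trace))  # one entry per evaluated sub-expression
```

Stacking such subclasses is a miniature of the explicit tower of first-class interpreters the abstract describes, where each level can redefine how the level below runs.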
Abstract:
Recent copyright cases on both sides of the Atlantic focused on important interoperability issues. While the decision by the Court of Justice of the European Union in SAS Institute Inc. v. World Programming Ltd. assessed data formats under the EU Software Directive, the ruling by the Northern District of California Court in Oracle America, Inc. v. Google Inc. dealt with application programming interfaces. The European decision is rightly celebrated as a further important step in the promotion of interoperability in the EU. This article argues that, despite appreciable signs of convergence across the Atlantic, the assessment of application programming interfaces under EU law could still turn out to be quite different, and arguably much less pro-interoperability, than under U.S. law.
Abstract:
On that date, the Spanish affiliate offered for the first time to its customers courses oriented toward the user, not the product, within the area of programming, in the subarea of application programming.