990 results for Call level interfaces


Relevance:

100.00%

Publisher:

Abstract:

Call Level Interfaces (CLI) play a key role in the business tiers of relational and some NoSQL database applications whenever fine-tuned control between application tiers and the host databases is a key requirement. Unfortunately, in spite of this significant advantage, CLI are low-level APIs and therefore do not address high-level architectural requirements. Among the examples we emphasize two situations: a) the need to decide whether or not to decouple the development process of business tiers from the development process of application tiers, and b) the need to automatically adapt business tiers to new business and/or security needs at runtime. To tackle these CLI drawbacks, while keeping their advantages, this paper proposes an architecture relying on CLI from which multi-purpose business tier components are built, herein referred to as Adaptable Business Tier Components (ABTC). Beyond the reference architecture, this paper presents a proof of concept based on Java and Java Database Connectivity (JDBC, an example of a CLI).
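
As a rough illustration of the kind of component the ABTC architecture envisages, the Java sketch below wraps a JDBC session behind a typed read operation and exposes a reconfiguration point that metadata could drive at runtime. Class, method and SQL names are hypothetical; only the JDBC calls are real API, and this is not the paper's actual implementation.

```java
// Minimal sketch of an ABTC-style business tier component over JDBC.
// Class, method and SQL names are illustrative assumptions.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class CustomerReaderComponent {
    private final Connection connection;   // the CLI session (JDBC)
    private volatile String selectSql;     // metadata that can be swapped at runtime (one '?' placeholder assumed)

    public CustomerReaderComponent(Connection connection, String initialSql) {
        this.connection = connection;
        this.selectSql = initialSql;
    }

    /** Runtime adaptation point: install a new SQL statement (e.g. reflecting
        new business or security rules) without recompiling the callers. */
    public void reconfigure(String newSql) {
        this.selectSql = newSql;
    }

    /** Typed, CLI-backed read operation exposed to application tiers. */
    public ResultSet readByCountry(String country) throws SQLException {
        PreparedStatement stmt = connection.prepareStatement(
                selectSql,
                ResultSet.TYPE_SCROLL_SENSITIVE,
                ResultSet.CONCUR_UPDATABLE);   // keeps JDBC's fine-grained access modes available
        stmt.setString(1, country);
        return stmt.executeQuery();
    }
}
```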

Relevance:

100.00%

Publisher:

Abstract:

To store, update and retrieve data from database management systems (DBMS), software architects use tools, like call-level interfaces (CLI), which provide standard functionalities to interact with DBMS. However, the emergence of the NoSQL paradigm, and in particular of new NoSQL DBMS providers, leads to situations where some of the standard functionalities provided by CLI are not supported, very often due to their distance from the relational model or due to design constraints. As such, when a system architect needs to evolve, namely from a relational DBMS to a NoSQL DBMS, they must overcome the difficulties conveyed by the features not provided by the NoSQL DBMS. Choosing the wrong NoSQL DBMS risks major issues with components requesting non-supported features. This paper focuses on how to deploy features that are not commonly supported by NoSQL DBMS (like Stored Procedures, Transactions, Save Points and interactions with local memory structures) by implementing them in standard CLI.
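
As a minimal sketch of the idea, the Java class below emulates Save Points and a simple commit on top of a key-value store that offers neither, by buffering writes in local memory structures behind a CLI-like facade. The in-memory map stands in for a real NoSQL client; all names and behaviour are illustrative assumptions, not the paper's implementation.

```java
// Sketch: client-side Save Points over a key-value store that lacks them.
// The HashMap 'store' is a stand-in for a real NoSQL client.
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

public class SavepointFacade {
    private final Map<String, String> store = new HashMap<>();     // stand-in for the NoSQL DBMS
    private final Map<String, String> pending = new HashMap<>();   // local memory structure with uncommitted writes
    private final Deque<Map<String, String>> savepoints = new ArrayDeque<>();

    public void put(String key, String value) {
        pending.put(key, value);
    }

    /** Emulated save point: snapshot the uncommitted writes. */
    public void setSavepoint() {
        savepoints.push(new HashMap<>(pending));
    }

    /** Roll the pending writes back to the most recent save point. */
    public void rollbackToSavepoint() {
        if (savepoints.isEmpty()) {
            throw new IllegalStateException("no save point set");
        }
        Map<String, String> snapshot = savepoints.pop();
        pending.clear();
        pending.putAll(snapshot);
    }

    /** Emulated commit: flush pending writes to the store in one step
        (atomicity against a remote, distributed store is not claimed here). */
    public void commit() {
        store.putAll(pending);
        pending.clear();
        savepoints.clear();
    }
}
```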

Relevance:

100.00%

Publisher:

Abstract:

Call Level Interfaces (CLI) are low-level APIs that play a key role in database applications whenever fine-tuned control between application tiers and the host databases is a key requirement. Unfortunately, in spite of this significant advantage, CLI were not designed to address organizational requirements and contextual runtime requirements. Among the examples we emphasize the need to decide whether or not to decouple the development process of business tiers from the development process of application tiers, and also the need to automatically adapt to new business and/or security needs at runtime. To tackle these CLI drawbacks, while keeping their advantages, this paper proposes an architecture relying on CLI from which multi-purpose business tier components are built, herein referred to as Adaptable Business Tier Components (ABTC). This paper presents the reference architecture for those components and a proof of concept based on Java and Java Database Connectivity (JDBC, an example of a CLI).
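
To illustrate the decoupling aspect, the sketch below shows application tiers depending only on a small interface while the concrete, CLI-backed ABTC behind it can be installed or replaced at runtime. The interface, holder class and method names are hypothetical, not the paper's API.

```java
// Sketch of decoupling application tiers from the ABTC implementation:
// callers depend only on the interface; the concrete, JDBC-backed component
// can be replaced at runtime. All names are illustrative.
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.concurrent.atomic.AtomicReference;

interface OrderBusinessTier {
    ResultSet ordersByCustomer(int customerId) throws SQLException;
}

/** Indirection point used by application tiers; the active component can be
    swapped when business or security rules change, without redeploying callers.
    Usage: OrderBusinessTierHolder.current().ordersByCustomer(42); */
class OrderBusinessTierHolder {
    private static final AtomicReference<OrderBusinessTier> ACTIVE = new AtomicReference<>();

    static void install(OrderBusinessTier component) {
        ACTIVE.set(component);
    }

    static OrderBusinessTier current() {
        return ACTIVE.get();
    }
}
```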

Relevance:

100.00%

Publisher:

Abstract:

Access control is a software engineering challenge in database applications. Currently, there is no satisfactory solution to dynamically implement evolving fine-grained access control mechanisms (FGACM) on the business tiers of relational database applications. To tackle this access control gap, we propose an architecture, herein referred to as Dynamic Access Control Architecture (DACA). DACA allows FGACM to be dynamically built and updated at runtime in accordance with the established fine-grained access control policies (FGACP). DACA explores and makes use of Call Level Interface (CLI) features to implement FGACM on business tiers. Among these features, we emphasize their performance and their multiple modes of access to data residing in relational databases. The different access modes of CLI are wrapped by typed objects driven by FGACM, which are built and updated at runtime. Programmers no longer use the traditional access modes of CLI directly and instead use the ones dynamically implemented and updated. DACA comprises three main components: the Policy Server (a repository of metadata for FGACM), the Dynamic Access Control Component (DACC, the business tier component responsible for implementing FGACM) and the Policy Manager (a broker between the DACC and the Policy Server). Unlike current approaches, DACA does not depend on any particular access control model or access control policy, thereby promoting its applicability to a wide range of different situations. In order to validate DACA, a solution based on Java, Java Database Connectivity (JDBC) and SQL Server was devised and implemented. Two evaluations were carried out: the first evaluates DACA's capability to implement and update FGACM dynamically, at runtime, and the second assesses DACA's performance against a standard use of JDBC without any FGACM. The collected results show that DACA is an effective approach for implementing evolving FGACM on business tiers based on Call Level Interfaces, in this case JDBC.
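
As a simplified sketch of a DACC-style typed object, the Java class below exposes a read operation whose accessible columns are dictated by policy metadata supplied at runtime; requests outside the active policy are rejected before any CLI call is issued. The class name, table and policy representation are assumptions for illustration, not DACA's actual implementation.

```java
// Sketch: a policy-driven typed object wrapping JDBC read access.
// The allowed-column set would be delivered by a policy manager at runtime.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.Set;

public class EmployeeReader {
    private final Connection connection;
    private final Set<String> allowedColumns;   // fine-grained policy metadata, updatable at runtime

    public EmployeeReader(Connection connection, Set<String> allowedColumns) {
        this.connection = connection;
        this.allowedColumns = allowedColumns;
    }

    /** Read access is granted column by column; a column outside the active
        policy raises an error instead of silently leaking data. */
    public ResultSet read(String column, int employeeId) throws SQLException {
        if (!allowedColumns.contains(column)) {
            throw new SecurityException("Column not permitted by the active policy: " + column);
        }
        // The whitelist check above is what keeps this concatenation safe.
        PreparedStatement stmt = connection.prepareStatement(
                "SELECT " + column + " FROM employee WHERE id = ?");
        stmt.setInt(1, employeeId);
        return stmt.executeQuery();
    }
}
```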

Relevance:

80.00%

Publisher:

Abstract:

Science gateways can provide access to distributed computing resources and applications at very different levels of granularity. Some gateways do not even hide the details of the underlying infrastructure, while at the other end of the spectrum some provide completely customized high-level interfaces to end users. In this chapter the different granularity levels at which science gateways can be developed with WS-PGRADE/gUSE are analysed. The differences between these various granularity levels are also illustrated via the example of a molecular docking gateway and its four different implementations.

Relevance:

80.00%

Publisher:

Abstract:

We propose adding a temporal dimension to stakeholder management theory, and assess the implications thereof for firm-level competitive advantage. We argue that a firm's competitive advantage fundamentally depends on its capacity for stakeholder-management-related, transformational adaptation over time. Our new temporal stakeholder management approach builds upon insights from both the resource-based view (RBV) in strategic management and institutional theory. Stakeholder agendas and their relative salience to the firm evolve over time, a phenomenon well understood in the literature, and requiring what we call level 1 adaptation. However, the dominant direction of stakeholder pressures can also change, namely, from supporting resource heterogeneity at the firm level to fostering industry homogeneity, and vice versa. When dominant stakeholder pressures shift from supporting heterogeneity towards stimulating homogeneity in the industry, the firm must engage in level 2, or transformational, adaptation. Stakeholders typically provide valuable resources to the firm at an early stage. Without these resources, which foster heterogeneity (in line with RBV thinking), the firm would not exist. At a later stage, stakeholders also contribute to inter-firm homogeneity via isomorphism pressures (in line with institutional theory thinking). Adding a temporal dimension to stakeholder management theory has far-reaching implications for this theory's practical relevance to senior-level management in business.

Relevance:

80.00%

Publisher:

Abstract:

Space-charge-limited current measurements have been carried out on undoped amorphous poly(p-phenylene sulfide). The scaling law is checked for different samples of varying thickness, and the J-V data are analyzed. The position of the quasi-Fermi level and the density of states were obtained.
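
For background, the thickness-scaling check usually applied to space-charge-limited currents is that J/L is a universal function of V/L^2, which in the trap-free case reduces to the Mott-Gurney law (with ε the permittivity, μ the carrier mobility, V the applied voltage and L the sample thickness); this is standard SCLC theory, stated here as context rather than as the paper's own formulation:

```latex
% Trap-free SCLC (Mott-Gurney) law and the general thickness-scaling form
J = \frac{9}{8}\,\varepsilon\,\mu\,\frac{V^{2}}{L^{3}},
\qquad\text{and, more generally,}\qquad
\frac{J}{L} = f\!\left(\frac{V}{L^{2}}\right).
```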

Relevance:

80.00%

Publisher:

Abstract:

B-ISDN is a universal network which supports diverse mixes of services, applications and traffic. ATM has been accepted world-wide as the transport technique for future use in B-ISDN. ATM, being a simple packet-oriented transfer technique, provides a flexible means for supporting a continuum of transport rates and is efficient due to the possible statistical sharing of network resources by multiple users. In order to fully exploit the potential statistical gain, while at the same time supporting diverse service and traffic mixes, an efficient traffic control must be designed. Traffic controls, which include congestion and flow control, are a fundamental necessity for the success and viability of future B-ISDN. Congestion and flow control are difficult in the broadband environment due to the high-speed links, the wide-area distances, diverse service requirements and diverse traffic characteristics. Most congestion and flow control approaches in conventional packet-switched networks are reactive in nature and are not applicable in the B-ISDN environment. In this research, traffic control procedures mainly based on preventive measures for a private ATM-based network are proposed and their performance evaluated. The various traffic controls include CAC, traffic flow enforcement, priority control and an explicit feedback mechanism. These functions operate at the call level and the cell level. They are carried out distributively by the end terminals, the network access points and the internal elements of the network. During the connection set-up phase, the CAC decides the acceptance or denial of a connection request and allocates bandwidth to the new connection according to three schemes: peak bit rate, statistical rate and average bit rate. The statistical multiplexing rate is based on a "bufferless fluid flow model" which is simple and robust. The allocation of an average bit rate to data traffic, at the expense of delay, improves the network bandwidth utilisation.
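
The admission decision can be sketched as follows. The Java snippet below contrasts the three allocation schemes; the statistical test is a simple Gaussian approximation for aggregated on-off sources, used as a stand-in for the bufferless fluid flow model rather than a reproduction of it, and all names and parameters are illustrative.

```java
// Sketch of a connection admission control (CAC) decision with three
// bandwidth allocation schemes: peak rate, average rate, and a simplified
// statistical rate (Gaussian approximation of aggregated on-off sources).
import java.util.List;

public class AdmissionController {
    private final double linkCapacityMbps;

    public AdmissionController(double linkCapacityMbps) {
        this.linkCapacityMbps = linkCapacityMbps;
    }

    record Source(double peakMbps, double meanMbps) {}   // peakMbps assumed > 0

    /** Peak-rate allocation: lossless but wastes bandwidth for bursty traffic. */
    boolean admitByPeak(List<Source> admitted, Source candidate) {
        double sum = candidate.peakMbps();
        for (Source s : admitted) sum += s.peakMbps();
        return sum <= linkCapacityMbps;
    }

    /** Average-rate allocation: maximises utilisation at the expense of delay. */
    boolean admitByMean(List<Source> admitted, Source candidate) {
        double sum = candidate.meanMbps();
        for (Source s : admitted) sum += s.meanMbps();
        return sum <= linkCapacityMbps;
    }

    /** Simplified statistical allocation: admit if the aggregate mean rate plus a
        margin of a few standard deviations stays below the link capacity. */
    boolean admitStatistically(List<Source> admitted, Source candidate, double margin) {
        double mean = 0.0, variance = 0.0;
        for (Source s : admitted) {
            mean += s.meanMbps();
            double p = s.meanMbps() / s.peakMbps();          // on-off activity probability
            variance += p * (1 - p) * s.peakMbps() * s.peakMbps();
        }
        double pC = candidate.meanMbps() / candidate.peakMbps();
        mean += candidate.meanMbps();
        variance += pC * (1 - pC) * candidate.peakMbps() * candidate.peakMbps();
        return mean + margin * Math.sqrt(variance) <= linkCapacityMbps;
    }
}
```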

Relevance:

40.00%

Publisher:

Abstract:

Network Interfaces (NIs) are used in Multiprocessor Systems-on-Chip (MPSoCs) to connect CPUs to a packet-switched Network-on-Chip. In this work we introduce a new NI architecture for our hierarchical CoreVA-MPSoC. The CoreVA-MPSoC targets streaming applications in embedded systems. The main contribution of this paper is a system-level analysis of different NI configurations, considering both software and hardware costs for NoC communication. Different configurations of the NI are compared using a benchmark suite of 10 streaming applications. The best-performing NI configuration shows an average speedup of 20 for a CoreVA-MPSoC with 32 CPUs compared to a single CPU. Furthermore, we present physical implementation results using a 28 nm FD-SOI standard cell technology. A hierarchical MPSoC with 8 CPU clusters and 4 CPUs in each cluster running at 800 MHz requires an area of 4.56 mm².

Relevance:

40.00%

Publisher:

Abstract:

Call centres have emerged during a time of rapid technological change and represent a form of ready employment for those seeking to replace or supplement "traditional" forms of employment. Call centre work is considered characteristic of the kinds of service work available in the new economy. This paper examines the experiences and practices of lower level managers in a call centre in southern Ontario. Findings are based on analysis of semi-structured interviews. The findings suggest that lower level managers resolve the contradictory social space they occupy by aligning themselves primarily with more powerful executives, in part because they know this might lead to increased job security. The implications of this trend for building a strong labour movement capable of combating neoliberal discourses regarding the need for work restructuring are discussed.

Relevance:

40.00%

Publisher:

Abstract:

The clean development mechanism (CDM) has been through a long and complex growing process since it was approved as part of the Kyoto Protocol. It was designed within the framework of the UNFCCC and the Kyoto Protocol, and reflected the political and economic realities of that time. To ensure its continued effectiveness in contributing to future global climate action and to reflect on how best to position the CDM to respond to future challenges, a high-level panel (HLP) was formed at the Durban climate change conference in 2011. Following extensive consultations, the panel published its report in September 2012. Through this Special Report, the CEPS Carbon Market Forum offers its reflections on the findings and recommendations of the HLP, as well as, by extension, its own views on the future of the CDM. In the context of the latter, it explores the following questions: Is there a need for an instrument such as the CDM in the future? What 'demand' can it fill? In the roles identified under the first question, what can be done to adapt it and also to continue to increase its efficacy?

Relevance:

40.00%

Publisher:

Abstract:

Objective: To identify any association between the response priority code generated during calls to the ambulance communication centre and patient reports of pain severity.

Methods: A retrospective analysis of patient care records was undertaken for all patients transported by paramedics over a 7-day period. The primary research interest was the association between the response code allocated at the time of telephone triage and the initial pain severity score recorded using a numeric rating scale (NRS). Univariate and multivariate logistic regression methods were used to analyse the association between the response priority variable and explanatory variables.

Results: There were 1246 cases in which both an initial pain score using the NRS and a response code were recorded. Of these cases, 716/1246 (57.5%) were associated with a code 1 ("time-critical") response. After adjusting for gender, age, cause of pain and duration of pain, a multivariate logistic regression analysis found no significant change in the odds of a patient in pain receiving a time-critical response compared with patients who had no pain, regardless of their initial pain score (NRS 1–3, odds ratio (OR) 1.11, 95% CI 0.7 to 1.8; NRS 4–7, OR 1.12, 95% CI 0.7 to 1.8; NRS 8–10, OR 0.84, 95% CI 0.5 to 1.4).

Conclusion: The severity of pain experienced by the patient appeared to have no influence on the priority (urgency) of the dispatch response. Triage systems used to prioritise ambulance calls and decide the urgency of response or type of referral options should consider pain severity to facilitate timely and humane care.

Relevance:

30.00%

Publisher:

Abstract:

There are many reasons why interface design for interactive courseware fails to support the quality of learning experiences. The most commonly acknowledged causes are the level of interactivity, the availability of interfaces for end users to interact with, and a lack of deep knowledge among designers about the role of interface design in the development process. As the creators of interactive courseware, developers generally expect the resources they produce to be effective, accurate and robust. However, developers rarely have the opportunity to create good interfaces, given the demands on time, money and skill, and the challenges they face in developing interface designs cannot be underestimated. Their perspective on interactive courseware is therefore important to ensure that both the material and the features of the courseware can facilitate teaching and learning activities. With this context in mind, this paper highlights the challenges faced by Malaysian developers, based on data gathered from ten face-to-face interviews. It discusses the perspectives of Malaysian developers involved in developing interface designs for interactive courseware for the Smart School Project. In particular, the challenges of creating good interfaces are presented within the constraints of time, curriculum demands, and the competencies of the development team.

Relevance:

30.00%

Publisher:

Abstract:

Many software applications extend their functionality by dynamically loading executable components into their allocated address space. Such components, exemplified by browser plugins and other software add-ons, not only enable reusability, but also promote programming simplicity, as they reside in the same address space as their host application, supporting easy sharing of complex data structures and pointers. However, such components are also often of unknown provenance and quality and may be riddled with accidental bugs or, in some cases, deliberately malicious code. Statistics show that such component failures account for a high percentage of software crashes and vulnerabilities. Enabling isolation of such fine-grained components is therefore necessary to increase the stability, security and resilience of computer programs. This thesis addresses this issue by showing how host applications can create isolation domains for individual components, while preserving the benefits of a single address space, via a new architecture for software isolation called LibVM. Towards this end, we define a specification which outlines the functional requirements for LibVM, identify the conditions under which these functional requirements can be met, define an abstract Application Programming Interface (API) that encompasses the general problem of isolating shared libraries, thus separating policy from mechanism, and prove its practicality with two concrete implementations based on hardware virtualization and system call interpositioning, respectively. The results demonstrate that hardware isolation minimises the difficulties encountered with software based approaches, while also reducing the size of the trusted computing base, thus increasing confidence in the solution’s correctness. This thesis concludes that, not only is it feasible to create such isolation domains for individual components, but that it should also be a fundamental operating system supported abstraction, which would lead to more stable and secure applications.
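
To give a feel for the shape of such an abstract API, the Java sketch below models the policy-independent operations an isolation domain might expose: load an untrusted component, invoke an entry point across the boundary, and tear the domain down even after a fault. The names and signatures are hypothetical; the actual LibVM API and its native mechanisms (hardware virtualization, system call interpositioning) are not reproduced here.

```java
// Hypothetical sketch of an isolation-domain API separating policy from mechanism.
// None of these types belong to LibVM itself; they only illustrate the idea.
interface IsolationDomain extends AutoCloseable {
    /** Load an untrusted shared component into the domain. */
    ComponentHandle load(String componentPath) throws IsolationException;

    /** Invoke an exported entry point; arguments are copied across the boundary. */
    byte[] call(ComponentHandle component, String entryPoint, byte[] marshalledArgs)
            throws IsolationException;

    /** Tear down the domain, releasing its resources even if the component crashed. */
    @Override
    void close();
}

/** Opaque reference to a component loaded inside a domain. */
interface ComponentHandle {}

/** Raised when the component faults or violates the domain's policy. */
class IsolationException extends Exception {
    public IsolationException(String message) {
        super(message);
    }
}
```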