880 results for Functional Requirements for Authority Data (FRAD)
Abstract:
Specifying an application's non-functional requirements and determining the resources required for its execution are activities that demand a great deal of technical knowledge, and they frequently result in inefficient use of resources. Cloud computing is an alternative for resource provisioning, which can be done using the provider's own infrastructure, the infrastructure of one or more public clouds, or a combination of both. It enables more flexible/elastic use of resources, but does not solve the specification problem. In this paper we present an approach that uses models at runtime to facilitate the specification of non-functional requirements and resources, aiming to provide dynamic support for application execution in cloud computing environments with shared resources. © 2013 IEEE.
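The idea of a runtime model that links a declared non-functional requirement to a resource allocation might be sketched as follows. This is a minimal illustration, not the paper's method: the class name, the single response-time requirement, and the naive scaling policy are all invented for this sketch.

```python
from dataclasses import dataclass

@dataclass
class NfrModel:
    """Hypothetical runtime model tying one NFR to one resource knob."""
    max_response_ms: int      # declared non-functional requirement
    cpu_cores: int = 1        # current resource allocation

    def adapt(self, observed_ms: float) -> None:
        # Naive policy: scale up while the requirement is violated,
        # scale down when there is ample headroom.
        if observed_ms > self.max_response_ms:
            self.cpu_cores += 1
        elif observed_ms < self.max_response_ms * 0.5 and self.cpu_cores > 1:
            self.cpu_cores -= 1

model = NfrModel(max_response_ms=200)
model.adapt(observed_ms=350)   # violation -> scale up
model.adapt(observed_ms=320)   # still violated -> scale up again
model.adapt(observed_ms=80)    # headroom -> scale down
print(model.cpu_cores)
```

The point of keeping the model alive at runtime, rather than fixing the allocation at deployment time, is that the requirement stays declarative while the resource decision reacts to observed behaviour.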
Abstract:
As a functioning performing arts centre, commercial enterprise, tourist attraction and major national asset, Sydney Opera House must continue to demonstrate the optimal use and effectiveness of its facilities management (FM) to provide value for its stakeholders. To better achieve this, the Cooperative Research Centre for Construction Innovation focussed on the following three themes for investigation in the FM Exemplar Project — Sydney Opera House:
1. Digital modelling: developing a building information model capable of integrating information from disparate software systems and hard copy, and combining this with a spatial 3D computer-aided design (CAD)/geographic information system (GIS) platform. This model offers a visual representation of the building and its component elements in 3D, and provides comprehensive information on each element. The model can work collaboratively through an open data exchange standard (common to all compliant software) in order to mine the data required to further FM objectives (such as maintenance) more efficiently and effectively.
2. Services procurement: developing a multi-criteria, performance-based procurement framework aligned with organisational objectives for FM service delivery.
3. Performance benchmarking: developing an FM benchmarking framework that enables facilities/organisations to develop key performance indicators (KPIs) to identify better practice and improvement strategies.
These three research stream outcomes were then aligned within the broader context of Sydney Opera House's Total Asset Management (TAM) Plan and Strategic Asset Maintenance (SAM) Plan, arriving at a business framework aligned with, and in support of, organisational objectives. The Sydney Opera House is managed by the Sydney Opera House Trust on behalf of the Government of the State of New South Wales.
Within the framework of the TAM Plan prepared in accordance with NSW Treasury Guidelines, the assimilation of these three themes provides an integrated FM solution capable of supporting Sydney Opera House's business objectives and functional requirements. FM as a business enabler showcases innovative methods for improving FM performance, better aligns service and performance objectives, and provides a better-practice model to support the business enterprise.
Abstract:
Service compositions enable users to realize their complex needs as a single request. Despite intensive research, especially in the areas of business processes, web services and grids, an open and valid question is still how to manage service compositions in order to satisfy both functional and non-functional requirements and to adapt to dynamic changes. In this paper we propose a (functional) architecture for adaptive management of QoS-aware service compositions. Compared to other existing architectures, this one offers two major advantages. Firstly, it supports various execution strategies based on dynamic selection and negotiation of services included in a service composition, contracting based on service level agreements, service enactment with flexible support for exception handling, monitoring of service level objectives, and profiling of execution data. Secondly, the architecture is built on the basis of well-known existing standards for communicating and exchanging data, which significantly reduces the effort to integrate existing solutions and tools from different vendors. A first prototype of this architecture has been implemented within the EU-funded Adaptive Service Grid project. © 2006 Springer-Verlag.
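The dynamic-selection step described above can be illustrated with a small sketch: from a set of candidate services, pick the cheapest one whose advertised quality metrics satisfy the service-level objectives of a composition step. The service names, metrics, and selection policy are invented for illustration and are not taken from the paper.

```python
# Candidate services advertising QoS metrics (illustrative values).
candidates = [
    {"name": "svc-a", "latency_ms": 120, "availability": 0.995, "cost": 3.0},
    {"name": "svc-b", "latency_ms": 80,  "availability": 0.999, "cost": 5.0},
    {"name": "svc-c", "latency_ms": 60,  "availability": 0.990, "cost": 2.0},
]

# Service-level objectives for this composition step.
slo = {"max_latency_ms": 100, "min_availability": 0.995}

def select(candidates, slo):
    # Keep only candidates that meet every objective, then minimise cost.
    feasible = [c for c in candidates
                if c["latency_ms"] <= slo["max_latency_ms"]
                and c["availability"] >= slo["min_availability"]]
    if not feasible:
        # In an adaptive architecture this would trigger renegotiation.
        raise RuntimeError("no candidate meets the SLOs")
    return min(feasible, key=lambda c: c["cost"])

print(select(candidates, slo)["name"])
```

In a full architecture this selection would be followed by SLA contracting, enactment with exception handling, and monitoring of the agreed objectives, as the abstract describes.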
Abstract:
Scientific dissertation submitted for the degree of Master in Civil Engineering, specialisation area of Buildings.
Abstract:
In metazoans, bone morphogenetic proteins (BMPs) direct a myriad of developmental and adult homeostatic events through their heterotetrameric type I and type II receptor complexes. We examined 3 existing and 12 newly generated mutations in the Drosophila type I receptor gene, saxophone (sax), the ortholog of the human Activin Receptor-Like Kinase-1 and -2 (ALK1/ACVRL1 and ALK2/ACVR1) genes. Our genetic analyses identified two distinct classes of sax alleles. The first class consists of homozygous viable gain-of-function (GOF) alleles that exhibit (1) synthetic lethality in combination with mutations in BMP pathway components, and (2) significant maternal-effect lethality that can be rescued by an increased dosage of the BMP-encoding gene, dpp(+). In contrast, the second class consists of alleles that are recessive lethal and do not exhibit lethality in combination with mutations in other BMP pathway components. The alleles in this second class are clearly loss-of-function (LOF), with both complete and partial loss-of-function mutations represented. We find that one allele in the second class of recessive lethals exhibits dominant-negative behavior, albeit distinct from the GOF activity of the first class of viable alleles. On the basis of the fact that the first class of viable alleles can be reverted to lethality, and on our ability to independently generate recessive lethal sax mutations, our analysis demonstrates that sax is an essential gene. Consistent with this conclusion, we find that a normal sax transcript is produced by sax(P), a viable allele previously reported to be null, and that this allele can be reverted to lethality. Interestingly, we determine that two mutations in the first class of sax alleles show the same amino acid substitutions as mutations in the human receptors ALK1/ACVRL1 and ALK2/ACVR1, responsible for cases of hereditary hemorrhagic telangiectasia type 2 (HHT2) and fibrodysplasia ossificans progressiva (FOP), respectively.
Finally, the data presented here identify different functional requirements for the Sax receptor, support the proposal that Sax participates in a heteromeric receptor complex, and provide a mechanistic framework for future investigations into disease states that arise from defects in BMP/TGF-beta signaling.
Abstract:
Background The study and analysis of gene expression measurements is the primary focus of functional genomics. Once expression data is available, biologists are faced with the task of extracting (new) knowledge associated with the underlying biological phenomenon. Most often, in order to perform this task, biologists execute a number of analysis activities on the available gene expression dataset rather than a single analysis activity. The integration of heterogeneous tools and data sources to create an integrated analysis environment represents a challenging and error-prone task. Semantic integration enables the assignment of unambiguous meanings to data shared among different applications in an integrated environment, allowing the exchange of data in a semantically consistent and meaningful way. This work aims at developing an ontology-based methodology for the semantic integration of gene expression analysis tools and data sources. The proposed methodology relies on software connectors to support not only the access to heterogeneous data sources but also the definition of transformation rules on exchanged data. Results We have studied the different challenges involved in the integration of computer systems and the role software connectors play in this task. We have also studied a number of gene expression technologies, analysis tools and related ontologies in order to devise basic integration scenarios and propose a reference ontology for the gene expression domain. Then, we have defined a number of activities and associated guidelines to prescribe how the development of connectors should be carried out. Finally, we have applied the proposed methodology in the construction of three different integration scenarios involving the use of different tools for the analysis of different types of gene expression data.
Conclusions The proposed methodology facilitates the development of connectors capable of semantically integrating different gene expression analysis tools and data sources. The methodology can be used in the development of connectors supporting both simple and non-trivial processing requirements, thus ensuring accurate data exchange and correct interpretation of exchanged data.
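The connector-with-transformation-rules idea can be sketched in a few lines: a connector reads a record in one tool's vocabulary and rewrites it into a shared reference-ontology vocabulary via declarative mapping rules. The field names and ontology terms below are invented for illustration; they are not the paper's reference ontology.

```python
# Declarative transformation rules: source field -> reference-ontology term.
RULES = {
    "probe_id": "gene:identifier",
    "signal":   "expression:level",
    "chip":     "platform:name",
}

def connect(record: dict) -> dict:
    # Apply each mapping rule; unmapped fields are dropped, so only
    # semantically agreed-upon terms cross the integration boundary.
    return {RULES[k]: v for k, v in record.items() if k in RULES}

out = connect({"probe_id": "AFFX-1001", "signal": 7.2, "vendor_flag": "P"})
print(out)
```

A real connector would also validate values and handle units, but the essential design choice shown here is that the mapping is data (rules), not code, so new tools can be integrated by adding rules rather than rewriting the connector.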
Abstract:
Real-Time Kinematic (RTK) positioning is a technique used to provide precise positioning services at centimetre accuracy level in the context of Global Navigation Satellite Systems (GNSS). While a Network-based RTK (NRTK) system involves multiple continuously operating reference stations (CORS), the simplest form of an NRTK system is a single-base RTK. In Australia there are several NRTK services operating in different states and over 1000 single-base RTK systems to support precise positioning applications for surveying, mining, agriculture, and civil construction in regional areas. Additionally, future-generation GNSS constellations, including modernised GPS, Galileo, GLONASS, and Compass, with multiple frequencies, have either been developed or will become fully operational in the next decade. A trend in the future development of RTK systems is to make use of the various isolated operating network and single-base RTK systems and multiple GNSS constellations for extended service coverage and improved performance. Several computational challenges have been identified for future NRTK services, including:
• Multiple GNSS constellations and multiple frequencies
• Large-scale, wide-area NRTK services with a network of networks
• Complex computation algorithms and processes
• A greater part of positioning processes shifting from the user end to the network centre, with the ability to cope with hundreds of simultaneous users' requests (reverse RTK)
These four challenges translate into two major requirements for NRTK data processing: expandable computing power and scalable data sharing/transferring capability. This research explores new approaches to address these future NRTK challenges and requirements using the Grid Computing facility, in particular for large data processing burdens and complex computation algorithms.
A Grid Computing based NRTK framework is proposed in this research: a layered framework consisting of 1) a client layer in the form of a Grid portal; 2) a service layer; and 3) an execution layer. The user's request is passed through these layers and scheduled to different Grid nodes in the network infrastructure. A proof-of-concept demonstration of the proposed framework was performed in a five-node Grid environment at QUT and also on Grid Australia. The Networked Transport of RTCM via Internet Protocol (Ntrip) open source software was adopted to download real-time RTCM data from multiple reference stations through the Internet, followed by job scheduling and simplified RTK computing. The system performance has been analysed, and the results have preliminarily demonstrated the concepts and functionality of the new NRTK framework based on Grid Computing, whilst some aspects of the system's performance are yet to be improved in future work.
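The scheduling step of the execution layer can be sketched with a toy dispatcher: one processing job per reference station, distributed round-robin across available Grid nodes. Node and station names are placeholders, and the round-robin policy is an assumption for illustration, not the scheduler used in the demonstration.

```python
from itertools import cycle

# Hypothetical Grid execution nodes and reference-station jobs.
nodes = ["node-1", "node-2", "node-3"]
stations = ["STN-A", "STN-B", "STN-C", "STN-D", "STN-E"]

# Round-robin dispatch: each station's RTK processing job goes to the
# next node in rotation, spreading load across the Grid.
assignment = {}
node_cycle = cycle(nodes)
for station in stations:
    assignment[station] = next(node_cycle)

print(assignment)
```

A production scheduler would weigh node load and data locality (which node already holds a station's RTCM stream), but the sketch shows the basic shape of mapping many station jobs onto few nodes.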
Abstract:
Purpose: This paper aims to show that identification of expectations and software functional requirements via consultation with potential users is an integral component of the development of an emergency department patient admissions prediction tool. ---------- Design/methodology/approach: Thematic analysis of semi-structured interviews with 14 key health staff delivered rich data regarding existing practice and future needs. Participants included emergency department staff, bed managers, nurse unit managers, directors of nursing, and personnel from health administration. ---------- Findings: Participants contributed contextual insights on the current system of admissions, revealing a culture of crisis, imbued with misplaced communication. Their expectations and requirements of a potential predictive tool provided strategic data that moderated the development of the Emergency Department Patient Admissions Prediction Tool, based on their insistence that it feature availability, reliability and relevance. In order to deliver these stipulations, participants stressed that it should be incorporated, validated, defined and timely. ---------- Research limitations/implications: Participants were envisaging a concept and use of a tool that was somewhat hypothetical. However, further research will evaluate the tool in practice. ---------- Practical implications: Participants' unsolicited recommendations regarding implementation will not only inform a subsequent phase of the tool evaluation, but are eminently applicable to any process of implementation in a healthcare setting. ---------- Originality/value: The consultative process engaged clinicians, and the paper delivers an insider view of an overburdened system, rather than an outsider's observations.
Abstract:
During the last decade, globalisation and liberalisation of financial markets, changing societal expectations and corporate governance scandals have increased attention to the fiduciary duties of non-executive directors. In this context, recent corporate governance reform initiatives have emphasised the control task and independence of non-executive directors. However, little attention has been paid to their impact on the external and internal service tasks of non-executive directors. Therefore, this paper investigates how the service tasks of non-executive directors have evolved in the Netherlands. Data on corporate governance at the top-100 listed companies in the Netherlands between 1997 and 2005 show that the emphasis on non-executive directors' external service task has shifted to their internal service task, i.e. from non-executive directors acting as boundary spanners to non-executive directors providing advice and counselling to executive directors. This shift in board responsibilities affects non-executive directors' ability to generate network benefits through board relationships and has implications for non-executive directors' functional requirements.
Abstract:
The study shows an alternative solution to existing efforts at solving the problem of how to centrally manage and synchronise users' Multiple Profiles (MP) across multiple discrete social networks. Most social network users hold more than one social network account and utilise them in different ways depending on the digital context (Iannella, 2009a). They may, for example, enjoy friendly chat on Facebook, professional discussion on LinkedIn, and health information exchange on PatientsLikeMe. Therefore many web users need to manage disparate profiles across many distributed online sources; maintaining these profiles is cumbersome, time-consuming, inefficient, and may lead to lost opportunity. In this thesis the researcher proposes a framework for the management of a user's multiple online social network profiles. A demonstrator, called Multiple Profile Manager (MPM), is showcased to illustrate how effective the framework is. The MPM achieves the required profile management and synchronisation using a free, open, decentralised social networking platform (OSW) that was proposed by the Vodafone Group in 2010. The proposed MPM enables a user to create and manage an integrated profile (IP) and share/synchronise this profile with all their social networks. The necessary protocols to support the prototype are also proposed by the researcher. The MPM protocol specification defines an Extensible Messaging and Presence Protocol (XMPP) extension for sharing vCard and social network account information between the MPM Server, MPM Client, and social network sites (SNSs). The writer of this thesis adopted a research approach and a number of use cases for the implementation of the project. The use cases were created to capture the functional requirements of the MPM and to describe the interactions between users and the MPM. In the research a development process was followed in establishing the prototype and related protocols.
The use cases were subsequently used to illustrate the prototype via the screenshots taken of the MPM client interfaces. The use cases also played a role in evaluating the outcomes of the research such as the framework, prototype, and the related protocols. An innovative application of this project is in the area of public health informatics. The researcher utilised the prototype to examine how the framework might benefit patients and physicians. The framework can greatly enhance health information management for patients and more importantly offer a more comprehensive personal health overview of patients to physicians. This will give a more complete picture of the patient’s background than is currently available and will prove helpful in providing the right treatment. The MPM prototype and related protocols have a high application value as they can be integrated into the real OSW platform and so serve users in the modern digital world. They also provide online users with a real platform for centrally storing their complete profile data, efficiently managing their personal information, and moreover, synchronising the overall complete profile with each of their discrete profiles stored in their different social network sites.
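The integrated-profile idea can be sketched as a projection: one canonical profile is mapped onto each social network according to a per-site field list, so an update made centrally propagates consistently while sensitive fields stay where they belong. The site names, fields, and whitelist mechanism are invented for illustration; they are not the MPM protocol itself.

```python
# One canonical integrated profile (illustrative fields).
integrated = {"name": "A. User", "email": "a@example.org", "condition": "asthma"}

# Per-site visibility policy: which integrated-profile fields each
# social network profile is allowed to carry.
visibility = {
    "professional_site": ["name", "email"],
    "health_site":       ["name", "condition"],
}

def project(profile: dict, fields: list) -> dict:
    # Project the integrated profile onto one site's allowed fields.
    return {k: profile[k] for k in fields if k in profile}

synced = {site: project(integrated, fields) for site, fields in visibility.items()}
print(synced)
```

Synchronisation then reduces to re-running the projection whenever the integrated profile changes, which is the benefit the abstract describes: one central edit, many consistent discrete profiles.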
Abstract:
New substation automation applications, such as sampled value process buses and synchrophasors, require sampling accuracy of 1 µs or better. The Precision Time Protocol (PTP), IEEE Std 1588, achieves this level of performance and integrates well into Ethernet based substation networks. This paper takes a systematic approach to the performance evaluation of commercially available PTP devices (grandmaster, slave, transparent and boundary clocks) from a variety of manufacturers. The "error budget" is set by the performance requirements of each application. The "expenditure" of this error budget by each component is valuable information for a system designer. The component information is used to design a synchronization system that meets the overall functional requirements. The quantitative performance data presented shows that this testing is effective and informative. Results from testing PTP performance in the presence of sampled value process bus traffic demonstrate the benefit of a "bottom up" component testing approach combined with "top down" system verification tests. A test method that uses a precision Ethernet capture card, rather than dedicated PTP test sets, to determine the Correction Field Error of transparent clocks is presented. This test is particularly relevant for highly loaded Ethernet networks with stringent timing requirements. The methods presented can be used for development purposes by manufacturers, or by system integrators for acceptance testing. A sampled value process bus was used as the test application for the systematic approach described in this paper. The test approach was applied, components were selected, and the system performance verified to meet the application's requirements. Systematic testing, as presented in this paper, is applicable to a range of industries that use, rather than develop, PTP for time transfer.
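The error-budget bookkeeping described above amounts to simple accounting: each component in the timing chain "spends" part of the budget, and the designer checks the total against the application's requirement. The per-component figures below are placeholders, not measured values from the paper.

```python
# Application requirement: sampled value process bus needs 1 us accuracy.
BUDGET_US = 1.0

# Hypothetical "expenditure" by each component in the timing chain
# (placeholder numbers; a real design uses measured worst-case figures).
spend_us = {
    "grandmaster":       0.1,
    "transparent_clock": 0.2,   # correction-field error contribution
    "slave_clock":       0.3,
}

# Conservative check: sum the worst-case contributions and compare
# against the budget set by the application.
total = sum(spend_us.values())
print(f"spent {total:.1f} us of {BUDGET_US:.1f} us budget: "
      f"{'OK' if total <= BUDGET_US else 'OVER'}")
```

Summing worst cases is deliberately pessimistic; it guarantees the overall requirement is met even when every component is simultaneously at its worst, which is the "bottom up" discipline the paper combines with "top down" system tests.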
Abstract:
Many software applications extend their functionality by dynamically loading executable components into their allocated address space. Such components, exemplified by browser plugins and other software add-ons, not only enable reusability, but also promote programming simplicity, as they reside in the same address space as their host application, supporting easy sharing of complex data structures and pointers. However, such components are also often of unknown provenance and quality and may be riddled with accidental bugs or, in some cases, deliberately malicious code. Statistics show that such component failures account for a high percentage of software crashes and vulnerabilities. Enabling isolation of such fine-grained components is therefore necessary to increase the stability, security and resilience of computer programs. This thesis addresses this issue by showing how host applications can create isolation domains for individual components, while preserving the benefits of a single address space, via a new architecture for software isolation called LibVM. Towards this end, we define a specification which outlines the functional requirements for LibVM, identify the conditions under which these functional requirements can be met, define an abstract Application Programming Interface (API) that encompasses the general problem of isolating shared libraries, thus separating policy from mechanism, and prove its practicality with two concrete implementations based on hardware virtualization and system call interpositioning, respectively. The results demonstrate that hardware isolation minimises the difficulties encountered with software based approaches, while also reducing the size of the trusted computing base, thus increasing confidence in the solution’s correctness. 
This thesis concludes that, not only is it feasible to create such isolation domains for individual components, but that it should also be a fundamental operating system supported abstraction, which would lead to more stable and secure applications.
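The policy/mechanism separation in an abstract isolation API might look roughly as follows. This is a hypothetical sketch: the class and method names are invented, not the actual LibVM API, and the enforcement mechanism (hardware virtualization or system call interpositioning in the thesis) is abstracted away entirely.

```python
class IsolationDomain:
    """Hypothetical sandbox for one untrusted component sharing the
    host's address space. Policy (allowed entry points) is supplied by
    the host; the mechanism enforcing it is hidden behind this API."""

    def __init__(self, allowed_calls):
        self.allowed_calls = set(allowed_calls)  # policy, set by the host
        self._exports = {}

    def load(self, name, func):
        # Register a component entry point inside the domain.
        self._exports[name] = func

    def call(self, name, *args):
        # Every cross-domain call is checked against the policy first.
        if name not in self.allowed_calls:
            raise PermissionError(f"{name} blocked by isolation policy")
        return self._exports[name](*args)

dom = IsolationDomain(allowed_calls={"parse"})
dom.load("parse", lambda s: s.strip().split(","))
dom.load("exfiltrate", lambda s: s)   # loaded, but policy forbids calling it
print(dom.call("parse", " a,b "))
```

The design point is that the host states *what* the component may do (policy) while the API implementation decides *how* that is enforced, which is exactly why the thesis can swap between a virtualization-based and an interposition-based implementation behind one interface.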
Abstract:
Brain decoding of functional Magnetic Resonance Imaging data is a pattern analysis task that links brain activity patterns to the experimental conditions. Classifiers predict the neural states from the spatial and temporal patterns of brain activity extracted from multiple voxels in the functional images over a certain period of time. The prediction results offer insight into the nature of neural representations and cognitive mechanisms, and the classification accuracy determines our confidence in understanding the relationship between brain activity and stimuli. In this paper, we compared the efficacy of three machine learning algorithms — neural networks, support vector machines, and conditional random fields — for decoding visual stimuli or neural cognitive states from functional Magnetic Resonance Imaging data. Leave-one-out cross-validation was performed to quantify the generalization accuracy of each algorithm on unseen data. The results indicated that support vector machines and conditional random fields have comparable performance, and that the potential of the latter is worthy of further investigation.
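Leave-one-out cross-validation, the evaluation scheme used above, can be sketched in a few lines. A toy 1-nearest-neighbour classifier stands in for the SVM/CRF/neural-network models, and the data is a made-up two-class toy set, not fMRI voxel patterns.

```python
def loocv_accuracy(X, y):
    """Leave-one-out CV with a 1-nearest-neighbour classifier."""
    correct = 0
    for i in range(len(X)):
        # Hold out sample i; train on everything else.
        train = [(x, lab) for j, (x, lab) in enumerate(zip(X, y)) if j != i]
        # 1-NN prediction: label of the closest training sample
        # (squared Euclidean distance).
        pred = min(train, key=lambda t: sum((a - b) ** 2
                                            for a, b in zip(t[0], X[i])))[1]
        correct += pred == y[i]
    return correct / len(X)

# Toy two-class data: two tight clusters, one per "condition".
X = [(0.0, 0.1), (0.1, 0.0), (1.0, 1.1), (1.1, 0.9)]
y = ["rest", "rest", "stimulus", "stimulus"]
print(loocv_accuracy(X, y))
```

Because every sample is tested exactly once on a model that never saw it, LOOCV gives a near-unbiased accuracy estimate from small datasets, which is why it is common in fMRI studies where trials are scarce.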