535 results for Computer Architecture


Relevance: 20.00%

Abstract:

With the wide diffusion of Business Process Management (BPM) automation suites, the possibility of managing process-related risks arises. This paper introduces an innovative framework for process-related risk management and describes a working implementation realized by extending the YAWL system. The framework covers three aspects of risk management: risk monitoring, risk prevention, and risk mitigation. Risk monitoring functionality is provided using a sensor-based architecture, where sensors are defined at design time and used at run-time for monitoring purposes. Risk prevention functionality is provided in the form of suggestions about what should be executed, by whom, and how, through the use of decision trees. Finally, risk mitigation functionality is provided as a sequence of remedial actions (e.g., reallocating, skipping, or rolling back a work item) that should be executed to restore the process to a normal situation.
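The sensor-based monitoring idea can be sketched as follows. This is a minimal illustration of the general design-time/run-time split, not the YAWL extension's actual API; the `RiskSensor` class, the sensor names, and the thresholds are all hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class RiskSensor:
    """A sensor defined at design time: a name, a condition over
    process variables, and the risk message raised when it fires."""
    name: str
    condition: Callable[[Dict], bool]
    message: str

def monitor(sensors: List[RiskSensor], process_state: Dict) -> List[str]:
    """Run-time monitoring: evaluate every sensor against the current
    process state and collect the risks detected."""
    return [s.message for s in sensors if s.condition(process_state)]

# Two hypothetical sensors for an order-fulfilment process.
sensors = [
    RiskSensor("overdue", lambda st: st["elapsed_h"] > st["deadline_h"],
               "work item is overdue"),
    RiskSensor("overload", lambda st: st["queue_len"] > 10,
               "resource is overloaded"),
]

risks = monitor(sensors, {"elapsed_h": 30, "deadline_h": 24, "queue_len": 3})
print(risks)  # ['work item is overdue']
```

In the paper's framework the detected risks would then feed the prevention (decision trees) and mitigation (remedial actions) components.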

Relevance: 20.00%

Abstract:

One of the major challenges in achieving long-term robot autonomy is the need for a SLAM algorithm that can perform SLAM over the operational lifetime of the robot, preferably without human intervention or supervision. In this paper we present insights gained from a two-week-long persistent SLAM experiment, in which a Pioneer robot performed mock deliveries in a busy office environment. We used the biologically inspired visual SLAM system, RatSLAM, combined with a hybrid control architecture that selected between exploring the environment, performing deliveries, and recharging. The robot performed more than a thousand successful deliveries with only one failure (from which it recovered), travelled more than 40 km over 37 hours of active operation, and recharged autonomously 23 times. We discuss several issues arising from the success (and limitations) of this experiment and two subsequent avenues of work.
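The hybrid controller's selection between the three behaviours can be sketched as a simple priority arbiter. The thresholds and function names below are illustrative assumptions, not values from the actual delivery system:

```python
def select_behaviour(battery_pct: float, pending_deliveries: int,
                     map_coverage: float) -> str:
    """Priority-based arbitration between the three behaviours described
    in the paper: recharge first when the battery is low, otherwise
    deliver if tasks are queued, otherwise explore to extend the map."""
    if battery_pct < 20.0:        # hypothetical low-battery threshold
        return "recharge"
    if pending_deliveries > 0:
        return "deliver"
    if map_coverage < 1.0:
        return "explore"
    return "idle"

print(select_behaviour(15.0, 3, 0.8))  # recharge takes priority
print(select_behaviour(80.0, 3, 0.8))  # deliver
print(select_behaviour(80.0, 0, 0.8))  # explore
```

Putting recharging at the top of the priority order is what lets such a robot close the loop on autonomy: every other behaviour is preempted before the battery becomes unrecoverable.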

Relevance: 20.00%

Abstract:

Existing secure software development principles tend to focus on coding vulnerabilities, such as buffer or integer overflows, that apply to individual program statements, or issues associated with the run-time environment, such as component isolation. Here we instead consider software security from the perspective of potential information flow through a program’s object-oriented module structure. In particular, we define a set of quantifiable "security metrics" which allow programmers to quickly and easily assess the overall security of a given source code program or object-oriented design. Although measuring quality attributes of object-oriented programs for properties such as maintainability and performance has been well-covered in the literature, metrics which measure the quality of information security have received little attention. Moreover, existing security-relevant metrics assess a system either at a very high level, i.e., the whole system, or at a fine level of granularity, i.e., with respect to individual statements. These approaches make it hard and expensive to recognise a secure system from an early stage of development. Instead, our security metrics are based on well-established compositional properties of object-oriented programs (i.e., data encapsulation, cohesion, coupling, composition, extensibility, inheritance and design size), combined with data flow analysis principles that trace potential information flow between high- and low-security system variables. We first define a set of metrics to assess the security quality of a given object-oriented system based on its design artifacts, allowing defects to be detected at an early stage of development. We then extend these metrics to produce a second set applicable to object-oriented program source code.
The resulting metrics make it easy to compare the relative security of functionally equivalent system designs or source code programs so that, for instance, the security of two different revisions of the same system can be compared directly. This capability is further used to study the impact of specific refactoring rules on system security more generally, at both the design and code levels. By measuring the relative security of various programs refactored using different rules, we thus provide guidelines for the safe application of refactoring steps to security-critical programs. Finally, to make it easy and efficient to measure a system design or program’s security, we have also developed a stand-alone software tool which automatically analyses and measures the security of UML designs and Java program code. The tool’s capabilities are demonstrated by applying it to a number of security-critical system designs and Java programs. Notably, the validity of the metrics is demonstrated empirically through measurements that confirm our expectation that program security typically improves as bugs are fixed, but worsens as new functionality is added.
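To give a flavour of what a design-level, encapsulation-based metric looks like, the sketch below computes one hypothetical measure: the proportion of a class's high-security ("classified") attributes that are publicly accessible. It illustrates the general approach of scoring information exposure from design artifacts; it is not one of the paper's actual metric definitions.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Attribute:
    """A class attribute as it appears in a design artifact."""
    name: str
    classified: bool   # carries high-security data
    public: bool       # accessible from outside the class

def classified_accessibility(attrs: List[Attribute]) -> float:
    """Fraction of classified attributes that are exposed publicly;
    lower is better (0.0 means every secret is encapsulated)."""
    secret = [a for a in attrs if a.classified]
    if not secret:
        return 0.0
    return sum(a.public for a in secret) / len(secret)

design = [
    Attribute("balance", classified=True, public=False),
    Attribute("pin", classified=True, public=True),    # an exposure defect
    Attribute("label", classified=False, public=True),
]
print(classified_accessibility(design))  # 0.5
```

Because such a score needs only the design artifacts, it can be computed before any code exists, which is exactly the early-detection property the abstract emphasises.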

Relevance: 20.00%

Abstract:

This project investigates machine listening and improvisation in interactive music systems with the goal of improvising musically appropriate accompaniment to an audio stream in real-time. The input audio may be from a live musical ensemble, or playback of a recording for use by a DJ. I present a collection of robust techniques for machine listening in the context of Western popular dance music genres, and strategies of improvisation to allow for intuitive and musically salient interaction in live performance. The findings are embodied in a computational agent – the Jambot – capable of real-time musical improvisation in an ensemble setting. Conceptually the agent’s functionality is split into three domains: reception, analysis and generation. The project has resulted in novel techniques for addressing a range of issues in each of these domains. In the reception domain I present a novel suite of onset detection algorithms for real-time detection and classification of percussive onsets. This suite achieves reasonable discrimination between the kick, snare and hi-hat attacks of a standard drum-kit, with sufficiently low-latency to allow perceptually simultaneous triggering of accompaniment notes. The onset detection algorithms are designed to operate in the context of complex polyphonic audio. In the analysis domain I present novel beat-tracking and metre-induction algorithms that operate in real-time and are responsive to change in a live setting. I also present a novel analytic model of rhythm, based on musically salient features. This model informs the generation process, affording intuitive parametric control and allowing for the creation of a broad range of interesting rhythms. In the generation domain I present a novel improvisatory architecture drawing on theories of music perception, which provides a mechanism for the real-time generation of complementary accompaniment in an ensemble setting. 
All of these innovations have been combined into a computational agent, the Jambot, which is capable of producing improvised percussive musical accompaniment to an audio stream in real-time. I situate the architectural philosophy of the Jambot within contemporary debate regarding the nature of cognition and artificial intelligence, and argue for an approach to algorithmic improvisation that privileges the minimisation of cognitive dissonance in human-computer interaction. This thesis contains extensive written discussions of the Jambot and its component algorithms, along with some comparative analyses of aspects of its operation and aesthetic evaluations of its output. The accompanying CD contains the Jambot software, along with video documentation of experiments and performances conducted during the project.
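A minimal energy-difference detector illustrates the reception-domain idea in its crudest form. This is a stand-in sketch only: the Jambot's actual suite performs low-latency detection and kick/snare/hi-hat classification in complex polyphonic audio, which this toy does not attempt.

```python
def detect_onsets(signal, frame_len=64, threshold=0.1):
    """Flag frames whose short-time energy rises sharply above the
    previous frame's: a crude stand-in for percussive onset detection."""
    energies = [
        sum(x * x for x in signal[i:i + frame_len]) / frame_len
        for i in range(0, len(signal) - frame_len + 1, frame_len)
    ]
    onsets = []
    for k in range(1, len(energies)):
        # half-wave rectified energy difference: only rises count
        if energies[k] - energies[k - 1] > threshold:
            onsets.append(k)
    return onsets

# Synthetic signal: silence, then a 64-sample burst (a mock drum hit), then silence.
signal = [0.0] * 128 + [0.9, -0.8, 0.7, -0.6] * 16 + [0.0] * 128
print(detect_onsets(signal))  # [2] -- the burst begins in frame 2
```

Real percussive onset detectors typically work on spectral flux rather than raw energy, precisely so that onsets remain visible when other instruments are sounding.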

Relevance: 20.00%

Abstract:

At St Thomas' Hospital, we have developed a computer program on a Titan graphics supercomputer to plan the stereotactic implantation of iodine-125 seeds for the palliative treatment of recurrent malignant gliomas. Use of the Gill-Thomas-Cosman relocatable frame allows planning and surgery to be carried out at different hospitals on different days. Stereotactic computed tomography (CT) and positron emission tomography (PET) scans are performed and the images transferred to the planning computer. The head, tumour and frame fiducials are outlined on the relevant images, and a three-dimensional model generated. Structures which could interfere with the surgery or radiotherapy, such as major vessels, shunt tubing etc., can also be outlined and included in the display. Catheter target and entry points are set using a three-dimensional cursor controlled by a set of dials attached to the computer. The program calculates and displays the radiation dose distribution within the target volume for various catheter and seed arrangements. The CT co-ordinates of the fiducial rods are used to convert catheter co-ordinates from CT space to frame space and to calculate the catheter insertion angles and depths. The surgically implanted catheters are after-loaded the next day and the seeds left in place for between 4 and 6 days, giving a nominal dose of 50 Gy to the edge of the target volume. So far, 25 patients have been treated.

Relevance: 20.00%

Abstract:

The "vernacular" housing tradition of southeast Queensland is easily identifiable. Its history is more complex. This study seeks to challenge two popular conceptions of the "Queenslander" history by showing that they actually provide contradictory explanations. The aim is to produce a more complex account of local architecture and its historical explanation so that both its past and its present practices can be better understood as a distinctly subtropical idiom. This discussion shows that such practices may respond to common concerns but are also ever-changing.

Relevance: 20.00%

Abstract:

This thesis investigates the radically uncertain formal, business, and industrial environment of current entertainment creators. It researches how a novel communication technology, the Internet, leads to novel entertainment forms, how these lead to novel kinds of businesses that lead to novel industries; and in what way established entertainment forms, businesses, and industries are part of that process. This last aspect is addressed by focusing on one exemplary established form: movies. Using a transdisciplinary approach and a combination of historical analysis, industry interviews, and an innovative mode of ‘immersive’ textual analysis, a coherent and comprehensive conceptual framework for the creation of and research into a specific emerging entertainment form is proposed. That form, products based on it, and the conceptual framework describing it are all referred to as Entertainment Architecture (‘entarch,’ for short). The thesis characterises this novel form as Internet-native transmedia entertainment, meaning it fully utilises the unique communicative characteristics of the Internet, and is spread across media. The thesis isolates four constitutive elements within Entertainment Architecture: story, play, ‘dance,’ and ‘glue.’ That is, entarch tells a story; offers playful interaction; invites social interaction between producer and consumer, and amongst consumers (‘dance’); and all components of it can be spread across many media, but are so well interconnected and mutually dependent that they are perceived as one product instead of many (‘glue’). This sets entarch apart from current media franchises like Star Wars or Halo, which are perceived as many products spread across many media. Entarch thus embraces the communicative behaviour of Internet-native consumers instead of forcing them to desist from it, it harnesses the strengths of various media while avoiding some of their weaknesses, and it can sustain viable businesses.
The entarch framework is an innovative contribution to scholarship that allows researchers to investigate this emerging entertainment form in a structured way. The thesis demonstrates this by using it to survey business models appropriate to the entarch environment. The framework can also be used by entertainment creators — exemplified in the thesis by moviemakers — to delimit the room for manoeuvre available to them in a changing environment.

Relevance: 20.00%

Abstract:

Fusion techniques have received considerable attention for achieving lower error rates with biometrics. A fused classifier architecture based on sequential integration of multi-instance and multi-sample fusion schemes allows controlled trade-off between false alarms and false rejects. Expressions for each type of error for the fused system have previously been derived for the case of statistically independent classifier decisions. It is shown in this paper that the performance of this architecture can be improved by modelling the correlation between classifier decisions. Correlation modelling also enables better tuning of fusion model parameters, ‘N’, the number of classifiers and ‘M’, the number of attempts/samples, and facilitates the determination of error bounds for false rejects and false accepts for each specific user. Error trade-off performance of the architecture is evaluated using HMM based speaker verification on utterances of individual digits. Results show that performance is improved for the case of favourably correlated decisions. The architecture investigated here is directly applicable to speaker verification from spoken digit strings such as credit card numbers in telephone or voice over internet protocol based applications. It is also applicable to other biometric modalities such as fingerprints and handwriting samples.
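For statistically independent decisions, the previously derived error expressions for an N-instance, M-attempt architecture reduce to simple products. The sketch below assumes one plausible reading of that architecture (AND-fusion across N instances, where each instance accepts if any of its M sample attempts accepts, with identical per-attempt error rates); the paper's contribution is precisely to go beyond this independence baseline by modelling correlation.

```python
def fused_error_rates(far: float, frr: float, n: int, m: int):
    """Independent-decision error rates for an architecture that
    AND-fuses n classifier instances, each allowed up to m sample
    attempts (an instance accepts if any attempt accepts)."""
    far_inst = 1.0 - (1.0 - far) ** m        # impostor passes one instance
    frr_inst = frr ** m                      # client rejected on all m tries
    fused_far = far_inst ** n                # impostor must fool every instance
    fused_frr = 1.0 - (1.0 - frr_inst) ** n  # any instance can reject client
    return fused_far, fused_frr

far, frr = fused_error_rates(far=0.05, frr=0.05, n=3, m=3)
print(f"FAR={far:.6f}  FRR={frr:.6f}")  # both well below the 5% base rates
```

The expressions make the N/M trade-off visible: raising m suppresses false rejects but inflates each instance's false-accept rate, which the AND-fusion over n instances then pulls back down.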

Relevance: 20.00%

Abstract:

Enterprise architecture management (EAM) has become an intensively discussed approach to manage enterprise transformations. While many organizations employ EAM, notable uncertainty about the value of EAM remains. In this paper, we propose a model to measure the realization of benefits from EAM. We identify EAM success factors and EAM benefits through a comprehensive literature review and eleven explorative expert interviews. Based on our findings, we integrate the EAM success factors and benefits with the established DeLone & McLean IS success model, resulting in a model that explains the realization of EAM benefits. This model aids organizations as a benchmark and framework for identifying and assessing the setup of their EAM initiatives and whether and how EAM benefits are materialized. We also see our model as a first step to gain insights into and start a discussion on the theory of EAM benefit realization.

Relevance: 20.00%

Abstract:

Rapid prototyping environments can speed up the research of visual control algorithms. We have designed and implemented a software framework for fast prototyping of visual control algorithms for Micro Aerial Vehicles (MAV). We have applied a combination of a proxy-based network communication architecture and a custom Application Programming Interface. This allows multiple experimental configurations, like drone swarms or distributed processing of a drone's video stream. Currently, the framework supports a low-cost MAV: the Parrot AR.Drone. Real tests have been performed on this platform, and the results show that the extra communication delay introduced by the framework is comparatively low, while it adds new functionalities and flexibility to the selected drone. This implementation is open-source and can be downloaded from www.vision4uav.com/?q=VC4MAV-FW

Relevance: 20.00%

Abstract:

Fusion techniques have received considerable attention for achieving performance improvement with biometrics. While a multi-sample fusion architecture reduces false rejects, it also increases false accepts. This impact on performance also depends on the nature of subsequent attempts, i.e., random or adaptive. Expressions for error rates are presented and experimentally evaluated in this work by considering the multi-sample fusion architecture for text-dependent speaker verification using HMM based digit dependent speaker models. Analysis incorporating correlation modeling demonstrates that the use of adaptive samples improves overall fusion performance compared to randomly repeated samples. For a text dependent speaker verification system using digit strings, sequential decision fusion of seven instances with three random samples is shown to reduce the overall error of the verification system by 26%, which can be further reduced by 6% for adaptive samples. This analysis, novel in its treatment of random and adaptive multiple presentations within a sequential fused-decision architecture, is also applicable to other biometric modalities such as fingerprints and handwriting samples.

Relevance: 20.00%

Abstract:

Statistical dependence between classifier decisions is often shown to improve performance over statistically independent decisions. Though the solution for favourable dependence between two classifier decisions has been derived, the theoretical analysis for the general case of 'n' client and impostor decision fusion has not been presented before. This paper presents the expressions developed for favourable dependence of multi-instance and multi-sample fusion schemes that employ 'AND' and 'OR' rules. The expressions are experimentally evaluated by considering the proposed architecture for text-dependent speaker verification using HMM based digit dependent speaker models. The improvement in fusion performance is found to be higher when digit combinations with favourable client and impostor decisions are used for speaker verification. The total error rate of 20% for fusion of independent decisions is reduced to 2.1% for fusion of decisions that are favourable for both clients and impostors. The expressions developed here are also applicable to other biometric modalities, such as fingerprints and handwriting samples, for reliable identity verification.
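Under statistical independence, the 'AND' and 'OR' rule error rates for n decisions are straightforward products, and they make the client/impostor trade-off visible. The sketch below shows only this independence baseline; the paper's contribution is the favourably-dependent case, which these formulas deliberately do not cover.

```python
def and_rule(far: float, frr: float, n: int):
    """'AND' fusion of n independent decisions: accept only if all
    accept. Drives false accepts down, false rejects up."""
    return far ** n, 1.0 - (1.0 - frr) ** n

def or_rule(far: float, frr: float, n: int):
    """'OR' fusion of n independent decisions: accept if any accepts.
    The mirror-image trade-off."""
    return 1.0 - (1.0 - far) ** n, frr ** n

base_far, base_frr = 0.10, 0.10
for n in (1, 2, 3):
    a_far, a_frr = and_rule(base_far, base_frr, n)
    o_far, o_frr = or_rule(base_far, base_frr, n)
    print(f"n={n}: AND FAR={a_far:.4f} FRR={a_frr:.4f} | "
          f"OR FAR={o_far:.4f} FRR={o_frr:.4f}")
```

The symmetry is exact for independent decisions: whatever 'AND' gains on false accepts it loses on false rejects, and vice versa for 'OR'. Favourable dependence, as analysed in the paper, is what breaks this symmetry and lets both error types fall together.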

Relevance: 20.00%

Abstract:

Enterprise architecture (EA) management has become an intensively discussed approach to manage enterprise transformations. Despite the popularity and potential of EA, both researchers and practitioners lament a lack of knowledge about the realization of benefits from EA. To determine the benefits from EA, we explore the various dimensions of EA benefit realization and report on the development of a validated and robust measurement instrument. In this paper, we test the reliability and construct validity of the EA benefit realization model (EABRM), which we have designed based on the DeLone & McLean IS success model and findings from exploratory interviews. A confirmatory factor analysis confirms the existence of an impact of five distinct and individually important dimensions on the benefits derived from EA: EA artefact quality, EA infrastructure quality, EA service quality, EA culture, and EA use. The analysis presented in this paper shows that the EA benefit realization model is an instrument that demonstrates strong reliability and validity.

Relevance: 20.00%

Abstract:

The integration of unmanned aircraft into civil airspace is a complex issue. One key question is whether unmanned aircraft can operate just as safely as their manned counterparts. The absence of a human pilot in unmanned aircraft points to an obvious deficiency: the lack of an inherent see-and-avoid capability. To date, regulators have mandated that an “equivalent level of safety” be demonstrated before UAVs are permitted to routinely operate in civil airspace. This chapter proposes techniques, methods, and hardware integrations that describe a “sense-and-avoid” system designed to address the lack of a see-and-avoid capability in UAVs.

Relevance: 20.00%

Abstract:

Critical-sized bone defect regeneration is a remaining clinical concern. Numerous scaffold-based strategies are currently being investigated to enable in vivo bone defect healing. However, a deeper understanding of how a scaffold influences the tissue formation process, and how this compares to endogenous bone formation or to regular fracture healing, is missing. It is hypothesized that the porous scaffold architecture can serve as a guiding substrate to enable the formation of a structured fibrous network as a prerequisite for later bone formation. An ovine, tibial, 30-mm critical-sized defect is used as a model system to better understand the effect of the scaffold architecture on cell organization, fibrous tissue, and mineralized tissue formation mechanisms in vivo. Tissue regeneration patterns within two geometrically distinct macroscopic regions of a specific scaffold design, the scaffold wall and the endosteal cavity, are compared with tissue formation in an empty defect (negative control) and with cortical bone (positive control). Histology, backscattered electron imaging, scanning small-angle X-ray scattering, and nanoindentation are used to assess the morphology of fibrous and mineralized tissue, to measure the average mineral particle thickness and the degree of alignment, and to map the local elastic indentation modulus. The scaffold proves to function as a guiding substrate to the tissue formation process. It enables the arrangement of a structured fibrous tissue across the entire defect, which acts as a secondary supporting network for cells. Mineralization can then initiate along the fibrous network, resulting in bone ingrowth into a critical-sized defect, although not in complete bridging of the defect. The fibrous network morphology, which in turn is guided by the scaffold architecture, influences the microstructure of the newly formed bone.
These results allow a deeper understanding of the mode of mineral tissue formation and the way this is influenced by the scaffold architecture. Copyright © 2012 American Society for Bone and Mineral Research.