910 results for Computational architectures
Abstract:
Computational journalism involves the application of software and technologies to the activities of journalism, and it draws from the fields of computer science, the social sciences, and media and communications. New technologies may enhance the traditional aims of journalism, or may initiate greater interaction between journalists and information and communication technology (ICT) specialists. The enhanced use of computing in news production is related in particular to three factors: larger government data sets becoming more widely available; the increasingly sophisticated and ubiquitous nature of software; and the developing digital economy. Drawing upon international examples, this paper argues that computational journalism techniques may provide new foundations for original investigative journalism and increase the scope for new forms of interaction with readers. Computational journalism provides a major opportunity to enhance the delivery of original investigative journalism, and to attract and retain readers online.
Abstract:
This chapter focuses on the interplay and roles of delays and intrinsic noise effects within cellular pathways and regulatory networks. We address these aspects by focusing on genetic regulatory networks that share a common network motif, namely the negative feedback loop, which leads to oscillatory gene expression and protein levels. In this context, we discuss computational simulation algorithms for addressing the interplay of delays and noise within signaling pathways, based on biological data. We also address implementation issues associated with efficiency and robustness. In a molecular biology setting we present two case studies of temporal models: the Hes1 gene (Monk, 2003; Hirata et al., 2002), known to act as a molecular clock, and the Her1/Her7 regulatory system controlling the periodic somite segmentation in vertebrate embryos (Giudicelli and Lewis, 2004; Horikawa et al., 2006).
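A minimal sketch of the kind of delay stochastic simulation the abstract alludes to is given below: a single self-repressing gene in which each production event completes only after a fixed delay, simulated with a delayed variant of Gillespie's algorithm. The species, rate constants and delay value are hypothetical placeholders, not the Hes1 or Her1/Her7 parameters discussed in the chapter.

```python
import heapq
import random

# Hypothetical parameters (placeholders, not fitted to Hes1 or Her1/Her7 data).
K_PROD, K_HILL, HILL_N = 1.0, 20.0, 4   # max production rate, repression threshold, Hill coefficient
K_DEG = 0.03                            # first-order protein degradation rate
TAU = 20.0                              # fixed transcription/translation delay
T_END = 2000.0

def delayed_negative_feedback(seed=1):
    """Stochastic simulation of a self-repressing gene with a fixed production delay."""
    rng = random.Random(seed)
    t, protein = 0.0, 0
    pending = []                         # min-heap of completion times for delayed productions
    trajectory = [(t, protein)]
    while t < T_END:
        a_prod = K_PROD / (1.0 + (protein / K_HILL) ** HILL_N)   # Hill-repressed production
        a_deg = K_DEG * protein
        a_total = a_prod + a_deg
        dt = rng.expovariate(a_total) if a_total > 0 else float("inf")
        if pending and pending[0] <= t + dt:
            # A previously initiated production completes before the next reaction fires.
            t = heapq.heappop(pending)
            protein += 1
        else:
            t += dt
            if t >= T_END:
                break
            if rng.random() < a_prod / a_total:
                heapq.heappush(pending, t + TAU)    # the protein appears only after the delay
            else:
                protein -= 1                        # immediate degradation
        trajectory.append((t, protein))
    return trajectory

traj = delayed_negative_feedback()
print("simulated %d events; final protein count %d" % (len(traj), traj[-1][1]))
```

For suitable combinations of delay, repression strength and degradation rate, trajectories of this kind show the noisy oscillations that motivate the chapter's case studies.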
Abstract:
Abstract—Computational Intelligence Systems (CIS) are a class of advanced software. CIS occupy an important position in solving single-objective, reverse/inverse and multi-objective design problems in engineering. This paper hybridises a CIS for optimisation with the concept of Nash equilibrium, used as an optimisation pre-conditioner to accelerate the optimisation process. The hybridised CIS (Hybrid Intelligence System), coupled to a Finite Element Analysis (FEA) tool and a Computer Aided Design (CAD) system, GiD, is applied to solve an inverse engineering design problem: the reconstruction of High Lift Systems (HLS). Numerical results obtained by the hybridised CIS are compared to those obtained by the original CIS. The benefits of using the concept of Nash equilibrium are clearly demonstrated in terms of solution accuracy and optimisation efficiency.
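The abstract gives no algorithmic detail, but the general idea of a Nash-equilibrium pre-conditioner can be sketched: the design variables are split between two "players" that alternately refine their own block while the other's is held fixed, and the resulting equilibrium point is handed to a global optimiser as a warm start. The toy inverse problem, the variable split and all numerical settings below are illustrative assumptions only.

```python
import numpy as np

# Toy inverse problem: recover a design vector x* from a target response y = A @ x*.
rng = np.random.default_rng(0)
A = rng.normal(size=(12, 8))
x_true = rng.normal(size=8)
y_target = A @ x_true

def objective(x):
    r = A @ x - y_target
    return float(r @ r)

def player_step(x, idx, lr=0.01, iters=50):
    """One player refines only its own block of variables (indices `idx`)."""
    x = x.copy()
    for _ in range(iters):
        grad = 2.0 * A.T @ (A @ x - y_target)
        x[idx] -= lr * grad[idx]                 # the other player's variables stay fixed
    return x

def nash_preconditioner(x0, rounds=20):
    """Alternate best-responses of two players until an approximate equilibrium."""
    p1, p2 = np.arange(0, 4), np.arange(4, 8)    # illustrative split of the design variables
    x = x0.copy()
    for _ in range(rounds):
        x = player_step(x, p1)
        x = player_step(x, p2)
    return x

x0 = np.zeros(8)
x_eq = nash_preconditioner(x0)
print("objective at start      :", objective(x0))
print("objective at equilibrium:", objective(x_eq))
# A global optimiser (the 'CIS' of the abstract) would then be started from x_eq.
```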
An experimental and computational investigation of performance of Green Gully for reusing stormwater
Abstract:
A new stormwater quality improvement device (SQID) called ‘Green Gully’ has been designed and developed in this study with the aim of reusing stormwater for irrigating plants and trees. The main purpose of the Green Gully is to collect road runoff/stormwater, make it suitable for irrigation and provide an automated network system for watering roadside plants and irrigation areas. This paper presents the design and development of the Green Gully along with experimental and computational investigations of its performance. Performance (in the form of efficiency, i.e. the percentage of water flow through the gully grate) was first determined experimentally using a gully model in the laboratory; a three-dimensional numerical model was then developed and simulated to predict the efficiency of the Green Gully as a function of flow rate. The Computational Fluid Dynamics (CFD) code FLUENT was used for the simulation, with GAMBIT used for geometry creation and mesh generation. Experimental and simulation results are discussed and compared in this paper, and the predicted efficiency was compared with the laboratory-measured efficiency. It was found that the simulated results are in good agreement with the experimental results.
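The efficiency reported in the abstract can be written compactly as the captured fraction of the approach flow; the symbols below are generic placeholders rather than the paper's notation.

```latex
\eta(Q) \;=\; \frac{Q_{\text{grate}}(Q)}{Q_{\text{approach}}(Q)} \times 100\,\%
```

Here \(Q_{\text{approach}}\) is the total surface flow approaching the gully at a given flow rate and \(Q_{\text{grate}}\) is the portion captured through the grate; both the laboratory model and the FLUENT simulations estimate this ratio across a range of flow rates.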
Abstract:
In recent years, enterprise architecture (EA) has captured growing attention as a means to systematically consolidate and interrelate diverse IT artefacts in order to provide holistic decision support. Since the emergence of Service-Oriented Architecture (SOA), many attempts have been made to incorporate SOA artefacts in existing EA frameworks. Yet the approaches taken to achieve this goal differ substantially for the most commonly used EA frameworks to date. This paper investigates and compares five widely used EA frameworks in the way they embrace the SOA paradigm. It identifies what SOA artefacts are considered to be in the respective EA frameworks and their relative position in the overall structure. The results show that services and related artefacts are far from being well-integrated constructs in current EA frameworks. The comparison presented in this paper will support practitioners in identifying an EA framework that provides SOA support in a way that matches their requirements and will hopefully inspire the academic EA and SOA communities to work on a closer integration of these architectures.
Abstract:
Virtual environments can provide, through digital games and online social interfaces, extremely exciting forms of interactive entertainment. Because of their capability of displaying and manipulating information in natural and intuitive ways, such environments have found extensive applications in decision support, education and training in the health and science domains, amongst others. Currently, the burden of validating both the interactive functionality and the visual consistency of virtual environment content falls entirely on developers and play-testers. While considerable research has been conducted in assisting the design of virtual world content and mechanics, to date, only limited contributions have been made regarding the automatic testing of the underpinning graphics software and hardware. The aim of this thesis is to determine whether the correctness of the images generated by a virtual environment can be quantitatively defined, and automatically measured, in order to facilitate the validation of the content. In an attempt to provide an environment-independent definition of visual consistency, a number of classification approaches were developed. First, a novel model-based object description was proposed in order to enable reasoning about the color and geometry change of virtual entities during a play-session. From such an analysis, two view-based connectionist approaches were developed to map from geometry and color spaces to a single, environment-independent, geometric transformation space; we used such a mapping to predict the correct visualization of the scene. Finally, an appearance-based aliasing detector was developed to show how incorrectness, too, can be quantified for debugging purposes. Since computer games heavily rely on the use of highly complex and interactive virtual worlds, they provide an excellent test bed against which to develop, calibrate and validate our techniques. Experiments were conducted on a game engine and other virtual world prototypes to determine the applicability and effectiveness of our algorithms. The results show that quantifying visual correctness in virtual scenes is a feasible enterprise, and that effective automatic bug detection can be performed through the techniques we have developed. We expect these techniques to find application in large 3D games and virtual world studios that require a scalable solution to testing their virtual world software and digital content.
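The thesis defines its own model- and view-based classifiers, but the flavour of an appearance-based consistency check can be conveyed with a minimal sketch: render the same scene under a trusted and a candidate configuration and flag pixels whose colour difference exceeds a tolerance. The image sizes, tolerance and function name below are assumptions for illustration, not the detectors developed in the thesis.

```python
import numpy as np

def visual_inconsistency(reference, test, tol=8.0):
    """Fraction of pixels whose colour difference exceeds `tol` (0-255 scale).

    `reference` and `test` are HxWx3 uint8 frames of the same scene rendered by,
    e.g., a trusted and a candidate graphics configuration.  This is a generic
    pixel-difference check, not the model- or view-based classifiers of the thesis.
    """
    ref = reference.astype(np.float64)
    tst = test.astype(np.float64)
    per_pixel = np.linalg.norm(ref - tst, axis=-1)   # Euclidean distance in RGB per pixel
    flagged = per_pixel > tol
    return float(flagged.mean()), flagged            # score in [0, 1] plus a defect mask

# Hypothetical usage with two synthetic frames:
h, w = 64, 64
frame_a = np.zeros((h, w, 3), dtype=np.uint8)
frame_b = frame_a.copy()
frame_b[10:20, 10:20] = 200                          # simulate a localised rendering defect
score, mask = visual_inconsistency(frame_a, frame_b)
print(f"{score:.2%} of pixels differ beyond tolerance")
```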
Abstract:
We report on analysis of discussions in an online community of people with chronic illness using socio-cognitively motivated, automatically produced semantic spaces. The analysis aims to further the emerging theory of "transition" (how people can learn to incorporate the consequences of illness into their lives). An automatically derived representation of sense of self for individuals is created in the semantic space by the analysis of the email utterances of the community members. The movement over time of the sense of self is visualised, via projection, with respect to axes of "ordinariness" and "extra-ordinariness". Qualitative evaluation shows that the visualisation is paralleled by the transitions of people during the course of their illness. The research aims to progress tools for analysis of textual data to promote greater use of tacit knowledge as found in online virtual communities. We hope it also encourages further interest in representation of sense-of-self.
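One way to picture the projection described above is sketched below: an author's utterance vectors (one per time window) are projected onto an axis spanned by two anchor vectors standing in for "ordinariness" and "extra-ordinariness". The dimensionality, the anchors and the data are placeholders; constructing the underlying semantic space is outside the scope of the sketch.

```python
import numpy as np

def axis_projection(author_vectors, ordinary_vec, extraordinary_vec):
    """Project an author's utterance vectors (one per time window) onto the
    'ordinariness' axis defined by two anchor vectors in a semantic space.

    The vectors are assumed to come from a semantic-space model built from the
    community's email corpus; how that space is built is not shown here.
    """
    axis = extraordinary_vec - ordinary_vec
    axis = axis / np.linalg.norm(axis)
    centred = author_vectors - ordinary_vec          # position relative to the 'ordinary' anchor
    return centred @ axis                            # one axis coordinate per time window

# Hypothetical 5-dimensional space, 4 monthly windows for one community member:
rng = np.random.default_rng(42)
ordinary = rng.normal(size=5)
extraordinary = rng.normal(size=5)
member_over_time = rng.normal(size=(4, 5))
print(axis_projection(member_over_time, ordinary, extraordinary))
```

Plotting these coordinates against time gives the kind of trajectory that the paper visualises and compares against the qualitative transition accounts.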
Abstract:
Cyclic nitroxide radicals represent promising alternatives to the iodine-based redox mediator commonly used in dye-sensitized solar cells (DSSCs). To date, DSSCs with nitroxide-based redox mediators have achieved energy conversion efficiencies of just over 5 %, but efficiencies of over 15 % might be achievable given an appropriate mediator. The efficacy of the mediator depends upon two main factors: it must reversibly undergo one-electron oxidation and it must possess an oxidation potential in the range of 0.600-0.850 V (vs. a standard hydrogen electrode (SHE) in acetonitrile at 25 °C). Herein, we have examined the effect that structural modifications have on the value of the oxidation potential of cyclic nitroxides, as well as on the reversibility of the oxidation process. These included alterations to the N-containing skeleton (pyrrolidine, piperidine, isoindoline, azaphenalene, etc.), as well as the introduction of different substituents (alkyl-, methoxy-, amino-, carboxy-, etc.) to the ring. Standard oxidation potentials were calculated using high-level ab initio methodology that was demonstrated to be very accurate (with a mean absolute deviation from experimental values of only 16 mV). An optimal value of 1.45 for the electrostatic scaling factor for UAKS radii in acetonitrile solution was obtained. Established trends in the values of oxidation potentials were used to guide the molecular design of stable nitroxides with the desired E°ox, and a number of compounds were suggested for potential use as enhanced redox mediators in DSSCs.
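For orientation, computed standard oxidation potentials of this kind are usually obtained from the solution-phase free-energy change of the one-electron oxidation via the standard thermochemical relation; the expression below is the generic textbook form, not a restatement of the paper's exact protocol.

```latex
E^{\circ}_{\mathrm{ox}} \;=\; \frac{\Delta G^{\circ}_{\mathrm{ox,\,soln}}}{nF} \;-\; E^{\circ}_{\mathrm{abs}}(\mathrm{SHE})
```

Here \(\Delta G^{\circ}_{\mathrm{ox,\,soln}}\) is the free energy of one-electron oxidation of the nitroxide in acetonitrile, \(n = 1\), \(F\) is the Faraday constant, and \(E^{\circ}_{\mathrm{abs}}(\mathrm{SHE})\) is the absolute potential adopted for the standard hydrogen electrode reference; the numerical value of that reference is a convention choice and is not specified here.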
Abstract:
Purpose: The management of unruptured aneurysms remains controversial, as treatment carries potentially significant risk to the currently well patient. The decision to treat is based upon aneurysm location, size and abnormal morphology (e.g. bleb formation). A method to predict bleb formation would thus help stratify patient treatment. Our study aims to investigate possible associations between intra-aneurysmal flow dynamics and bleb formation within intracranial aneurysms. Competing theories on aetiology appear in the literature; our purpose is to further clarify this issue. Methodology: We recruited data from 3D rotational angiograms (3DRA) of 30 patients with cerebral aneurysms and bleb formation. Models representing the aneurysms before bleb formation were reconstructed by digitally removing the bleb, and computational fluid dynamics simulations were then run on both pre- and post-bleb models. Pulsatile flow conditions and standard boundary conditions were imposed. Results: Aneurysmal flow structure, impingement regions, and wall shear stress magnitudes and gradients were produced for all models, and correlation of these parameters with bleb formation was sought. Certain CFD parameters show significant inter-patient variability, making statistically significant correlation difficult on the partial data subset obtained so far. Conclusion: CFD models are readily producible from 3DRA data. Preliminary results indicate that bleb formation appears to be related to regions of high wall shear stress and to direct impingement regions on the aneurysm wall.
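For reference, the wall shear stress extracted from such simulations is the tangential viscous traction at the vessel wall; under the Newtonian-flow assumption commonly made in these CFD models it reduces to:

```latex
\tau_w \;=\; \mu \left. \frac{\partial u_t}{\partial n} \right|_{\text{wall}}
```

where \(\mu\) is the dynamic viscosity of blood, \(u_t\) the velocity component tangential to the wall, and \(n\) the wall-normal coordinate; the spatial variation of \(\tau_w\) over the aneurysm surface gives the wall shear stress gradients referred to above.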
Abstract:
In this paper, we present the outcomes of a project on the exploration of the use of Field Programmable Gate Arrays (FPGAs) as co-processors for scientific computation. We designed a custom circuit for the pipelined solving of multiple tri-diagonal linear systems. The design is well suited to applications that require many independent tri-diagonal system solves, such as finite difference methods for solving PDEs or applications utilising cubic spline interpolation. The selected solver algorithm was the Tri-Diagonal Matrix Algorithm (TDMA, or Thomas Algorithm). Our solver supports user-specified precision through the use of a custom floating-point VHDL library supporting addition, subtraction, multiplication and division. The variable-precision TDMA solver was tested for correctness in simulation mode, and the TDMA pipeline was tested successfully in hardware using a simplified solver model. The details of the implementation, its limitations, and future work are also discussed.
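For context, the Thomas Algorithm (TDMA) that the hardware pipeline implements is the standard O(n) forward-elimination/back-substitution scheme for tridiagonal systems; a plain double-precision software reference is sketched below, as opposed to the custom variable-precision VHDL arithmetic described in the paper.

```python
def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system A x = d with the Thomas algorithm (TDMA).

    a: sub-diagonal   (length n, a[0] unused)
    b: main diagonal  (length n)
    c: super-diagonal (length n, c[n-1] unused)
    d: right-hand side (length n)
    Assumes a well-conditioned (e.g. diagonally dominant) system; no pivoting.
    """
    n = len(b)
    cp = [0.0] * n
    dp = [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                       # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Example: the 1-D Poisson stencil [-1, 2, -1] with a unit right-hand side.
n = 5
print(thomas_solve([-1.0] * n, [2.0] * n, [-1.0] * n, [1.0] * n))
```

Each system solve is independent of the others, which is what makes the batch-of-systems workload a natural fit for a deep hardware pipeline.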
Abstract:
The feasibility of using an in-hardware implementation of a genetic algorithm (GA) to solve the computationally expensive travelling salesman problem (TSP) is explored, especially in regard to hardware resource requirements for problem and population sizes. We investigate via numerical experiments whether a small population size might prove sufficient to obtain reasonable quality solutions for the TSP, thereby permitting relatively resource efficient hardware implementation on field programmable gate arrays (FPGAs). Software experiments on two TSP benchmarks involving 48 and 532 cities were used to explore the extent to which population size can be reduced without compromising solution quality, and results show that a GA allowed to run for a large number of generations with a smaller population size can yield solutions of comparable quality to those obtained using a larger population. This finding is then used to investigate feasible problem sizes on a targeted Virtex-7 vx485T-2 FPGA platform via exploration of hardware resource requirements for memory and data flow operations.
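A compact software counterpart of the GA used in the population-size experiments might look like the sketch below, with elitism, tournament selection, order crossover and swap mutation; the operators, rates and the random instance are illustrative choices rather than the exact configuration benchmarked on the 48- and 532-city problems.

```python
import random

def tour_length(tour, dist):
    """Total length of a closed tour over the distance matrix `dist`."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def order_crossover(p1, p2, rng):
    """OX: copy a random slice from parent 1, fill the remaining cities in parent 2's order."""
    n = len(p1)
    i, j = sorted(rng.sample(range(n), 2))
    child = [None] * n
    child[i:j] = p1[i:j]
    fill = [c for c in p2 if c not in p1[i:j]]
    k = 0
    for idx in range(n):
        if child[idx] is None:
            child[idx] = fill[k]
            k += 1
    return child

def ga_tsp(dist, pop_size=32, generations=500, mut_rate=0.2, seed=0):
    """Small-population GA: elitism, 3-way tournament selection, OX crossover, swap mutation."""
    rng = random.Random(seed)
    n = len(dist)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    best = min(pop, key=lambda t: tour_length(t, dist))
    for _ in range(generations):
        new_pop = [best[:]]                                   # keep the best tour (elitism)
        while len(new_pop) < pop_size:
            p1 = min(rng.sample(pop, 3), key=lambda t: tour_length(t, dist))
            p2 = min(rng.sample(pop, 3), key=lambda t: tour_length(t, dist))
            child = order_crossover(p1, p2, rng)
            if rng.random() < mut_rate:                       # swap mutation
                a, b = rng.sample(range(n), 2)
                child[a], child[b] = child[b], child[a]
            new_pop.append(child)
        pop = new_pop
        best = min(pop, key=lambda t: tour_length(t, dist))
    return best, tour_length(best, dist)

# Random 20-city instance (coordinates are placeholders, not a TSPLIB benchmark).
rng = random.Random(1)
pts = [(rng.random(), rng.random()) for _ in range(20)]
dist = [[((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 for bx, by in pts] for ax, ay in pts]
best_tour, best_len = ga_tsp(dist)
print("best tour length found:", round(best_len, 3))
```

Shrinking `pop_size` while increasing `generations` is the software-side trade-off the paper exploits, since on an FPGA the population store and fitness pipelines are the dominant resource cost.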
Abstract:
Image representations derived from simplified models of the primary visual cortex (V1), such as HOG and SIFT, elicit good performance in a myriad of visual classification tasks including object recognition/detection, pedestrian detection and facial expression classification. A central question in the vision, learning and neuroscience communities regards why these architectures perform so well. In this paper, we offer a unique perspective to this question by subsuming the role of V1-inspired features directly within a linear support vector machine (SVM). We demonstrate that a specific class of such features in conjunction with a linear SVM can be reinterpreted as inducing a weighted margin on the Kronecker basis expansion of an image. This new viewpoint on the role of V1-inspired features allows us to answer fundamental questions on the uniqueness and redundancies of these features, and offer substantial improvements in terms of computational and storage efficiency.
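As a concrete, if generic, instance of the pipeline being analysed, a V1-inspired descriptor such as HOG feeding a linear SVM can be assembled in a few lines; this illustrates the feature-plus-linear-classifier combination under discussion, not the Kronecker-basis reformulation itself, and it assumes scikit-image and scikit-learn are available.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def hog_features(images):
    """Extract HOG (a V1-inspired, gradient-orientation descriptor) for each image."""
    return np.array([
        hog(img, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
        for img in images
    ])

# Toy two-class problem on synthetic 32x32 grayscale images (placeholder data).
rng = np.random.default_rng(0)
blank = rng.normal(0.5, 0.05, size=(50, 32, 32))
striped = blank.copy()
striped[:, ::4, :] += 0.5                        # class 1 carries strong horizontal gradients
X = np.concatenate([blank, striped])
y = np.array([0] * 50 + [1] * 50)

clf = LinearSVC(C=1.0).fit(hog_features(X), y)   # linear SVM on the V1-inspired features
print("training accuracy:", clf.score(hog_features(X), y))
```

Because both the descriptor's binning/pooling structure and the SVM are fixed linear-algebraic operations on the image, the paper can fold the former into the latter and study the combined classifier directly in image space.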