988 results for Scenario Programming, Markup Languages, 3D Virtual Worlds


Relevance:

30.00%

Publisher:

Abstract:

Managed execution frameworks, such as the .NET Common Language Runtime or the Java Virtual Machine, provide a rich environment for the creation of application programs. These execution environments are ideally suited for languages that depend on type-safety and the declarative control of feature access. Furthermore, such frameworks typically provide a rich collection of library primitives specialized for almost every domain of application programming. Thus, when a new language is implemented on one of these frameworks, it becomes necessary to provide some kind of mapping from the new language to the libraries of the framework. The design of such mappings is challenging, since the type-system of the new language may not span the domain exposed in the library application programming interfaces (APIs). The nature of these design considerations was clarified in the implementation of the Gardens Point Component Pascal (gpcp) compiler. In this paper we describe the issues, and the solutions that we settled on in this case. The problems that were solved have a wider applicability than just our example, since they arise whenever any similar language is hosted in such an environment.
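The mapping problem can be sketched abstractly. The Python fragment below is purely illustrative; the type names and mappings are hypothetical, not gpcp's actual design. The point it shows: a source-language type either has a direct image in the framework's type system, or needs a synthesized bridge so that framework APIs remain callable from the new language.

    # Hypothetical sketch of mapping a Pascal-family type system onto a
    # framework type system; names are illustrative, not gpcp's design.
    DIRECT_MAP = {
        "INTEGER": "System.Int32",
        "REAL":    "System.Double",
        "CHAR":    "System.Char",
        "BOOLEAN": "System.Boolean",
    }

    def map_type(src_type: str) -> str:
        """Return the framework type for a source type, or a bridge marker."""
        if src_type in DIRECT_MAP:
            return DIRECT_MAP[src_type]
        # No direct image in the framework: a wrapper type must be emitted
        # so that library APIs remain usable from the new language.
        return f"Bridge<{src_type}>"

    print(map_type("INTEGER"))  # System.Int32
    print(map_type("SET"))      # Bridge<SET> -- needs a synthesized wrapper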

Relevance:

30.00%

Publisher:

Abstract:

Interactive development environments are making a resurgence. The traditional batch style of programming, edit -> compile -> run, is slowly being reevaluated by the development community at large. Languages such as Perl, Python and Ruby are at the heart of a new programming culture commonly described as extreme, agile or dynamic. Musicians are also beginning to embrace these environments and to investigate the opportunity to use dynamic programming tools in live performance. This paper provides an introduction to Impromptu, a new interactive development environment for musicians and sound artists.

Relevance:

30.00%

Publisher:

Abstract:

Component software has many benefits, most notably increased software re-use; however, the component software process places heavy burdens on programming language technology, which modern object-oriented programming languages do not address. In particular, software components require specifications that are both sufficiently expressive and sufficiently abstract, and, where possible, these specifications should be checked formally by the programming language. This dissertation presents a programming language called Mentok that provides two novel programming language features enabling improved specification of stateful component roles. Negotiable interfaces are interface types extended with protocols, and allow specification of changing method availability, including some patterns of out-calls and re-entrance. Type layers are extensions to module signatures that allow specification of abstract control flow constraints through the interfaces of a component-based application. Development of Mentok's unique language features included creation of MentokC, the Mentok compiler, and formalization of key properties of Mentok in mini-languages called MentokP and MentokL.
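The idea of changing method availability can be illustrated with a small dynamically checked sketch. Note the hedge: Mentok enforces such protocols statically in its type system, whereas this Python fragment, with its invented Stream example, only mimics the effect at runtime.

    # Illustrative sketch (not Mentok syntax) of a negotiable-interface-style
    # protocol: methods are only available in certain states, and calls that
    # violate the protocol are rejected.
    class ProtocolError(Exception):
        pass

    class Stream:
        """A stream whose protocol is: open -> read* -> close."""

        def __init__(self):
            self.state = "created"

        def open(self):
            if self.state != "created":
                raise ProtocolError("open() only available in state 'created'")
            self.state = "open"

        def read(self):
            if self.state != "open":
                raise ProtocolError("read() only available in state 'open'")
            return b""

        def close(self):
            if self.state != "open":
                raise ProtocolError("close() only available in state 'open'")
            self.state = "closed"

    s = Stream()
    s.open()
    s.read()
    s.close()
    # s.read() would now raise ProtocolError: the method is no longer available.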

Relevance:

30.00%

Publisher:

Abstract:

Embedded generalized markup, as applied by digital humanists to the recording and studying of our textual cultural heritage, suffers from a number of serious technical drawbacks. As a result of its evolution from early printer control languages, generalized markup can only express a document’s ‘logical’ structure via a repertoire of permissible printed format structures. In addition to the well-researched overlap problem, the embedding of markup codes into texts that never had them when written leads to a number of further difficulties: the inclusion of potentially obsolescent technical and subjective information into texts that are supposed to be archivable for the long term, the manual encoding of information that could be better computed automatically, and the obscuring of the text by highly complex technical data. Many of these problems can be alleviated by asserting a separation between the versions of which many cultural heritage texts are composed, and their content. In this way the complex inter-connections between versions can be handled automatically, leaving only simple markup for individual versions to be handled by the user.
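The proposed separation can be sketched with a toy data structure; this is illustrative only, not the authors' actual model. Shared content is stored once, points of variation are held externally, and each materialised version carries only its own simple text.

    # A minimal sketch of separating versions from content: variation lives
    # in the data structure, not in embedded markup codes.
    document = [
        "It was the best of times, ",
        {"v1": "it was the worst of times.",
         "v2": "it was the blurst of times."},
    ]

    def render(doc, version):
        """Materialise one version of the text from the shared structure."""
        parts = []
        for seg in doc:
            parts.append(seg if isinstance(seg, str) else seg[version])
        return "".join(parts)

    print(render(document, "v1"))
    print(render(document, "v2"))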

Relevance:

30.00%

Publisher:

Abstract:

With the emergence of multi-core processors into the mainstream, parallel programming is no longer the specialized domain it once was. There is a growing need for systems to allow programmers to more easily reason about data dependencies and inherent parallelism in general purpose programs. Many of these programs are written in popular imperative programming languages like Java and C#. In this thesis I present a system for reasoning about side-effects of evaluation in an abstract and composable manner that is suitable for use by both programmers and automated tools such as compilers. The goal of developing such a system is both to facilitate the automatic exploitation of the inherent parallelism present in imperative programs and to allow programmers to reason about dependencies which may be limiting the parallelism available for exploitation in their applications. Previous work on languages and type systems for parallel computing has tended to focus on providing the programmer with tools to facilitate the manual parallelization of programs; programmers must decide when and where it is safe to employ parallelism without the assistance of the compiler or other automated tools. None of the existing systems combine abstraction and composition with parallelization and correctness checking to produce a framework which helps both programmers and automated tools to reason about inherent parallelism. In this work I present a system for abstractly reasoning about side-effects and data dependencies in modern, imperative, object-oriented languages using a type and effect system based on ideas from Ownership Types. I have developed sufficient conditions for the safe, automated detection and exploitation of a number of task, data and loop parallelism patterns in terms of ownership relationships. To validate my work, I have applied my ideas to the C# 3.0 language to produce a language extension called Zal. I have implemented a compiler for the Zal language as an extension of the GPC# research compiler as a proof of concept of my system, and have used it to parallelize a number of real-world applications to demonstrate the feasibility of my proposed approach. In addition to this empirical validation, I present an argument for the correctness of the type system and language semantics I have proposed, as well as proof sketches for the correctness of the proposed sufficient conditions for parallelization.
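The safety condition at the heart of such effect systems can be stated compactly. The following sketch is not Zal itself (whose checks are static and phrased in terms of ownership contexts); it only shows the disjointness test that licenses running two computations in parallel, given their read and write sets.

    # Two computations may run in parallel when neither writes anything the
    # other reads or writes (Bernstein-style non-interference).
    def effects_commute(a_reads, a_writes, b_reads, b_writes):
        """True if the two effect sets permit safe parallel execution."""
        return (a_writes.isdisjoint(b_reads | b_writes)
                and b_writes.isdisjoint(a_reads | a_writes))

    # Two loop iterations touching disjoint state: parallel-safe.
    print(effects_commute({"x"}, {"y"}, {"x"}, {"z"}))   # True
    # A write/read conflict on "y": must run sequentially.
    print(effects_commute({"x"}, {"y"}, {"y"}, {"z"}))   # False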

Relevance:

30.00%

Publisher:

Abstract:

The design of pre-contoured fracture fixation implants (plates and nails) that correctly fit the anatomy of a patient utilises 3D models of long bones with accurate geometric representation. 3D data is usually available from computed tomography (CT) scans of human cadavers, which generally represent the above-60-year age group. Thus, despite the fact that half of the seriously injured population comes from the 30-year age group and below, virtually no data exists from these younger age groups to inform the design of implants that optimally fit such patients. Hence, relevant bone data from these age groups is required. The current gold standard for acquiring such data, CT, involves ionising radiation and cannot be used to scan healthy human volunteers. Magnetic resonance imaging (MRI) has been shown to be a potential alternative in previous studies conducted using small bones (tarsal bones) and parts of the long bones. However, in order to use MRI effectively for 3D reconstruction of human long bones, further validations using long bones and appropriate reference standards are required.

Accurate reconstruction of 3D models from CT or MRI data sets requires an accurate image segmentation method. Currently available sophisticated segmentation methods involve complex programming and mathematics that researchers are not trained to perform. Therefore, an accurate but relatively simple segmentation method is required for segmentation of CT and MRI data. Furthermore, some of the limitations of 1.5T MRI, such as very long scanning times and poor contrast in articular regions, can potentially be reduced by using higher-field 3T MRI imaging. However, a quantification of the signal to noise ratio (SNR) gain at the bone–soft tissue interface should be performed; this is not reported in the literature. Because MRI scanning of long bones involves very long scanning times, the acquired images are prone to motion artefacts caused by random movements of the subject's limbs. One such artefact is the step artefact, believed to occur when the volunteer moves during a scan; it needs to be corrected before the models can be used for implant design.

The first aim of this study was to investigate two accurate but simple segmentation methods, intensity thresholding and Canny edge detection, for segmentation of MRI and CT data. The second aim was to investigate the usability of MRI as a radiation-free imaging alternative to CT for reconstruction of 3D models of long bones. The third aim was to use 3T MRI to address the poor articular contrast and long scanning times of current MRI. The fourth and final aim was to minimise the step artefact using 3D modelling techniques.

The segmentation methods were investigated using CT scans of five ovine femora. Single-level thresholding was performed using a visually selected threshold level to segment the complete femur. For multilevel thresholding, multiple threshold levels calculated from the threshold selection method were used for the proximal, diaphyseal and distal regions of the femur. Canny edge detection was applied by delineating the outer and inner contours of 2D images and then combining them to generate the 3D model. Models generated from these methods were compared to the reference standard generated from mechanical contact scans of the denuded bone. The second aim was achieved using CT and MRI scans of five ovine femora, segmented with the multilevel threshold method; a surface geometric comparison was conducted between CT based, MRI based and reference models. To quantitatively compare the 1.5T images to the 3T MRI images, the right lower limbs of five healthy volunteers were scanned using scanners from the same manufacturer, and the images obtained using identical protocols were compared by means of SNR and contrast to noise ratio (CNR) of muscle, bone marrow and bone. In order to correct the step artefact in the final 3D models, the step was simulated in five ovine femora scanned with a 3T MRI scanner and then corrected using an aligning method based on the iterative closest point (ICP) algorithm.

The present study demonstrated that the multi-threshold approach, in combination with the threshold selection method, can generate 3D models of long bones with an average deviation of 0.18 mm; the corresponding figure for the single threshold method was 0.24 mm, and the difference between the two methods was statistically significant. In comparison, the Canny edge detection method generated an average deviation of 0.20 mm. MRI based models exhibited an average deviation of 0.23 mm, compared with 0.18 mm for CT based models; this difference was not statistically significant. 3T MRI improved the contrast at the bone–muscle interfaces of most anatomical regions of femora and tibiae, potentially reducing the inaccuracies conferred by the poor contrast of the articular regions. Using the robust ICP algorithm to align the 3D surfaces, the step artefact caused by the volunteer moving the leg was corrected, with errors of 0.32 ± 0.02 mm when compared with the reference standard.

The study concludes that magnetic resonance imaging, together with simple multilevel thresholding segmentation, is able to produce 3D models of long bones with accurate geometric representations. The method is, therefore, a potential alternative to the current gold standard, CT imaging.
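As a minimal sketch of the multilevel thresholding idea, assuming NumPy (the threshold values and region boundaries below are placeholders, not the study's calibrated values): each region of the bone is segmented with its own intensity threshold and the binary masks are recombined.

    import numpy as np

    def multilevel_threshold(volume, region_slices, thresholds):
        """Segment bone by applying a region-specific intensity threshold."""
        mask = np.zeros(volume.shape, dtype=bool)
        for sl, t in zip(region_slices, thresholds):
            mask[sl] = volume[sl] >= t
        return mask

    volume = np.random.rand(90, 64, 64)        # stand-in for a CT/MRI volume
    regions = [np.s_[0:30], np.s_[30:60], np.s_[60:90]]  # proximal/shaft/distal
    mask = multilevel_threshold(volume, regions, [0.7, 0.5, 0.65])
    print(mask.sum(), "voxels classified as bone")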

Relevance:

30.00%

Publisher:

Abstract:

The act of computer programming is generally considered to be temporally removed from a computer program’s execution. In this paper we discuss the idea of programming as an activity that takes place within the temporal bounds of a real-time computational process and its interactions with the physical world. We ground these ideas within the context of livecoding – a live audiovisual performance practice. We then describe how the development of the programming environment “Impromptu” has addressed our ideas of programming with time and the notion of the programmer as an agent in a cyber-physical system.
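The core pattern, often called temporal recursion, can be sketched in a few lines. Impromptu itself hosts a Scheme dialect; the Python fragment below is an analogy built on the standard sched module, not Impromptu's API. A function performs an action, then schedules its own next run at a precise future time, so redefining it between runs changes the behaviour of the still-running process.

    import sched
    import time

    scheduler = sched.scheduler(time.time, time.sleep)

    def pulse(beat, interval=0.5):
        print(f"beat {beat}")            # stand-in for emitting a note
        if beat < 8:                     # reschedule self: the loop lives in time
            scheduler.enter(interval, 1, pulse, argument=(beat + 1,))

    scheduler.enter(0, 1, pulse, argument=(1,))
    scheduler.run()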

Relevance:

30.00%

Publisher:

Abstract:

This paper considers the problem of reconstructing the motion of a 3D articulated tree from 2D point correspondences subject to some temporal prior. Hitherto, smooth motion has been encouraged using a trajectory basis, yielding a hard combinatorial problem with time complexity growing exponentially in the number of frames. Branch and bound strategies have previously attempted to curb this complexity whilst maintaining global optimality. However, they provide no guarantee of being more efficient than exhaustive search. Inspired by recent work which reconstructs general trajectories using compact high-pass filters, we develop a dynamic programming approach which scales linearly in the number of frames, leveraging the intrinsically local nature of filter interactions. Extension to affine projection enables reconstruction without estimating cameras.
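The linear-time structure can be sketched as a standard Viterbi-style recursion over frames; this is a toy stand-in for the paper's formulation, with placeholder unary and pairwise costs. Because each frame interacts only with its neighbour, one forward pass and one backtrack suffice, so the cost grows linearly in the number of frames.

    import numpy as np

    def dp_reconstruct(unary, pairwise):
        """unary: (frames, states) data costs; pairwise: (states, states) smoothness."""
        frames, states = unary.shape
        cost = unary[0].copy()
        back = np.zeros((frames, states), dtype=int)
        for t in range(1, frames):                 # one pass: O(frames)
            trans = cost[:, None] + pairwise       # candidate predecessors
            back[t] = trans.argmin(axis=0)         # best predecessor per state
            cost = trans.min(axis=0) + unary[t]
        path = [int(cost.argmin())]
        for t in range(frames - 1, 0, -1):         # backtrack the best path
            path.append(int(back[t, path[-1]]))
        return path[::-1]

    unary = np.random.rand(100, 4)                 # 100 frames, 4 hypotheses each
    pairwise = np.abs(np.arange(4)[:, None] - np.arange(4))  # smoothness prior
    print(dp_reconstruct(unary, pairwise))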

Relevance:

30.00%

Publisher:

Abstract:

This paper is concerned with the optimal path planning and initialization interval of one or two UAVs in the presence of a constant wind. The method draws on previous results from the literature on synchronization of UAVs along convex curves and on path planning and sampling in 2D, and extends them to 3D. It can be applied to observe gas/particle emissions inside a control volume during sampling loops. The flight pattern is composed of two phases: a start-up interval and a sampling interval, the latter represented by a semi-circular path. The method was tested on four complex model test cases in 2D and 3D, as well as on one simulated real-world scenario in 2D and one in 3D.
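The sampling-loop geometry can be sketched as follows; this is a simplification with invented parameter values, and the crab-angle correction, climb profile and start-up interval are all omitted. Waypoints are placed on a semicircle around the control volume, and the along-track component of the constant wind modulates the ground speed at each waypoint.

    import math

    def semicircle_waypoints(cx, cy, radius, n, wind_x=0.0, airspeed=15.0):
        """Waypoints and ground speeds along a semicircle in a constant wind."""
        pts = []
        for i in range(n):
            theta = math.pi * i / (n - 1)      # 0..pi: half a sampling loop
            x = cx + radius * math.cos(theta)
            y = cy + radius * math.sin(theta)
            # Heading tangent to the circle; simplified ground speed =
            # airspeed + along-track wind (crab-angle correction ignored).
            heading = theta + math.pi / 2
            ground_speed = airspeed + wind_x * math.cos(heading)
            pts.append((x, y, ground_speed))
        return pts

    for x, y, v in semicircle_waypoints(0.0, 0.0, 50.0, 5, wind_x=5.0):
        print(f"x={x:6.1f}  y={y:6.1f}  ground speed={v:5.1f} m/s")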

Relevance:

30.00%

Publisher:

Abstract:

Fast restoration of critical loads and non-black-start generators can significantly reduce the economic losses caused by power system blackouts. In a parallel power system restoration scenario, the sectionalization of restoration subsystems plays a very important role in determining the pickup of critical loads before synchronization. Most existing research focuses on the startup of non-black-start generators; the restoration of critical loads, especially loads with cold load characteristics, has not yet been addressed in optimizing the subsystem divisions. As a result, sectionalized restoration subsystems cannot achieve the best coordination between the pickup of loads and the ramping of generators. In order to generate sectionalizing strategies that consider the pickup of critical loads in parallel power system restoration scenarios, this paper proposes an optimization model incorporating power system constraints, the characteristics of cold load pickup and the features of generator startup. A bi-level programming approach is employed to solve the proposed sectionalizing model: the upper level addresses the optimal sectionalizing problem for the restoration subsystems, while the lower level minimizes the outage durations of critical loads. The proposed sectionalizing model has been validated on the New England 39-bus system and the IEEE 118-bus system, and further comparisons with several existing methods are carried out as well.
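The bi-level structure can be shown as a bare skeleton; this is illustrative only, since the actual model carries power-system constraints, cold-load pickup characteristics and generator start-up features. The upper level searches over candidate sectionalizing strategies, and each candidate is scored by the lower level's minimized critical-load outage measure.

    def lower_level(strategy):
        """Score one sectionalizing by its critical-load outage measure.
        Toy stand-in: use the load imbalance between the two subsystems."""
        return abs(sum(strategy[0]) - sum(strategy[1]))

    def upper_level(candidate_strategies):
        """Pick the sectionalizing whose lower-level optimum is best."""
        return min(candidate_strategies, key=lower_level)

    # Candidate splits of six buses (toy load values) into two subsystems.
    candidates = [
        ((10, 20, 30), (15, 25, 35)),   # imbalance 15
        ((10, 35, 25), (15, 20, 30)),   # imbalance 5 -> chosen
    ]
    print(upper_level(candidates))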

Relevance:

30.00%

Publisher:

Abstract:

Learning mathematics is a complex and dynamic process. In this paper, the authors adopt a semiotic framework (Yeh & Nason, 2004) and highlight programming as one of the main aspects of the semiosis, or meaning-making, for the learning of mathematics. During a 10-week teaching experiment, mathematical meaning-making was enriched when primary students wrote Logo programs to create 3D virtual worlds. The analysis of results found deep learning in mathematics, as well as in technology and engineering areas. This prompted a rethinking of the nature of learning mathematics and a need to employ and examine a more holistic approach to learning in the science, technology, engineering, and mathematics (STEM) areas.
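A flavour of the Logo-style turtle programming involved (this sketch is illustrative, not the classroom environment the study used): a turtle carries a 3D position and heading, and sequences of simple forward/turn/up commands trace out the geometry of a virtual world.

    import math

    class Turtle3D:
        def __init__(self):
            self.pos = [0.0, 0.0, 0.0]
            self.yaw = 0.0            # rotation in the ground plane, degrees
            self.path = [tuple(self.pos)]

        def forward(self, d):
            self.pos[0] += d * math.cos(math.radians(self.yaw))
            self.pos[1] += d * math.sin(math.radians(self.yaw))
            self.path.append(tuple(self.pos))

        def right(self, angle):
            self.yaw -= angle

        def up(self, d):              # the 3D step Logo's flat turtle lacks
            self.pos[2] += d
            self.path.append(tuple(self.pos))

    t = Turtle3D()
    for _ in range(4):                # a square wall panel, then rise a level
        t.forward(10)
        t.right(90)
    t.up(10)
    print(t.path)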