193 results for: Computer networks; Communication protocols; Protocol hierarchies; Software Defined Networking; Internet
Abstract:
Teachers are under increasing pressure from government and school management to incorporate technology into lessons. They need to consider which technologies can most effectively enhance subject learning, encourage higher order thinking skills and support the performance of authentic tasks. This chapter reviews the practical and theoretical tools that have been developed to aid teachers in selecting software and reviews the software assessment methodologies from the 1980s to the present day. It concludes that teachers need guidance to structure the evaluation of technology, to consider its educational affordances, its usability, its suitability for the students and the classroom environment and its fit to the teachers’ preferred pedagogies.
Abstract:
Self-segregation and compartmentalisation are observed experimentally to occur spontaneously on live membranes as well as reconstructed model membranes. It is believed that many of these processes are caused or supported by anomalous diffusive behaviours of biomolecules on membranes due to the complex and heterogeneous nature of these environments. These phenomena are, on the one hand, of great interest in biology, since they may be an important way for biological systems to selectively localize receptors, regulate signaling or modulate kinetics; on the other, they provide an inspiration for engineering designs that mimic natural systems. We present an interactive software package we are developing for the purpose of simulating such processes numerically using a fundamental Monte Carlo approach. This program includes the ability to simulate kinetics and mass transport in the presence of either mobile or immobile obstacles and other relevant structures such as liquid-ordered lipid microdomains. We also present preliminary simulation results, obtained using the program, on the selective spatial localization and the modulation of chemical kinetics produced by immobile obstacles on the membrane.
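The abstract gives no implementation detail; purely as an illustrative sketch of the kind of fundamental Monte Carlo approach described (not the authors' package), a single tracer performing a random walk on a 2D lattice with immobile obstacles could be simulated as follows. The grid size, obstacle fraction and step count are arbitrary assumptions.

```python
import numpy as np

# Illustrative lattice Monte Carlo sketch (not the authors' package): a single
# tracer diffusing on a 2D periodic lattice with randomly placed immobile
# obstacles (excluded sites). Grid size, obstacle fraction and step count are
# arbitrary assumptions.
rng = np.random.default_rng(0)
L, obstacle_fraction, n_steps = 128, 0.3, 10_000

obstacles = rng.random((L, L)) < obstacle_fraction    # True = blocked site
moves = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])

pos = np.array([L // 2, L // 2])                       # wrapped position (for lookups)
while obstacles[tuple(pos)]:                           # ensure we start on a free site
    pos = rng.integers(0, L, size=2)
unwrapped = pos.astype(float).copy()                   # unwrapped position (for displacement)

displacements = [unwrapped.copy()]
for _ in range(n_steps):
    step = moves[rng.integers(4)]
    trial = (pos + step) % L                           # periodic boundaries
    if not obstacles[tuple(trial)]:                    # reject moves onto obstacles
        pos = trial
        unwrapped += step
    displacements.append(unwrapped.copy())

# Squared displacement of the tracer over time; averaging over many such runs
# gives the mean-squared displacement, whose sub-linear growth indicates
# hindered (anomalous) diffusion caused by the obstacles.
traj = np.array(displacements)
sq_disp = np.sum((traj - traj[0]) ** 2, axis=1)
print("squared displacement after", n_steps, "steps:", sq_disp[-1])
```

Averaging the squared displacement over many such runs, and adding reacting species, would reproduce the kind of kinetics-modulation experiments the abstract reports.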
Abstract:
This paper defines and discusses two contrasting approaches to designing game environments. The first, referred to as scripting, requires developers to anticipate, hand-craft and script specific game objects, events and player interactions. The second, known as emergence, involves defining general, global rules that interact to give rise to emergent gameplay. Each of these approaches is defined, discussed and analyzed with respect to the considerations and effects for game developers and game players. Subsequently, various techniques for implementing these design approaches are identified and discussed. It is concluded that scripting and emergence are two extremes of the same continuum, neither of which is ideal for game development. Rather, there needs to be a compromise in which the boundaries of action (such as story and game objectives) can be hardcoded and non-scripted behavior (such as interactions and strategies) is able to emerge within these boundaries.
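As an illustrative contrast between the two approaches (hypothetical objects and rules, not drawn from the paper), a scripted design hard-codes one anticipated interaction, while an emergent design states a general rule and lets outcomes arise from it:

```python
# Hypothetical contrast between the two design approaches discussed above.

# Scripted approach: the developer anticipates one specific interaction and
# hard-codes its outcome.
def on_player_uses_torch_on_rope_bridge(world):
    # One specific, hand-crafted outcome for one specific interaction.
    world["rope_bridge"]["state"] = "burning"

# Emergent approach: a general, global rule ("fire spreads to flammable
# neighbours") applied uniformly produces the same outcome, plus
# unanticipated ones.
def spread_fire(world):
    for name, obj in world.items():
        if obj.get("state") == "burning":
            for neighbour in obj.get("neighbours", []):
                if world[neighbour].get("flammable"):
                    world[neighbour]["state"] = "burning"

world = {
    "torch":       {"state": "burning", "neighbours": ["rope_bridge"]},
    "rope_bridge": {"flammable": True, "neighbours": ["wooden_hut"]},
    "wooden_hut":  {"flammable": True, "neighbours": []},
}
for _ in range(3):                        # a few simulation ticks
    spread_fire(world)
print(world["wooden_hut"]["state"])       # "burning" -- never explicitly scripted
```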
Abstract:
A new approach to pattern recognition using invariant parameters based on higher order spectra is presented. In particular, invariant parameters derived from the bispectrum are used to classify one-dimensional shapes. The bispectrum, which is translation invariant, is integrated along straight lines passing through the origin in bifrequency space. The phase of the integrated bispectrum is shown to be scale and amplification invariant, as well. A minimal set of these invariants is selected as the feature vector for pattern classification, and a minimum distance classifier using a statistical distance measure is used to classify test patterns. The classification technique is shown to distinguish two similar, but different bolts given their one-dimensional profiles. Pattern recognition using higher order spectral invariants is fast, suited for parallel implementation, and has high immunity to additive Gaussian noise. Simulation results show very high classification accuracy, even for low signal-to-noise ratios.
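A rough sketch of the idea, under the assumption that the bispectrum B(f1, f2) = X(f1) X(f2) X*(f1 + f2) is integrated along radial lines f2 = a·f1 and the phase of each integral is kept as a feature, might look like the following (a simplified reading of the method, not the authors' implementation; the line spacing, signal length and prototype signals are arbitrary):

```python
import numpy as np

# Illustrative sketch of bispectral phase invariants for 1D patterns.
def bispectral_phase_features(x, n_lines=8):
    X = np.fft.fft(x)
    N = len(x)
    feats = []
    # Integrate B(f1, f2) = X(f1) X(f2) conj(X(f1 + f2)) along radial lines
    # f2 = a * f1 passing through the origin of bifrequency space.
    for a in np.linspace(0.05, 0.95, n_lines):
        acc = 0.0 + 0.0j
        for f1 in range(1, N // 2):
            f2 = int(round(a * f1))
            if f2 >= 1 and f1 + f2 < N // 2:
                acc += X[f1] * X[f2] * np.conj(X[f1 + f2])
        feats.append(np.angle(acc))   # the phase is translation/scale/amplification invariant
    return np.array(feats)

# Minimum-distance classification against stored class prototypes.
# A plain Euclidean distance stands in here for the statistical distance
# measure used in the paper.
def classify(x, prototypes):
    f = bispectral_phase_features(x)
    dists = {label: np.linalg.norm(f - p) for label, p in prototypes.items()}
    return min(dists, key=dists.get)

# Hypothetical usage with two synthetic 1D profiles as class templates.
t = np.linspace(0, 1, 256, endpoint=False)
prototypes = {
    "bolt_A": bispectral_phase_features(np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 9 * t)),
    "bolt_B": bispectral_phase_features(np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 11 * t)),
}
test = 1.7 * (np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 9 * t))  # amplified copy of A
print(classify(test, prototypes))   # expected: "bolt_A"
```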
Abstract:
This paper examines the affordances of the philosophy and practice of open source and its application in developing music education software. In particular, I examine the parallels inherent in the 'openness' of pragmatist philosophy in education (Dewey 1916, 1989), such as group or collaborative learning, discovery learning (Bruner 1966) and learning through creative activity with computers (Papert 1980, 1994). Primarily I am interested in 'relational pedagogies' (Ruthmann and Dillon, in press), which are in a real sense about the ethics of the transaction between student and teacher in an ecology where technology plays a more significant role. In these contexts, relational pedagogies refers to how the music teacher manages their relationships with students and evaluates the affordances of open source technology in that process. It is concerned directly with how the relationship between student and teacher is affected by the technological tools, as is the capacity for music making and learning. In particular, technologies that have agency present the opportunity for a partnership between user and technology that enhances the capacity for expressive music making, productive social interaction and learning. In this instance, technologies with agency are defined as ones that enhance the capacity to be expressive and perform tasks with virtuosity and complexity, where the technology translates simple commands and gestures into complex outcomes. The technology enacts a partnership with the user that becomes both a cognitive and performative amplifier. Specifically, we have used this term to describe interactions with generative technologies that use procedural invention as a creative technique to produce music and visual media.
Abstract:
Computer resource allocation represents a significant challenge, particularly for multiprocessor systems, which consist of shared computing resources to be allocated among co-runner processes and threads. While efficient resource allocation yields a highly efficient and stable overall multiprocessor system and good individual thread performance, poor resource allocation causes significant performance bottlenecks even on systems with abundant computing resources. This thesis proposes a cache aware adaptive closed loop scheduling framework as an efficient resource allocation strategy for this highly dynamic resource management problem, which requires instant estimation of highly uncertain and unpredictable resource patterns. Many different approaches to this dynamic resource allocation problem have been developed, but neither the dynamic nature nor the time-varying and uncertain characteristics of the problem are well considered. These approaches employ either static or dynamic optimization methods, or advanced scheduling algorithms such as the Proportional Fair (PFair) scheduling algorithm. Some of the approaches that do consider the dynamic nature of multiprocessor systems apply only a basic closed loop system and hence fail to take the time-varying nature and uncertainty of the system into account. Therefore, further research into multiprocessor resource allocation is required. Our closed loop cache aware adaptive scheduling framework takes resource availability and resource usage patterns into account by measuring time-varying factors such as cache miss counts, stalls and instruction counts. More specifically, the cache usage pattern of a thread is identified using the QR recursive least squares (RLS) algorithm and cache miss count time-series statistics. For the identified cache resource dynamics, the framework enforces instruction fairness for the threads. Fairness, in the context of this research project, is defined as resource allocation equity, which reduces co-runner thread dependence in a shared resource environment. In this way, instruction count degradation due to shared cache resource conflicts is overcome. In this respect, our closed loop cache aware adaptive scheduling framework contributes to the research field in two major and three minor aspects. The two major contributions lead to the cache aware scheduling system. The first major contribution is the development of the execution fairness algorithm, which reduces the co-runner cache impact on thread performance. The second is the development of the relevant mathematical models, such as the thread execution pattern and cache access pattern models, which formulate the execution fairness algorithm in terms of mathematical quantities. Following the development of the cache aware scheduling system, our adaptive self-tuning control framework is constructed to add an adaptive closed loop aspect to the cache aware scheduling system. This control framework consists of two main components: the parameter estimator and the controller design module. The first minor contribution is the development of the parameter estimators; the QR recursive least squares (RLS) algorithm is applied within our closed loop cache aware adaptive scheduling framework to estimate the highly uncertain and time-varying cache resource patterns of threads. The second minor contribution is the design of the controller design module; an algebraic controller design algorithm, pole placement, is used to design the relevant controller, which is able to provide the desired time-varying control action. The adaptive self-tuning control framework and the cache aware scheduling system together constitute our final framework, the closed loop cache aware adaptive scheduling framework. The third minor contribution is the validation of the efficiency of this framework in overcoming co-runner cache dependency. Time-series statistical counters are developed for the M-Sim multi-core simulator, and the theoretical findings and mathematical formulations are implemented as MATLAB m-file code. In this way, the overall framework is tested and the experimental outcomes are analysed. According to our experimental outcomes, the closed loop cache aware adaptive scheduling framework successfully drives co-runner cache dependent thread instruction counts to co-runner independent instruction counts with an error margin of up to 25% when the cache is highly utilized. In addition, thread cache access patterns are estimated with 75% accuracy.
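The abstract does not reproduce the estimator equations; the sketch below shows a conventional recursive least squares update with a forgetting factor, as a stand-in for the QR-RLS estimator mentioned above (not the thesis code; the AR model order, forgetting factor and sample miss counts are assumptions):

```python
import numpy as np

# Minimal recursive least squares (RLS) sketch for tracking a time-varying
# signal such as a per-interval cache miss count. This is an illustrative
# stand-in for the QR-RLS estimator described above, not the thesis code.
class RLSEstimator:
    def __init__(self, order=3, forgetting=0.98):
        self.order = order                      # AR model order (assumed)
        self.lam = forgetting                   # forgetting factor (assumed)
        self.theta = np.zeros(order)            # estimated AR coefficients
        self.P = np.eye(order) * 1e3            # inverse correlation matrix

    def update(self, history, measurement):
        """history: last `order` samples (newest first); measurement: new sample."""
        phi = np.asarray(history, dtype=float)
        k = self.P @ phi / (self.lam + phi @ self.P @ phi)   # gain vector
        err = measurement - phi @ self.theta                 # prediction error
        self.theta += k * err
        self.P = (self.P - np.outer(k, phi) @ self.P) / self.lam
        return phi @ self.theta                              # fit using the updated model

# Hypothetical usage: feed per-quantum cache miss counts from the simulator.
est = RLSEstimator()
miss_counts = [120, 135, 128, 300, 310, 305, 140, 150, 145, 142]
hist = [0.0, 0.0, 0.0]
for m in miss_counts:
    est.update(hist, m)
    hist = [m] + hist[:-1]
print("estimated AR coefficients:", est.theta)
```

In a framework like the one described, the estimated coefficients would feed a pole-placement controller that adjusts scheduling decisions to enforce instruction fairness.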
Abstract:
Software forms an important part of the interface between citizens and their government. An increasing number of government functions are being performed, controlled, or delivered electronically. This software, like all language, is never value-neutral, but must, to some extent, reflect the values of the coder and proprietor. The move that many governments are making towards e-governance, and the increasing reliance that is being placed upon software in government, necessitate a rethinking of the relationships of power and control that are embodied in software.
Abstract:
This research is one of several ongoing studies conducted within the IT Professional Services (ITPS) research programme at Queensland University of Technology (QUT). In 2003, ITPS introduced the IS-Impact model, a model for measuring information systems success from the viewpoint of multiple stakeholders. The model, along with its instrument, is robust, simple, yet generalisable, and yields results that are comparable across time, stakeholders, different systems and system contexts. The IS-Impact model is defined as "a measure at a point in time, of the stream of net benefits from the Information System (IS), to date and anticipated, as perceived by all key-user-groups". The model comprises four dimensions: 'Individual Impact', 'Organizational Impact', 'Information Quality' and 'System Quality'. The two Impact dimensions measure the up-to-date impact of the evaluated system, while the remaining two Quality dimensions act as proxies for probable future impacts (Gable, Sedera & Chan, 2008). To fulfil the goal of ITPS, "to develop the most widely employed model", this research re-validates and extends the IS-Impact model in a new context. This method/context-extension research aims to test the generalisability of the model by addressing its known limitations. One of these limitations relates to the extent of the model's external validity. In order to gain wide acceptance, a model should be consistent and work well in different contexts. The IS-Impact model, however, was only validated in the Australian context, with packaged software chosen as the IS under study. Thus, this study is concerned with whether the model can be applied in a different context. Aiming for a robust and standardised measurement model that can be used across different contexts, this research re-validates and extends the IS-Impact model and its instrument to public sector organisations in Malaysia. The overarching research question (managerial question) of this research is "How can public sector organisations in Malaysia measure the impact of information systems systematically and effectively?" With two main objectives, the managerial question is broken down into two specific research questions. The first research question addresses the applicability (relevance) of the dimensions and measures of the IS-Impact model in the Malaysian context. Moreover, this research question addresses the completeness of the model in the new context. Initially, this research assumes that the dimensions and measures of the IS-Impact model are sufficient for the new context. However, some IS researchers suggest that the selection of measures needs to be done purposely for different contextual settings (DeLone & McLean, 1992; Rai, Lang & Welker, 2002). Thus, the first research question is as follows: "Is the IS-Impact model complete for measuring the impact of IS in Malaysian public sector organisations?" [RQ1]. The IS-Impact model is a multidimensional model that consists of four dimensions or constructs. Each dimension is represented by formative measures or indicators. Formative measures are known as composite variables because these measures make up, or form, the construct, in this case a dimension of the IS-Impact model. These formative measures define different aspects of the dimension; thus, a measurement model of this kind needs to be tested not just on the structural relationships between the constructs but also on the validity of each measure.
In a previous study, the IS-Impact model was validated using formative validation techniques, as proposed in the literature (i.e., Diamantopoulos and Winklhofer, 2001, Diamantopoulos and Siguaw, 2006, Petter, Straub and Rai, 2007). However, there is potential for improving the validation testing of the model by adding more criterion or dependent variables. This includes identifying a consequence of the IS-Impact construct for the purpose of validation. Moreover, a different approach is employed in this research, whereby the validity of the model is tested using the Partial Least Squares (PLS) method, a component-based structural equation modelling (SEM) technique. Thus, the second research question addresses the construct validation of the IS-Impact model; “Is the IS-Impact model valid as a multidimensional formative construct?” [RQ2]. This study employs two rounds of surveys, each having a different and specific aim. The first is qualitative and exploratory, aiming to investigate the applicability and sufficiency of the IS-Impact dimensions and measures in the new context. This survey was conducted in a state government in Malaysia. A total of 77 valid responses were received, yielding 278 impact statements. The results from the qualitative analysis demonstrate the applicability of most of the IS-Impact measures. The analysis also shows a significant new measure having emerged from the context. This new measure was added as one of the System Quality measures. The second survey is a quantitative survey that aims to operationalise the measures identified from the qualitative analysis and rigorously validate the model. This survey was conducted in four state governments (including the state government that was involved in the first survey). A total of 254 valid responses were used in the data analysis. Data was analysed using structural equation modelling techniques, following the guidelines for formative construct validation, to test the validity and reliability of the constructs in the model. This study is the first research that extends the complete IS-Impact model in a new context that is different in terms of nationality, language and the type of information system (IS). The main contribution of this research is to present a comprehensive, up-to-date IS-Impact model, which has been validated in the new context. The study has accomplished its purpose of testing the generalisability of the IS-Impact model and continuing the IS evaluation research by extending it in the Malaysian context. A further contribution is a validated Malaysian language IS-Impact measurement instrument. It is hoped that the validated Malaysian IS-Impact instrument will encourage related IS research in Malaysia, and that the demonstrated model validity and generalisability will encourage a cumulative tradition of research previously not possible. 
The study entailed several methodological improvements on prior work, including: (1) new criterion measures for the overall IS-Impact construct, employed in 'identification through measurement relations'; (2) a stronger, multi-item 'Satisfaction' construct, employed in 'identification through structural relations'; (3) an alternative version of the main survey instrument in which items are randomized (rather than blocked), for comparison with the main survey data in attention to possible common method variance (no significant differences between the two survey instruments were observed); (4) a demonstration of a validation process for formative indexes of a multidimensional, second-order construct (existing examples mostly involve unidimensional constructs); (5) a test for the presence of suppressor effects that influence the significance of some measures and dimensions in the model; and (6) a demonstration of the effect of an imbalanced number of measures within a construct on the contribution power of each dimension in a multidimensional model.
Abstract:
Virtual environments can provide, through digital games and online social interfaces, extremely exciting forms of interactive entertainment. Because of their capability in displaying and manipulating information in natural and intuitive ways, such environments have found extensive applications in decision support, education and training in the health and science domains, amongst others. Currently, the burden of validating both the interactive functionality and the visual consistency of virtual environment content is carried entirely by developers and play-testers. While considerable research has been conducted into assisting the design of virtual world content and mechanics, to date only limited contributions have been made regarding the automatic testing of the underpinning graphics software and hardware. The aim of this thesis is to determine whether the correctness of the images generated by a virtual environment can be quantitatively defined, and automatically measured, in order to facilitate the validation of the content. In an attempt to provide an environment-independent definition of visual consistency, a number of classification approaches were developed. First, a novel model-based object description was proposed in order to enable reasoning about the color and geometry changes of virtual entities during a play-session. From such an analysis, two view-based connectionist approaches were developed to map from geometry and color spaces to a single, environment-independent, geometric transformation space; this mapping was used to predict the correct visualization of the scene. Finally, an appearance-based aliasing detector was developed to show how incorrectness, too, can be quantified for debugging purposes. Since computer games rely heavily on the use of highly complex and interactive virtual worlds, they provide an excellent test bed against which to develop, calibrate and validate our techniques. Experiments were conducted on a game engine and other virtual world prototypes to determine the applicability and effectiveness of our algorithms. The results show that quantifying visual correctness in virtual scenes is a feasible enterprise, and that effective automatic bug detection can be performed through the techniques we have developed. We expect these techniques to find application in large 3D games and virtual world studios that require a scalable solution to testing their virtual world software and digital content.
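The thesis develops its own model-based and connectionist classifiers; as a far simpler illustration of what quantifying visual correctness can mean in practice (not the approach taken in the thesis), a rendered frame can be scored against a stored reference image with a per-pixel error measure. File names and the tolerance below are assumptions.

```python
import numpy as np
from PIL import Image

# Illustrative visual-regression check, far simpler than the connectionist
# classifiers described above: quantify how much a rendered frame deviates
# from a known-good reference image.
def visual_error(rendered_path, reference_path):
    rendered = np.asarray(Image.open(rendered_path).convert("RGB"), dtype=float)
    reference = np.asarray(Image.open(reference_path).convert("RGB"), dtype=float)
    if rendered.shape != reference.shape:
        raise ValueError("frame and reference have different dimensions")
    # Mean absolute per-channel error, normalised to [0, 1].
    return np.mean(np.abs(rendered - reference)) / 255.0

def frame_is_consistent(rendered_path, reference_path, tolerance=0.02):
    return visual_error(rendered_path, reference_path) <= tolerance

# Hypothetical usage inside an automated play-test:
# assert frame_is_consistent("frame_0412.png", "golden/frame_0412.png")
```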
Abstract:
This paper examines the integration of computing technologies into music education research in a way informed by constructivism. In particular, this paper focuses on an approach established by Jeanne Bamberger, which the author also employs, that integrates software design, pedagogical exploration, and the building of music education theory. In this tradition, researchers design software and associated activities to facilitate the interactive manipulation of musical structures and ideas. In short, this approach focuses on designing experiences and tools that support musical thinking and doing. In comparing the work of Jeanne Bamberger with that of the author, this paper highlights and discusses issues of significance and identifies lessons for future research.
Abstract:
Projects funded by the Australian National Data Service (ANDS). The specific projects that were funded included: a) the Greenhouse Gas Emissions Project (N2O) with Prof. Peter Grace from QUT's Institute of Sustainable Resources; b) the Q150 Project for the management of multimedia data collected at Festival events with Prof. Phil Graham from QUT's Institute of Creative Industries; and c) bio-diversity environmental sensing with Prof. Paul Roe from the QUT Microsoft eResearch Centre. For the purposes of these projects, the Eclipse Rich Client Platform (Eclipse RCP) was chosen as an appropriate software development framework within which to develop the respective software. This poster will present a brief overview of the requirements of the projects and of the project team's experiences in using Eclipse RCP, report on the advantages and disadvantages of using Eclipse, and give the team's perspective on Eclipse as an integrated tool for supporting future data management requirements.
Abstract:
A service-oriented system is composed of independent software units, namely services, that interact with one another exclusively through message exchanges. The proper functioning of such a system depends on whether or not each individual service behaves as the other services expect it to behave. Since services may be developed and operated independently, it is unrealistic to assume that this is always the case. This article addresses the problem of checking and quantifying how much the actual behavior of a service, as recorded in message logs, conforms to the expected behavior as specified in a process model. We consider the case where the expected behavior is defined using the BPEL industry standard (Business Process Execution Language for Web Services). BPEL process definitions are translated into Petri nets, and Petri net-based conformance checking techniques are applied to derive two complementary indicators of conformance: fitness and appropriateness. The approach has been implemented in a toolset for business process analysis and mining, namely ProM, and has been tested in an environment comprising multiple Oracle BPEL servers.
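The fitness indicator mentioned above is commonly computed by token replay; the toy sketch below illustrates that idea on a hand-written three-transition net (a hypothetical net and trace, far simpler than the BPEL-to-Petri-net translation and ProM implementation described in the article):

```python
# Toy token-replay fitness sketch: replay a logged trace against a small
# Petri net and penalise tokens that had to be created artificially (missing)
# or were left behind (remaining). This only illustrates the fitness idea;
# ProM's conformance checkers are considerably more sophisticated.

# Each transition consumes from its input places and produces to its output places.
net = {
    "receive_order": ({"start"}, {"p1"}),
    "check_stock":   ({"p1"}, {"p2"}),
    "ship_goods":    ({"p2"}, {"end"}),
}

def replay_fitness(trace, net, initial=("start",), final=("end",)):
    marking = {p: 1 for p in initial}
    produced = len(initial)                  # environment produces the start token(s)
    consumed = missing = 0
    for event in trace:
        inputs, outputs = net[event]
        for p in inputs:
            consumed += 1
            if marking.get(p, 0) > 0:
                marking[p] -= 1
            else:
                missing += 1                 # token had to be created artificially
        for p in outputs:
            marking[p] = marking.get(p, 0) + 1
            produced += 1
    for p in final:                          # environment consumes the end token(s)
        consumed += 1
        if marking.get(p, 0) > 0:
            marking[p] -= 1
        else:
            missing += 1
    remaining = sum(marking.values())        # tokens left over after the trace
    return 0.5 * (1 - missing / consumed) + 0.5 * (1 - remaining / produced)

print(replay_fitness(["receive_order", "check_stock", "ship_goods"], net))  # 1.0
print(replay_fitness(["receive_order", "ship_goods"], net))                 # < 1.0
```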
Abstract:
Finite element analyses of the human body in seated postures require digital models capable of providing accurate and precise prediction of the tissue-level response of the body in the seated posture. To achieve such models, the human anatomy must be represented with high fidelity. This information can readily be obtained using medical imaging techniques such as Magnetic Resonance Imaging (MRI) or Computed Tomography (CT). Current practices for constructing digital human models from magnetic resonance (MR) images acquired in a lying down (supine) posture have reduced the error in the geometric representation of human anatomy relative to reconstructions based on data from cadaveric studies. Nonetheless, the significant differences between seated and supine postures in segment orientation, soft-tissue deformation and soft-tissue strain create a need for data obtained in postures more similar to the application posture. In this study, we present a novel method for creating digital human models based on seated MR data. An adult male volunteer was scanned in a simulated driving posture using a FONAR 0.6T upright MRI scanner with a T1 scanning protocol. To compensate for unavoidable image distortion near the edges of the study, images of the same anatomical structures were obtained in transverse and sagittal planes. Combinations of transverse and sagittal images were used to reconstruct the major anatomical features from the buttocks through the knees, including bone, muscle and fat tissue perimeters, using Solidworks® software. For each MR image, B-splines were created as contours for the anatomical structures of interest, and LOFT commands were used to interpolate between the generated B-splines. The reconstruction of the pelvis from the MR data was enhanced by the use of a template model generated in previous work from CT images. A non-rigid registration algorithm was used to fit the pelvis template to the MR data. Additionally, MR image processing was applied to both the left and the right sides of the model because of the intended asymmetric posture of the volunteer during the MR measurements. The presented subject-specific, three-dimensional model of the buttocks and thighs will add value to optimisation cycles in automotive seat development when used in simulating human interaction with automotive seats.
Abstract:
Virtual prototyping is emerging as a new technology to replace existing physical prototypes for product evaluation, which are costly and time consuming to manufacture. Virtualization technology allows engineers and ergonomists to perform virtual builds and different ergonomic analyses on a product. Digital Human Modelling (DHM) software packages such as Siemens Jack often integrate with CAD systems to provide a virtual environment that allows investigation of operator and product compatibility. Although the integration between DHM and CAD systems allows for the ergonomic analysis of anthropometric design, human musculoskeletal, multi-body modelling software packages such as the AnyBody Modelling System (AMS) are required to support physiologic design. They provide muscular force analysis, estimate human musculoskeletal strain and help address human comfort assessment. However, the independent characteristics of the modelling systems Jack and AMS constrain engineers and ergonomists in conducting a complete ergonomic analysis. AMS is a stand-alone programming system without the capability to integrate into CAD environments. Jack provides CAD-integrated human-in-the-loop capability, but without considering musculoskeletal activity. Consequently, engineers and ergonomists need to perform many redundant tasks during product and process design. In addition, the existing biomechanical model in AMS uses a simplified estimation of body proportions, based on a scaling approach derived from segment mass ratios. This is insufficient to represent user populations in an anthropometrically correct way in AMS. Furthermore, sub-models are derived from different sources of morphologic data and are therefore anthropometrically inconsistent. Therefore, an interface between the biomechanical AMS and the virtual human model Jack was developed to integrate a musculoskeletal simulation with Jack posture modelling. This interface provides direct data exchange between the two man-models, based on a consistent data structure and a common body model. The study assesses the kinematic and biomechanical model characteristics of Jack and AMS, and defines an appropriate biomechanical model. The information content for interfacing the two systems is defined and a protocol is identified. The interface program is developed and implemented in Tcl and Jack-script (Python), and interacts with the AMS console application to operate AMS procedures.
Abstract:
Since the availability of 3D full-body scanners and the associated software systems for operations with large point clouds, 3D anthropometry has been marketed as a breakthrough and milestone in ergonomic design. The assumptions made by the representatives of the 3D paradigm need to be critically reviewed, though. 3D anthropometry has advantages as well as shortfalls, which need to be carefully considered. While it is apparent that the measurement of a full-body point cloud allows for easier storage of raw data and improves quality control, the difficulties in calculating standardized measurements from the point cloud are widely underestimated. Early studies that used 3D point clouds to derive anthropometric dimensions showed unacceptable deviations from the standardized results measured manually. While 3D human point clouds provide a valuable tool for replicating specific individuals for further virtual studies, or for personalizing garments, their use in ergonomic design must be critically assessed. Ergonomic, volumetric problems are defined by their two-dimensional boundaries or one-dimensional sections; a 1D/2D approach is therefore sufficient to solve an ergonomic design problem. As a consequence, all modern 3D human manikins are defined by the underlying anthropometric girths (2D) and lengths/widths (1D), which can be measured efficiently using manual techniques. Traditionally, ergonomists have taken a statistical approach, designing for generalized percentiles of the population rather than for a single user. The underlying method is based on the distribution functions of meaningful one- and two-dimensional anthropometric variables. Compared to these variables, the distribution of human volume has no ergonomic relevance. On the other hand, if volume is to be seen as a two-dimensional integral or distribution function of length and girth, the calculation of combined percentiles, a common ergonomic requirement, is undefined. Consequently, we suggest critically reviewing the cost and use of 3D anthropometry. We also recommend making proper use of the widely available one- and two-dimensional anthropometric data in ergonomic design.