959 results for Formal model
Abstract:
Standardised testing does not recognise the creativity and skills of marginalised youth. Young people who come to the Edmund Rice Education Australia Flexible Learning Centre Network (EREAFLCN) in Australia arrive with forms of cultural capital that are not valued in the field of education and employment. This is not to say that young people's different modes of cultural capital have no value, but rather that such funds of knowledge, repertoires and cultural capital are not valued by the powerful agents in educational and employment fields. The forms of cultural capital which are valued by these institutions are measurable in certain structured formats which are largely inaccessible for what is seen in Australia to be a growing segment of the community. How then can the inherent value of traditionally unorthodox - yet often intricate, adroit, ingenious, and astute - versions of cultural capital evident in the habitus of many young people be made to count, be recognised, be valuated? Can a process of educational assessment be used as a marketplace, a field of capital exchange? This paper reports on the development of an innovative approach to assessment in an alternative education institution designed for the re-engagement of 'at risk' youth who have left formal schooling. In order to capture the broad range of students' cultural and social capital, an electronic portfolio system (EPS) is under trial. The model draws on categories from sociological models of capital and reconceptualises the eportfolio as a sociocultural zone of learning and development. Initial results from the trial show a general tendency towards engagement with the EPS and potential for the attainment of socially valued cultural capital in the form of school credentials.
Abstract:
The improvement and optimization of business processes is one of the top priorities in an organization. Although process analysis methods are mature today, business analysts and stakeholders are still hampered by communication issues. That is, analysts cannot effectively obtain accurate business requirements from stakeholders, and stakeholders are often confused about analytic results offered by analysts. We argue that using a virtual world to model a business process can benefit communication activities. We believe that virtual worlds can be used as an efficient model-view approach, increasing the cognition of business requirements and analytic results, as well as the possibility of business plan validation. A healthcare case study is provided as an approach instance, illustrating how intuitive such an approach can be. As an exploration paper, we believe that this promising research can encourage people to investigate more research topics in the interdisciplinary area of information systems, visualization and multi-user virtual worlds.
Abstract:
Kinematic models are commonly used to quantify foot and ankle kinematics, yet no marker sets or models have been proven reliable or accurate when shoes are worn. Further, the minimal detectable difference of a developed model is often not reported. We present a kinematic model that is reliable, accurate and sensitive for describing the kinematics of the foot–shoe complex and lower leg during walking gait. In order to achieve this, a new marker set was established, consisting of 25 markers applied on the shoe and skin surface, which informed a four-segment kinematic model of the foot–shoe complex and lower leg. Three independent experiments were conducted to determine the reliability, accuracy and minimal detectable difference of the marker set and model. Inter-rater reliability of marker placement on the shoe was proven to be good to excellent (ICC = 0.75–0.98), indicating that markers could be applied reliably between raters. Intra-rater reliability was better for the experienced rater (ICC = 0.68–0.99) than the inexperienced rater (ICC = 0.38–0.97). The accuracy of marker placement along each axis was <6.7 mm for all markers studied. Minimal detectable difference (MDD90) thresholds were defined for each joint: tibiocalcaneal joint – MDD90 = 2.17–9.36°, tarsometatarsal joint – MDD90 = 1.03–9.29° and the metatarsophalangeal joint – MDD90 = 1.75–9.12°. The proposed thresholds are specific to the description of shod motion and can be used in future research aimed at comparing different footwear conditions.
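The MDD90 thresholds above are reported without their derivation; a minimal sketch of how such thresholds are commonly computed from reliability statistics (assuming the standard SEM-based definition, which may differ from the authors' exact procedure; the numbers below are illustrative only) is:

```python
import math

def mdd90(sd_deg: float, icc: float) -> float:
    """Minimal detectable difference at 90% confidence (degrees).

    sd_deg : between-session standard deviation of the joint angle
    icc    : intraclass correlation coefficient of the measurement
    """
    sem = sd_deg * math.sqrt(1.0 - icc)     # standard error of measurement
    return 1.645 * math.sqrt(2.0) * sem     # z(90%) * sqrt(2) for a test-retest difference

# Illustrative values only (not taken from the study):
print(round(mdd90(sd_deg=4.0, icc=0.90), 2))   # ~2.94 degrees
```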
Abstract:
This chapter reviews the incidence of coverage of Papua New Guinea affairs in the Australian press and in Australian broadcast media. It presents the findings of a formal monitoring of selected newspaper coverage and news broadcasts of the leading Australian television and radio outlets. The study also includes news stories published on ABC Online. The findings for print media suggest that coverage of PNG is inadequate and may be contributing towards negative images of that country in Australia. The broadcast monitoring found also that beyond the ABC's regular and balanced coverage, there was very little mention of PNG on Australian airwaves. The deployment of resources by the ABC was seen as a potential model for increased quantity and quality of coverage, with its maintenance of a correspondent and office in the country, and use of reports from PNG across a wide range of programs. The investigation noted some early indications of a shift in media attention, following the election of a new government in Australia in 2007, which gave some priority attention to PNG including a visit by the then Prime Minister.
Abstract:
In gait analysis, both shoe mounted and skin mounted markers have been used to quantify the movement of the foot inside the shoe. However, these models have not been demonstrated as reliable or accurate in shod conditions. The purpose of this study was to develop an accurate and reliable marker set to describe foot-shoe complex kinematics during stance phase.
Abstract:
Finite element analyses of the human body in seated postures require digital models capable of providing accurate and precise prediction of the tissue-level response of the body in the seated posture. To achieve such models, the human anatomy must be represented with high fidelity. This information can readily be defined using medical imaging techniques such as Magnetic Resonance Imaging (MRI) or Computed Tomography (CT). Current practices for constructing digital human models, based on magnetic resonance (MR) images acquired in a lying down (supine) posture, have reduced the error in the geometric representation of human anatomy relative to reconstructions based on data from cadaveric studies. Nonetheless, the significant differences between seated and supine postures in segment orientation, soft-tissue deformation and soft-tissue strain create a need for data obtained in postures more similar to the application posture. In this study, we present a novel method for creating digital human models based on seated MR data. An adult male volunteer was scanned in a simulated driving posture using a FONAR 0.6T upright MRI scanner with a T1 scanning protocol. To compensate for unavoidable image distortion near the edges of the imaging volume, images of the same anatomical structures were obtained in both transverse and sagittal planes. Combinations of transverse and sagittal images were used to reconstruct the major anatomical features from the buttocks through the knees, including bone, muscle and fat tissue perimeters, using Solidworks® software. For each MR image, B-splines were created as contours for the anatomical structures of interest, and LOFT commands were used to interpolate between the generated B-splines. The reconstruction of the pelvis from MR data was enhanced by the use of a template model generated in previous work from CT images. A non-rigid registration algorithm was used to fit the pelvis template to the MR data. Additionally, MR image processing was conducted for both the left and right sides of the model due to the intended asymmetric posture of the volunteer during the MR measurements. The presented subject-specific, three-dimensional model of the buttocks and thighs will add value to optimisation cycles in automotive seat development when used in simulating human interaction with automotive seats.
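The contour fitting and lofting described above were performed in Solidworks; as a rough illustration of the contour-fitting step only (hypothetical function names, SciPy standing in for the CAD tool), a closed B-spline can be fitted to each traced slice like this:

```python
import numpy as np
from scipy.interpolate import splprep, splev

def fit_closed_bspline(contour_xy, n_samples=200, smoothing=1.0):
    """Fit a periodic (closed) B-spline to one traced tissue contour.

    contour_xy : (N, 2) array of x,y points digitised from a single MR slice.
    Returns an (n_samples, 2) array of points on the fitted spline.
    """
    x, y = contour_xy[:, 0], contour_xy[:, 1]
    tck, _ = splprep([x, y], s=smoothing, per=True)   # periodic => closed curve
    u = np.linspace(0.0, 1.0, n_samples)
    xs, ys = splev(u, tck)
    return np.column_stack([xs, ys])

def stack_contours(contours, slice_spacing_mm):
    """Place fitted slice contours at their z-heights, ready for lofting
    (the loft itself was done in Solidworks in the study)."""
    return [np.column_stack([c, np.full(len(c), i * slice_spacing_mm)])
            for i, c in enumerate(contours)]
```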
Abstract:
The Web Service Business Process Execution Language (BPEL) lacks any standard graphical notation. Various efforts have been undertaken to visualize BPEL using the Business Process Modelling Notation (BPMN). Although this is straightforward for the majority of concepts, it is tricky for the full BPEL standard, partly due to the insufficiently specified BPMN execution semantics. The upcoming BPMN 2.0 revision will provide this clear semantics. In this paper, we show how the dead path elimination (DPE) capabilities of BPEL can be expressed with this new semantics and discuss the limitations. We provide a generic formal definition of DPE and discuss resulting control flow requirements independent of specific process description languages.
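The paper's formal definition of DPE is not reproduced in this abstract; the sketch below (simplified semantics, hypothetical names, transition conditions and fault handling omitted) only illustrates the basic idea of propagating negative link status so that downstream activities are skipped rather than left blocking:

```python
from typing import Callable, Dict, List

def dead_path_elimination(
    incoming: Dict[str, List[str]],            # activity -> incoming link names
    outgoing: Dict[str, List[str]],            # activity -> outgoing link names
    join_condition: Dict[str, Callable[[Dict[str, bool]], bool]],
    topo_order: List[str],                     # activities in topological order
) -> Dict[str, bool]:
    """Toy propagation of BPEL-style dead path elimination.

    An activity whose join condition evaluates to false is skipped and its
    outgoing links are set to false, so downstream activities on the dead
    path can also be skipped instead of waiting forever.
    """
    link_status: Dict[str, bool] = {}
    executed: Dict[str, bool] = {}
    for act in topo_order:
        statuses = {l: link_status.get(l, False) for l in incoming.get(act, [])}
        runnable = join_condition.get(act, lambda s: True)(statuses)
        executed[act] = runnable
        for l in outgoing.get(act, []):
            # a skipped activity propagates 'false' on its outgoing links;
            # real BPEL would also evaluate transition conditions here
            link_status[l] = runnable
    return executed
```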
Abstract:
When compared with similar joint arthroplasties, the prognosis of Total Ankle Replacement (TAR) is not satisfactory, although it shows promising results post-surgery. To date, most models do not provide the full anatomical functionality and biomechanical range of motion of the healthy ankle joint. This has sparked additional research and evaluation of clinical outcomes in order to enhance ankle prosthesis design. However, the limited biomechanical data that exist in the literature are based upon two-dimensional, discrete and outdated techniques [1] and may be inaccurate. Since accurate force estimations are crucial to prosthesis design, a paper based on a new biomechanical modeling approach, providing three-dimensional forces acting on the ankle joint and the surrounding tissues, was published recently, but the identified forces were suspected of being under-estimated. The present paper reports an attempt to improve the accuracy of the analysis by means of novel methods for kinematic processing of gait data, provided in release 4.1 of the AnyBody Modeling System (AnyBody Technology, Aalborg, Denmark). Results from the new method are shown and remaining issues are discussed.
Abstract:
In information retrieval (IR) research, more and more focus has been placed on optimizing a query language model by detecting and estimating the dependencies between the query and the observed terms occurring in the selected relevance feedback documents. In this paper, we propose a novel Aspect Language Modeling framework featuring term association acquisition, document segmentation, query decomposition, and an Aspect Model (AM) for parameter optimization. Through the proposed framework, we advance the theory and practice of applying high-order and context-sensitive term relationships to IR. We first decompose a query into subsets of query terms. Then we segment the relevance feedback documents into chunks using multiple sliding windows. Finally, we discover the higher-order term associations, that is, the terms in these chunks with a high degree of association to the subsets of the query. In this process, we adopt an approach that combines the AM with Association Rule (AR) mining. In our approach, the AM not only considers the subsets of a query as “hidden” states and estimates their prior distributions, but also evaluates the dependencies between the subsets of a query and the observed terms extracted from the chunks of feedback documents. The AR mining provides a reasonable initial estimate of the high-order term associations by discovering association rules from the document chunks. Experimental results on various TREC collections verify the effectiveness of our approach, which significantly outperforms a baseline language model and two state-of-the-art query language models, namely the Relevance Model and the Information Flow model.
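The pipeline is described above only at a high level; the following sketch (hypothetical helper names, with simple rule-confidence scoring standing in for the full Aspect Model estimation) illustrates the query decomposition, sliding-window chunking, and association-mining steps:

```python
from collections import Counter, defaultdict
from itertools import combinations

def sliding_windows(tokens, size=20, step=10):
    """Segment a feedback document into overlapping chunks (sliding windows)."""
    return [tokens[i:i + size] for i in range(0, max(1, len(tokens) - size + 1), step)]

def query_subsets(query_terms, max_size=2):
    """Decompose the query into its non-empty term subsets (here up to pairs)."""
    subs = []
    for k in range(1, max_size + 1):
        subs.extend(combinations(sorted(query_terms), k))
    return subs

def associated_terms(docs, query_terms, min_conf=0.3):
    """Rough association-rule style estimate: for each query subset Q, score a
    term t by confidence(Q -> t) = count(Q and t) / count(Q) over all chunks."""
    subsets = query_subsets(query_terms)
    q_count = Counter()
    joint = defaultdict(Counter)
    for doc in docs:
        for chunk in sliding_windows(doc):
            chunk_set = set(chunk)
            for q in subsets:
                if set(q) <= chunk_set:
                    q_count[q] += 1
                    for t in chunk_set - set(query_terms):
                        joint[q][t] += 1
    return {q: {t: c / q_count[q] for t, c in counts.items()
                if c / q_count[q] >= min_conf}
            for q, counts in joint.items()}
```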
Abstract:
It is a big challenge to acquire correct user profiles for personalized text classification since users may be unsure about their interests when providing them. Traditional approaches to user profiling adopt machine learning (ML) to automatically discover classification knowledge from explicit user feedback describing personal interests. However, the accuracy of ML-based methods cannot be significantly improved in many cases due to the term independence assumption and the uncertainties associated with it. This paper presents a novel relevance feedback approach for personalized text classification. It applies data mining to discover knowledge from relevant and non-relevant text, and constrains the specific knowledge with reasoning rules to eliminate conflicting information. We also developed a Dempster-Shafer (DS) approach as the means to utilise the specific knowledge to build high-quality data models for classification. Experimental results on Reuters Corpus Volume 1 and TREC topics show that the proposed technique achieves encouraging performance compared with state-of-the-art relevance feedback models.
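The evidence-combination step relies on Dempster-Shafer theory; a minimal sketch of Dempster's rule of combination (with a made-up two-class frame of discernment rather than the paper's actual evidence sources) is:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions.

    m1, m2 : dict mapping frozenset (subset of the frame of discernment) -> mass.
    Returns the combined, conflict-normalised mass function.
    """
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    norm = 1.0 - conflict
    return {s: m / norm for s, m in combined.items()}

# Hypothetical example: frame {rel, nonrel}, one piece of evidence per source
R, N = frozenset({"rel"}), frozenset({"nonrel"})
theta = R | N
m_positive = {R: 0.6, theta: 0.4}   # evidence mined from relevant documents
m_negative = {N: 0.3, theta: 0.7}   # evidence mined from non-relevant documents
print(dempster_combine(m_positive, m_negative))
```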
Abstract:
Opinion Mining is becoming increasingly important, especially for analyzing and forecasting customer behavior for business purposes. The right decision in producing new products or services, based on data about customers’ characteristics, means profit for an organization or company. This paper proposes a new architecture for Opinion Mining, which uses a multidimensional model to integrate customers’ characteristics and their comments about products (or services). The key step to achieve this objective is to transfer comments (opinions) into a fact table that includes several dimensions, such as customers, products, time and locations. This research presents a comprehensive way to calculate customers’ orientation for all possible product attributes. A use case study is also presented in this paper to show the advantages of using OLAP and data cubes to analyze customers’ opinions.
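As an illustration of the multidimensional idea, a minimal sketch with made-up data and column names (a pandas pivot standing in for a real OLAP engine) might look like this:

```python
import pandas as pd

# Hypothetical fact table: each row is one mined opinion about a product attribute.
facts = pd.DataFrame([
    # customer, product,  attribute, location,   month,     sentiment (-1..+1)
    ("c01",    "phoneA", "battery", "Brisbane", "2011-03",  0.8),
    ("c02",    "phoneA", "battery", "Sydney",   "2011-03", -0.4),
    ("c02",    "phoneA", "screen",  "Sydney",   "2011-04",  0.5),
    ("c03",    "phoneB", "battery", "Brisbane", "2011-04", -0.7),
], columns=["customer", "product", "attribute", "location", "month", "sentiment"])

# A simple OLAP-style roll-up: average customer orientation per product attribute,
# sliced by location (one face of the data cube).
cube_slice = facts.pivot_table(
    index=["product", "attribute"],
    columns="location",
    values="sentiment",
    aggfunc="mean",
)
print(cube_slice)
```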
Abstract:
Purpose. To create a binocular statistical eye model based on previously measured ocular biometric data. Methods. Thirty-nine parameters were determined for a group of 127 healthy subjects (37 male, 90 female; 96.8% Caucasian) with an average age of 39.9 ± 12.2 years and spherical equivalent refraction of −0.98 ± 1.77 D. These parameters described the biometry of both eyes and the subjects' age. Missing parameters were complemented by data from a previously published study. After confirmation of the Gaussian shape of their distributions, these parameters were used to calculate their mean and covariance matrices. These matrices were then used to calculate a multivariate Gaussian distribution. From this, an amount of random biometric data could be generated, which were then randomly selected to create a realistic population of random eyes. Results. All parameters had Gaussian distributions, with the exception of the parameters that describe total refraction (i.e., three parameters per eye). After these non-Gaussian parameters were omitted from the model, the generated data were found to be statistically indistinguishable from the original data for the remaining 33 parameters (TOST [two one-sided t tests]; P < 0.01). Parameters derived from the generated data were also significantly indistinguishable from those calculated with the original data (P > 0.05). The only exception to this was the lens refractive index, for which the generated data had a significantly larger SD. Conclusions. A statistical eye model can describe the biometric variations found in a population and is a useful addition to the classic eye models.
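A minimal sketch of the generation step described above (illustrative data only; the 33 real parameters, the pairing of both eyes, and the TOST validation are omitted):

```python
import numpy as np

def fit_statistical_eye_model(biometry: np.ndarray):
    """Estimate the mean vector and covariance matrix of ocular biometry.

    biometry : (n_subjects, n_parameters) array of measured parameters
               (the study used 33 Gaussian-distributed parameters per subject).
    """
    mean = biometry.mean(axis=0)
    cov = np.cov(biometry, rowvar=False)
    return mean, cov

def generate_synthetic_eyes(mean, cov, n_eyes, seed=0):
    """Draw synthetic 'eyes' from the fitted multivariate Gaussian."""
    rng = np.random.default_rng(seed)
    return rng.multivariate_normal(mean, cov, size=n_eyes)

# Illustrative use with made-up data (3 parameters, 100 subjects):
data = np.random.default_rng(1).normal([7.8, 3.3, 23.5], [0.25, 0.3, 0.9], size=(100, 3))
mu, sigma = fit_statistical_eye_model(data)
synthetic = generate_synthetic_eyes(mu, sigma, n_eyes=1000)
```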
Abstract:
Preservation and enhancement of transportation infrastructure is critical to continuous economic development in Australia. Of particular importance are road infrastructure assets, due to their high set-up costs and their social and economic impact on the national economy. Continuous availability of road assets, however, is contingent upon their effective design, condition monitoring, maintenance, renovation and upgrading. To achieve this, data exchange, integration, and interoperability are required across municipal boundaries. However, there are no agreed reference frameworks that consistently describe road infrastructure assets. As a consequence, the specifications and technical solutions being chosen to manage road assets do not provide adequate detail and quality of information to support asset lifecycle management processes, and decisions are based on perception rather than reality. This paper presents a road asset information model, which works as a reference framework to link other kinds of information with asset information, integrate different data suppliers, and provide a foundation for a service-driven integrated information framework for community infrastructure and asset management.
Abstract:
This paper summarises some of the recent studies on various types of learning approaches that have utilised some form of Web 2.0 services in curriculum design to enhance learning. A generic implementation model of this integration is then presented to illustrate the overall learning implementation process. Recently, the integration of Web 2.0 technologies into the learning curriculum has begun to gain wide acceptance among teaching instructors across various higher learning institutions. This is evidenced by numerous studies which report the implementation of a range of Web 2.0 technologies in their learning designs to improve learning delivery. Moreover, recent studies have also shown that current students embrace Web 2.0 technologies more readily than existing learning technologies. Despite various attempts made by teachers at such integration, researchers have noted the lack of an integration standard to help in curriculum design. The absence of this standard restricts the capacity for Web 2.0 adoption in learning and adds complexity to providing meaningful learning. Therefore, this paper attempts to draw a conceptual integration model that reflects how learning activities facilitated by Web 2.0 are currently being implemented. The design of this model is based on experiences shared by many scholars as well as feedback gathered from two separate surveys conducted with teachers and a group of 180 students. Furthermore, this paper also identifies some key components that are generally involved in the design of Web 2.0 teaching and learning and that need to be addressed accordingly. Overall, the content of this paper is organised as follows. The first part introduces the importance of Web 2.0 implementation in teaching and learning from the perspective of higher education institutions and the challenges surrounding this area. The second part summarizes related work in this field and brings forward the concept of designing learning with the incorporation of Web 2.0 technology. The next part presents the results of the analysis derived from the two surveys of students and teachers on using Web 2.0 during learning activities. The paper concludes by presenting a model that reflects several key entities that may be involved during the learning design.