805 results for multinomial logit model


Relevance: 20.00%

Abstract:

In gait analysis, both shoe-mounted and skin-mounted markers have been used to quantify the movement of the foot inside the shoe. However, these marker models have not been shown to be reliable or accurate in shod conditions. The purpose of this study was to develop an accurate and reliable marker set to describe foot-shoe complex kinematics during the stance phase.

Relevance: 20.00%

Abstract:

Finite element analyses of the human body in seated postures require digital models capable of providing accurate and precise prediction of the tissue-level response of the body in the seated posture. To achieve such models, the human anatomy must be represented with high fidelity. This information can readily be defined using medical imaging techniques such as Magnetic Resonance Imaging (MRI) or Computed Tomography (CT). Current practices for constructing digital human models based on magnetic resonance (MR) images in a lying-down (supine) posture have reduced the error in the geometric representation of human anatomy relative to reconstructions based on data from cadaveric studies. Nonetheless, the significant differences between seated and supine postures in segment orientation, soft-tissue deformation and soft-tissue strain create a need for data obtained in postures more similar to the application posture. In this study, we present a novel method for creating digital human models based on seated MR data. An adult male volunteer was scanned in a simulated driving posture using a FONAR 0.6T upright MRI scanner with a T1 scanning protocol. To compensate for unavoidable image distortion near the edges of the study, images of the same anatomical structures were obtained in transverse and sagittal planes. Combinations of transverse and sagittal images were used to reconstruct the major anatomical features from the buttocks through the knees, including bone, muscle and fat tissue perimeters, using Solidworks® software. For each MR image, B-splines were created as contours for the anatomical structures of interest, and LOFT commands were used to interpolate between the generated B-splines. The reconstruction of the pelvis from MR data was enhanced by the use of a template model generated in previous work from CT images. A non-rigid registration algorithm was used to fit the pelvis template to the MR data.
Additionally, MR image processing was conducted on both the left and right sides of the model due to the intended asymmetric posture of the volunteer during the MR measurements. The presented subject-specific, three-dimensional model of the buttocks and thighs will add value to optimisation cycles in automotive seat development when used to simulate human interaction with automotive seats.
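The LOFT step described above interpolates a surface between successive contour splines. As a minimal sketch of the idea, the hypothetical function below linearly interpolates between two matched contours from adjacent slices; a real loft would blend B-splines rather than straight polylines, so this is an illustrative simplification, not the Solidworks operation itself.

```python
def loft(contour_a, contour_b, steps):
    """Linearly interpolate between two matched anatomical contours,
    a simplified stand-in for lofting between B-spline contours."""
    layers = []
    for s in range(steps + 1):
        t = s / steps
        # Blend corresponding points of the two contours at parameter t
        layer = [((1 - t) * xa + t * xb, (1 - t) * ya + t * yb)
                 for (xa, ya), (xb, yb) in zip(contour_a, contour_b)]
        layers.append(layer)
    return layers

# Toy contours from two adjacent transverse slices
slice1 = [(0.0, 0.0), (10.0, 0.0), (10.0, 10.0), (0.0, 10.0)]
slice2 = [(1.0, 1.0), (11.0, 1.0), (11.0, 11.0), (1.0, 11.0)]
surface = loft(slice1, slice2, steps=4)
```

The first and last layers reproduce the input contours exactly; intermediate layers fill the gap between slices.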

Relevance: 20.00%

Abstract:

When compared with similar joint arthroplasties, the prognosis of Total Ankle Replacement (TAR) is not satisfactory, although it shows promising results post-surgery. To date, most models do not provide the full anatomical functionality and biomechanical range of motion of the healthy ankle joint. This has sparked additional research and evaluation of clinical outcomes in order to enhance ankle prosthesis design. However, the limited biomechanical data that exist in the literature are based upon two-dimensional, discrete and outdated techniques and may be inaccurate. Since accurate force estimations are crucial to prosthesis design, a paper based on a new biomechanical modeling approach, providing three-dimensional forces acting on the ankle joint and the surrounding tissues, was published recently, but the identified forces were suspected of being under-estimated. The present paper reports an attempt to improve the accuracy of the analysis by means of novel methods for kinematic processing of gait data, provided in release 4.1 of the AnyBody Modeling System (AnyBody Technology, Aalborg, Denmark). Results from the new method are shown and remaining issues are discussed.

Relevance: 20.00%

Abstract:

In information retrieval (IR) research, more and more focus has been placed on optimizing a query language model by detecting and estimating the dependencies between the query and the observed terms occurring in the selected relevance feedback documents. In this paper, we propose a novel Aspect Language Modeling framework featuring term association acquisition, document segmentation, query decomposition, and an Aspect Model (AM) for parameter optimization. Through the proposed framework, we advance the theory and practice of applying high-order and context-sensitive term relationships to IR. We first decompose a query into subsets of query terms. Then we segment the relevance feedback documents into chunks using multiple sliding windows. Finally we discover the higher-order term associations, that is, the terms in these chunks with a high degree of association to the subsets of the query. In this process, we adopt an approach combining the AM with Association Rule (AR) mining. In our approach, the AM not only considers the subsets of a query as “hidden” states and estimates their prior distributions, but also evaluates the dependencies between the subsets of a query and the observed terms extracted from the chunks of feedback documents. The AR mining provides a reasonable initial estimation of the high-order term associations by discovering the associated rules from the document chunks. Experimental results on various TREC collections verify the effectiveness of our approach, which significantly outperforms a baseline language model and two state-of-the-art query language models, namely the Relevance Model and the Information Flow model.
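The segmentation and association steps above can be sketched in miniature: slide a window over a feedback document to form chunks, then count how often candidate terms co-occur with query terms inside a chunk. The function names, the toy document, and the co-occurrence count standing in for AR support are all illustrative assumptions, not the paper's actual implementation.

```python
from collections import defaultdict

def chunk(tokens, window, step):
    """Segment a token list into overlapping chunks with a sliding window."""
    return [tokens[i:i + window]
            for i in range(0, max(len(tokens) - window + 1, 1), step)]

def associations(chunks, query_terms, min_support=2):
    """Count how often each non-query term co-occurs with a query term
    inside the same chunk (a crude stand-in for AR support counting)."""
    support = defaultdict(int)
    for c in chunks:
        terms = set(c)
        for q in query_terms & terms:
            for t in terms - query_terms:
                support[(q, t)] += 1
    return {pair: s for pair, s in support.items() if s >= min_support}

doc = "language model estimates term dependencies model estimates query term".split()
chunks = chunk(doc, window=4, step=2)
rules = associations(chunks, {"model", "query"})
```

Pairs below the support threshold are pruned, mirroring how AR mining keeps only sufficiently frequent rules as the initial association estimates.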

Relevance: 20.00%

Abstract:

It is a big challenge to acquire correct user profiles for personalized text classification, since users may be unsure when describing their interests. Traditional approaches to user profiling adopt machine learning (ML) to automatically discover classification knowledge from explicit user feedback describing personal interests. However, the accuracy of ML-based methods cannot be significantly improved in many cases due to the term independence assumption and the uncertainties associated with them. This paper presents a novel relevance feedback approach for personalized text classification. It applies data mining to discover knowledge from relevant and non-relevant text, and constrains this specific knowledge with reasoning rules to eliminate conflicting information. We also developed a Dempster-Shafer (DS) approach as the means to utilise the specific knowledge to build high-quality data models for classification. Experimental results on Reuters Corpus Volume 1 and TREC topics show that the proposed technique achieves encouraging performance compared with state-of-the-art relevance feedback models.
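The core of a Dempster-Shafer approach is combining mass functions from independent evidence sources. As a minimal sketch, the snippet below implements Dempster's rule of combination for two mass functions over a tiny frame of discernment; the relevant/non-relevant example masses are invented for illustration, not taken from the paper.

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule of combination for two mass functions.
    Each mass function maps a frozenset hypothesis to a belief mass."""
    combined = {}
    conflict = 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y  # mass assigned to contradictory intersections
    k = 1.0 - conflict
    # Normalise by the non-conflicting mass so the result sums to 1
    return {h: v / k for h, v in combined.items()}

# Two evidence sources about whether a document is relevant (R) or not (N)
m1 = {frozenset({"R"}): 0.6, frozenset({"R", "N"}): 0.4}
m2 = {frozenset({"R"}): 0.5, frozenset({"N"}): 0.2, frozenset({"R", "N"}): 0.3}
fused = combine(m1, m2)
```

Combining the two sources concentrates mass on "relevant" while retaining a small residual on the full frame, which is how DS fusion expresses remaining uncertainty.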

Relevance: 20.00%

Abstract:

Nowadays, Opinion Mining is becoming increasingly important, especially for analysing and forecasting customer behavior for business purposes. The right decision in producing new products or services based on data about customers' characteristics means profit for the organization. This paper proposes a new architecture for Opinion Mining, which uses a multidimensional model to integrate customers' characteristics and their comments about products (or services). The key step to achieve this objective is to transfer comments (opinions) into a fact table that includes several dimensions, such as customers, products, time and locations. This research presents a comprehensive way to calculate customers' orientation over all possible product attributes. A case study is also presented in this paper to show the advantages of using OLAP and data cubes to analyze customers' opinions.
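The fact-table-plus-dimensions design above supports standard OLAP roll-ups. A minimal sketch, with an invented toy fact table (the dimension names echo the abstract, the scores are made up): each fact row carries an opinion score plus dimension keys, and a roll-up aggregates the mean score over any chosen subset of dimensions.

```python
from collections import defaultdict

# Hypothetical fact table: each row links an opinion score to dimension keys
facts = [
    {"customer": "c1", "product": "phone",  "location": "AU", "score": 0.8},
    {"customer": "c2", "product": "phone",  "location": "AU", "score": 0.4},
    {"customer": "c3", "product": "phone",  "location": "NZ", "score": 0.6},
    {"customer": "c1", "product": "tablet", "location": "AU", "score": 0.2},
]

def roll_up(facts, dims):
    """Aggregate the mean opinion score over the chosen dimensions
    (an OLAP roll-up along the data cube)."""
    groups = defaultdict(list)
    for row in facts:
        groups[tuple(row[d] for d in dims)].append(row["score"])
    return {key: sum(v) / len(v) for key, v in groups.items()}

by_product = roll_up(facts, ["product"])
by_product_loc = roll_up(facts, ["product", "location"])
```

Dropping a dimension from `dims` moves up the cube to a coarser aggregate, which is exactly the navigation a data-cube front end exposes.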

Relevance: 20.00%

Abstract:

Purpose. To create a binocular statistical eye model based on previously measured ocular biometric data. Methods. Thirty-nine parameters were determined for a group of 127 healthy subjects (37 male, 90 female; 96.8% Caucasian) with an average age of 39.9 ± 12.2 years and spherical equivalent refraction of −0.98 ± 1.77 D. These parameters described the biometry of both eyes and the subjects' age. Missing parameters were complemented by data from a previously published study. After confirmation of the Gaussian shape of their distributions, these parameters were used to calculate their mean and covariance matrices. These matrices were then used to define a multivariate Gaussian distribution, from which random biometric data could be generated and then randomly selected to create a realistic population of random eyes. Results. All parameters had Gaussian distributions, with the exception of the parameters that describe total refraction (i.e., three parameters per eye). After these non-Gaussian parameters were omitted from the model, the generated data were found to be statistically indistinguishable from the original data for the remaining 33 parameters (TOST [two one-sided t tests]; P < 0.01). Parameters derived from the generated data were also statistically indistinguishable from those calculated with the original data (P > 0.05). The only exception to this was the lens refractive index, for which the generated data had a significantly larger SD. Conclusions. A statistical eye model can describe the biometric variations found in a population and is a useful addition to the classic eye models.
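Generating random biometry from a mean vector and covariance matrix, as described above, amounts to sampling a multivariate Gaussian: factor the covariance (Cholesky), draw independent standard normals, and transform. The sketch below does this in pure Python with a made-up two-parameter toy model (axial length and corneal power with an invented covariance), not the paper's 33-parameter model.

```python
import random

def cholesky(cov):
    """Lower-triangular Cholesky factor L of a covariance matrix (cov = L Lᵀ)."""
    n = len(cov)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = (cov[i][i] - s) ** 0.5
            else:
                L[i][j] = (cov[i][j] - s) / L[j][j]
    return L

def sample_eye(mean, L, rng):
    """Draw one synthetic biometry vector from N(mean, L Lᵀ)."""
    z = [rng.gauss(0.0, 1.0) for _ in mean]  # independent standard normals
    return [m + sum(L[i][k] * z[k] for k in range(len(z)))
            for i, m in enumerate(mean)]

# Toy 2-parameter model: axial length (mm) and corneal power (D), correlated
mean = [23.5, 43.0]
cov = [[0.9, -0.4], [-0.4, 1.5]]
L = cholesky(cov)
rng = random.Random(1)
eyes = [sample_eye(mean, L, rng) for _ in range(5000)]
```

The sample means and covariances of `eyes` converge to the specified `mean` and `cov`, which is the property the TOST comparison in the study checks on the real data.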

Relevance: 20.00%

Abstract:

Preservation and enhancement of transportation infrastructure is critical to continuous economic development in Australia. Of particular importance is road infrastructure, due to its high setup costs and its social and economic impact on the national economy. Continuous availability of road assets, however, is contingent upon their effective design, condition monitoring, maintenance, renovation, and upgrading. Achieving this requires data exchange, integration, and interoperability across municipal boundaries. Yet there are no agreed reference frameworks that consistently describe road infrastructure assets. As a consequence, the specifications and technical solutions chosen to manage road assets do not provide adequate detail and quality of information to support asset lifecycle management processes, and decisions are based on perception, not reality. This paper presents a road asset information model, which serves as a reference framework to link other kinds of information with asset information, integrate different data suppliers, and provide a foundation for a service-driven integrated information framework for community infrastructure and asset management.

Relevance: 20.00%

Abstract:

This paper summarises some of the recent studies on various types of learning approaches that have utilised some form of Web 2.0 services in curriculum design to enhance learning. A generic implementation model of this integration is then presented to illustrate the overall learning implementation process. Recently, the integration of Web 2.0 technologies into the learning curriculum has begun to gain wide acceptance among teaching instructors across various higher learning institutions. This is evidenced by numerous studies reporting the integration of a range of Web 2.0 technologies into learning design to improve learning delivery. Moreover, recent studies have also shown that current students embrace Web 2.0 technologies more readily than existing learning technologies. Despite various attempts made by teachers at this integration, researchers have noted the lack of an integration standard to guide curriculum design. The absence of such a standard restricts the adoption of Web 2.0 in learning and adds complexity to providing meaningful learning. Therefore, this paper attempts to draw a conceptual integration model reflecting how learning activities facilitated by Web 2.0 are currently implemented. The design of this model is based on experiences shared by many scholars as well as feedback gathered from two separate surveys conducted with teachers and a group of 180 students. Furthermore, this paper also identifies some key components that are generally involved in the design of Web 2.0 teaching and learning and need to be addressed accordingly. Overall, the content of this paper is organised as follows. The first part introduces the importance of Web 2.0 implementation in teaching and learning from the perspective of higher education institutions, along with the challenges surrounding this area.
The second part summarizes related work in this field and brings forward the concept of designing learning with the incorporation of Web 2.0 technology. The next part presents the results of the analysis derived from the two surveys of students and teachers on using Web 2.0 during learning activities. The paper concludes by presenting a model that reflects several key entities that may be involved during learning design.

Relevance: 20.00%

Abstract:

As organizations reach higher levels of business process management maturity, they often find themselves maintaining very large process model repositories, representing valuable knowledge about their operations. A common practice within these repositories is to create new process models, or extend existing ones, by copying and merging fragments from other models. We contend that if these duplicate fragments, a.k.a. exact clones, can be identified and factored out as shared subprocesses, the repository's maintainability can be greatly improved. With this purpose in mind, we propose an indexing structure to support fast detection of clones in process model repositories. Moreover, we show how this index can be used to efficiently query a process model repository for fragments. This index, called RPSDAG, is based on a novel combination of a method for process model decomposition (namely the Refined Process Structure Tree) with established graph canonization and string matching techniques. We evaluated the RPSDAG with large process model repositories from industrial practice. The experiments show that a significant number of non-trivial clones can be efficiently found in such repositories, and that fragment queries can be handled efficiently.
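The canonization idea behind such an index can be sketched compactly: serialise every fragment of a decomposed process model to a canonical string (sorting children so equivalent fragments with reordered branches hash identically) and index fragments by that string, so duplicates collide. The toy process tree and function names below are invented for illustration; the actual RPSDAG uses the Refined Process Structure Tree and graph canonization, not this simplified tree form.

```python
from collections import defaultdict

def canonical(node, index):
    """Serialise a fragment to a canonical string (children sorted) and
    record every fragment occurrence in the index."""
    label, children = node
    key = label + "(" + ",".join(sorted(canonical(c, index) for c in children)) + ")"
    index[key].append(node)
    return key

def clones(model, min_occurrences=2):
    """Return canonical keys of non-trivial fragments occurring repeatedly."""
    index = defaultdict(list)
    canonical(model, index)
    return {k: len(v) for k, v in index.items()
            if len(v) >= min_occurrences and v[0][1]}  # skip single tasks

# Toy process tree: (label, children); the AND block appears twice,
# once with its branches reordered
check = ("AND", [("check credit", []), ("check stock", [])])
check2 = ("AND", [("check stock", []), ("check credit", [])])
model = ("SEQ", [("receive order", []), check, check2, ("ship", [])])
duplicates = clones(model)
```

Because branches are sorted before hashing, the reordered copy still maps to the same canonical key, which is the role graph canonization plays in the real index.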

Relevance: 20.00%

Abstract:

In order to make good decisions about the design of information systems, an essential skill is to understand process models of the business domain the system is intended to support. Yet, little knowledge has been established to date about the factors that affect how model users comprehend the content of process models. In this study, we use theories of semiotics and cognitive load to theorize how model and personal factors influence the way model viewers comprehend the syntactical information of process models. We then report on a four-part series of experiments in which we examined these factors. Our results show that additional semantical information impedes syntax comprehension, and that theoretical knowledge eases it. Modeling experience further contributes positively to comprehension efficiency, measured as the ratio of correct answers to the time taken to provide answers. We discuss implications for practice and research.
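The efficiency measure named above is a simple ratio, made concrete in the snippet below; the two viewer profiles and their numbers are invented purely to illustrate the metric, not data from the experiments.

```python
def comprehension_efficiency(correct, seconds):
    """Comprehension efficiency as defined in the study:
    correct answers divided by the time taken to provide them."""
    return correct / seconds

# Hypothetical viewers: an experienced modeler answers more questions
# correctly in less time, so scores a higher efficiency
novice = comprehension_efficiency(correct=6, seconds=300)       # 0.02 answers/s
experienced = comprehension_efficiency(correct=8, seconds=200)  # 0.04 answers/s
```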

Relevance: 20.00%

Abstract:

This chapter proposes a conceptual model for optimal development of needed capabilities for the contemporary knowledge economy. We commence by outlining key capability requirements of the 21st century knowledge economy, distinguishing these from those suited to the earlier stages of the knowledge economy. We then discuss the extent to which higher education currently caters to these requirements and then put forward a new model for effective knowledge economy capability learning. The core of this model is the development of an adaptive and adaptable career identity, which is created through a reflective process of career self-management, drawing upon data from the self and the world of work. In turn, career identity drives the individual’s process of skill and knowledge acquisition, including deep disciplinary knowledge. The professional capability learning thus acquired includes disciplinary skill and knowledge sets, generic skills, and also skills for the knowledge economy, including disciplinary agility, social network capability, and enterprise skills. In the final part of this chapter, we envision higher education systems that embrace the model, and suggest steps that could be taken toward making the development of knowledge economy capabilities an integral part of the university experience.

Relevance: 20.00%

Abstract:

A key issue in the economic development and performance of organizations is the existence of standards. Their definition and control are sources of power, and it is important to understand their concept, as it gives standards their direction and their legitimacy, and to explore how they are represented and applied. The difficulties posed by classical micro-economics in establishing a theory of standardization that is compatible with its fundamental axioms are acknowledged. We propose to reconsider the problem from the opposite perspective, questioning its theoretical base and reformulating assumptions about the independent and autonomous decisions taken by actors. The Theory of Conventions offers us a theoretical framework and tools for understanding the systemic dimension and dynamic structure of standards, which are seen as a special case of conventions. This work aims to provide a sound basis for, and promote greater awareness in, the development of global project management standards. It also aims to emphasize that social construction is not a matter of copyright but a matter of open minds, a collective cognitive process, and freedom for the common wealth.