872 results for Systems engineering and theory
Abstract:
Chain in both its forms - common (or stud-less) and stud-link - has many engineering applications. It is widely used as a component in the moorings of offshore floating systems, where its ruggedness and corrosion resistance make it an attractive choice. Chain exhibits some interesting behaviour in that when straight and subject to an axial load it does not twist or generate any torque, but if twisted or loaded when in a twisted condition it behaves in a highly non-linear manner, with the torque dependent upon the level of twist and axial load. Clearly an understanding of the way in which chains may behave and interact with other mooring components (such as wire rope, which also exhibits coupling between axial load and generated torque) when they are in service is essential. However, the sizes of chain that are in use in offshore moorings (typical bar diameters are 75 mm and greater) are too large to allow easy testing. This paper, which is in two parts, aims to address the issues and considerations relevant to torque in mooring chain. The first part introduces a frictionless theory that predicts the resultant torques and 'lift' in the links as non-dimensionalized functions of the angle of twist. Fortran code is presented in an Appendix, which allows the reader to make use of the analysis. The second part of the paper presents results from experimental work on both stud-less (41 mm) and stud-link (20.5 and 56 mm) chains. Torsional data are presented in both 'constant twist' and 'constant load' forms, as well as considering the lift between the links.
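The torsional results above are reported as non-dimensionalized functions of the angle of twist. Below is a minimal sketch of one common way to non-dimensionalize such measurements, assuming torque is scaled by the product of axial load and bar diameter; this convention is an assumption for illustration, not necessarily the one used in the paper.

```python
# Hypothetical sketch: non-dimensionalizing chain torque measurements.
# The normalization T / (F * d) is an assumed convention, not taken from the paper.

def nondimensional_torque(torque_nm: float, axial_load_n: float, bar_diameter_m: float) -> float:
    """Return torque normalized by axial load times bar diameter (dimensionless)."""
    return torque_nm / (axial_load_n * bar_diameter_m)

# Illustrative numbers only: a 41 mm stud-less chain under 100 kN axial load
# generating 850 N.m of torque at some twist angle.
print(nondimensional_torque(850.0, 100e3, 0.041))  # ~0.207 (dimensionless)
```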
Abstract:
This book is the latest in a series of books growing out of the International Joint Conferences on Computer, Information and Systems Sciences and Engineering. It includes chapters in the most advanced areas of Computing, Informatics, Systems Sciences and Engineering. It is accessible to a wide range of readers, including professors, researchers, practitioners and students. The book presents a set of rigorously reviewed world-class manuscripts addressing and detailing state-of-the-art research projects in the areas of Computer Science, Informatics, and Systems Sciences and Engineering. It includes selected papers from the conference proceedings of the Ninth International Joint Conferences on Computer, Information, and Systems Sciences, and Engineering (CISSE 2013). Coverage includes topics in: Industrial Electronics, Technology & Automation; Telecommunications and Networking; Systems, Computing Sciences and Software Engineering; Engineering Education, Instructional Technology, Assessment, and E-learning.
Abstract:
Science has developed from rational-empirical methods, with the consequence that existing phenomena are represented without their root causes being understood. The question that now arises is the sense of being; put simply, one can say that dogmatic religion led to misinterpretations, whereas the empirical sciences contain exact rational representations of phenomena. Science has thus been able to free itself from dogmatic religion. The project for the sciences of being seeks to return to reality its essential foundations; under the plan of a theory of systems, this necessarily involves a search for the meaning of Reality.
Abstract:
The convergence of Internet technologies and the field of expert systems has offered new ways of sharing and distributing knowledge. However, there has been a general lack of research in the area of web-based expert systems (ES). This paper addresses the issues associated with the design, development, and use of web-based ES from the standpoint of the benefits and challenges of developing and using them. The original theory and concepts of conventional ES were reviewed and a knowledge engineering framework for developing them was revisited. The study considered three web-based ES: WITS-advisor, for e-business strategy development; Fish-Expert, for fish disease diagnosis; and IMIS, to promote intelligent interviews. The benefits and challenges in developing and using ES are discussed by comparing them with traditional standalone systems from development and application perspectives. © 2004 Elsevier B.V. All rights reserved.
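For context, the core of a conventional rule-based ES is an inference loop over condition-conclusion rules. The sketch below is a minimal, hypothetical forward-chaining engine with made-up fish-disease rules; it is not the knowledge engineering framework or any of the systems (WITS-advisor, Fish-Expert, IMIS) discussed in the paper.

```python
# Minimal forward-chaining rule engine, sketching the core of a conventional
# rule-based expert system. Illustrative toy only; the rules are invented.

rules = [
    ({"fins_eroded", "lethargic"}, "suspect_bacterial_infection"),             # hypothetical rule
    ({"suspect_bacterial_infection", "water_quality_poor"}, "recommend_water_change"),
]

def forward_chain(facts: set[str]) -> set[str]:
    """Repeatedly fire rules whose conditions are satisfied until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"fins_eroded", "lethargic", "water_quality_poor"}))
```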
Abstract:
This research aimed at developing a research framework for the emerging field of enterprise systems engineering (ESE). The framework consists of an ESE definition, an ESE classification scheme, and an ESE process. This study views an enterprise as a system that creates value for its customers. Thus, developing the framework made use of systems theory and IDEF methodologies. This study defined ESE as an engineering discipline that develops and applies systems theory and engineering techniques to the specification, analysis, design, and implementation of an enterprise for its life cycle. The proposed ESE classification scheme breaks down an enterprise system into four elements: work, resources, decision, and information. Each enterprise element is specified with four system facets: strategy, competency, capacity, and structure. Each element-facet combination is subject to the engineering process of specification, analysis, design, and implementation, to achieve its pre-specified performance with respect to cost, time, quality, and benefit to the enterprise. This framework is intended for identifying research voids in the ESE discipline. It also helps to apply engineering and systems tools to this emerging field. It harnesses the relationships among various enterprise aspects and bridges the gap between engineering and management practices in an enterprise. The proposed ESE process is generic. It consists of a hierarchy of engineering activities presented in an IDEF0 model. Each activity is defined with its input, output, constraints, and mechanisms. The output of an ESE effort can be a partial or whole enterprise system design for its physical, managerial, and/or informational layers. The proposed ESE process is applicable to a new enterprise system design or an engineering change in an existing system. The long-term goal of this study is the development of a scientific foundation for ESE research and development.
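A minimal data-structure sketch of the classification scheme described above, assuming it can be laid out as a 4 x 4 grid of element-facet cells, each carrying the same engineering process and performance measures. The element, facet, process, and performance names follow the abstract; the dictionary layout itself is an assumption.

```python
# Sketch of the proposed ESE classification scheme as a 4 x 4 grid of
# element-facet "cells". Layout is illustrative, not taken from the study.

ELEMENTS = ("work", "resources", "decision", "information")
FACETS = ("strategy", "competency", "capacity", "structure")
PROCESS = ("specification", "analysis", "design", "implementation")
PERFORMANCE = ("cost", "time", "quality", "benefit")

# One engineering cell per element-facet combination (16 in total).
classification = {
    (element, facet): {"process": list(PROCESS), "performance": dict.fromkeys(PERFORMANCE)}
    for element in ELEMENTS
    for facet in FACETS
}

print(len(classification))                                # 16 element-facet combinations
print(classification[("work", "strategy")]["process"])    # shared engineering process
```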
Abstract:
The anatomy and microstructure of the spine, and in particular of the intervertebral disc, are intimately linked to how they operate in vivo and how they distribute loads to the adjacent musculature and bony anatomy. Degeneration of the intervertebral discs may be characterised by a loss of hydration, loss of disc height, a granular texture and the presence of annular lesions. As such, degeneration of the intervertebral discs compromises the mechanical integrity of their components and results in adaptation and modification of the mechanical means by which loads are distributed between adjacent spinal motion segments.
Abstract:
There has been considerable research conducted over the last 20 years focused on predicting motor vehicle crashes on transportation facilities. The range of statistical models commonly applied includes binomial, Poisson, Poisson-gamma (or negative binomial), zero-inflated Poisson and negative binomial models (ZIP and ZINB), and multinomial probability models. Given the range of possible modeling approaches and the host of assumptions associated with each, making an intelligent choice for modeling motor vehicle crash data is difficult. There is little discussion in the literature comparing different statistical modeling approaches, identifying which statistical models are most appropriate for modeling crash data, and providing a strong justification from basic crash principles. In the recent literature, it has been suggested that the motor vehicle crash process can successfully be modeled by assuming a dual-state data-generating process, which implies that entities (e.g., intersections, road segments, pedestrian crossings, etc.) exist in one of two states: perfectly safe and unsafe. As a result, the ZIP and ZINB are two models that have been applied to account for the preponderance of "excess" zeros frequently observed in crash count data. The objective of this study is to provide defensible guidance on how to appropriately model crash data. We first examine the motor vehicle crash process using theoretical principles and a basic understanding of the crash process. It is shown that the fundamental crash process follows a Bernoulli trial with unequal probability of independent events, also known as Poisson trials. We examine the evolution of statistical models as they apply to the motor vehicle crash process, and indicate how well they statistically approximate the crash process. We also present the theory behind dual-state process count models, and note why they have become popular for modeling crash data. A simulation experiment is then conducted to demonstrate how crash data give rise to the "excess" zeros frequently observed in crash data. It is shown that the Poisson and other mixed probabilistic structures are approximations assumed for modeling the motor vehicle crash process. Furthermore, it is demonstrated that under certain (fairly common) circumstances excess zeros are observed, and that these circumstances arise from low exposure and/or inappropriate selection of time/space scales and not from an underlying dual-state process. In conclusion, carefully selecting the time/space scales for analysis, including an improved set of explanatory variables and/or unobserved heterogeneity effects in count regression models, or applying small-area statistical methods (observations with low exposure) represent the most defensible modeling approaches for datasets with a preponderance of zeros.
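A small simulation sketch in the spirit of the experiment described above: crashes are generated as Bernoulli trials with small, unequal probabilities (Poisson trials), and exposure varies strongly across sites. The specific probabilities and exposure range are illustrative assumptions, not the values used in the study, but they are enough to produce more zero counts than a single Poisson fit with the same mean would predict.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_site_counts(n_sites=2000, p_crash=0.001):
    """Crash counts from Poisson trials: each site sees `exposure` independent
    Bernoulli trials (e.g. vehicles passing) with small, unequal crash
    probabilities per trial. Exposure is heterogeneous and often low."""
    exposures = rng.integers(10, 3000, size=n_sites)    # assumed exposure range
    counts = np.empty(n_sites, dtype=int)
    for i, n in enumerate(exposures):
        p = rng.uniform(0, 2 * p_crash, size=n)         # unequal probabilities per trial
        counts[i] = rng.binomial(1, p).sum()
    return counts

counts = simulate_site_counts()
lam = counts.mean()
print(f"observed zero fraction          = {(counts == 0).mean():.3f}")
print(f"Poisson(mean={lam:.2f}) predicts = {np.exp(-lam):.3f}")  # fewer zeros than observed
```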
Abstract:
The paper details the results of the first phase of ongoing research into the sociocultural factors that influence the supervision of higher degrees research (HDR) engineering students in the Faculty of Built Environment and Engineering (BEE) and the Faculty of Science and Technology (FaST) at Queensland University of Technology. A quantitative analysis was performed on the results of an online survey administered to 179 engineering students. The study reveals that cultural barriers affect students' progression and the development of confidence in their research programs. We argue that in order to assist international and non-English speaking background (NESB) research students to overcome such culturally embedded challenges in engineering research, it is important for supervisors to understand this cohort's unique pedagogical needs and to develop intercultural sensitivity in their pedagogical practice in postgraduate research supervision. To facilitate this, the governing body (Office of Research) can play a vital role, not only in creating the required support structures but also in ensuring their uniform implementation across the board.
Abstract:
Auto rickshaws (3-wheelers) are the most sought-after transport among the urban and rural poor in India. The assembly of the vehicle involves the assembly of several major components. The L-angle is the component that connects the front panel to the vehicle floor. The current L-angle part has been observed to suffer permanent deformation failure over time. This paper studies the effect of adding stiffeners to the L-angle to increase the strength of the component. A physical model of the L-angle was reverse engineered and modelled in CAD before static loading analyses were carried out on the model using finite element analysis. The modified L-angle fitted with stiffeners was shown to be able to withstand more load compared to the previous design.
Abstract:
Item folksonomy, or tag information, is now widely available on the web. However, since tags are arbitrary words supplied by users, they contain a lot of noise, such as tag synonyms, semantic ambiguities and personal tags. Such noise makes it difficult to improve the accuracy of item recommendations. In this paper, we propose to combine item taxonomy and folksonomy to reduce the noise of tags and make personalized item recommendations. Experiments conducted on a dataset collected from Amazon.com demonstrated the effectiveness of the proposed approaches. The results suggest that recommendation accuracy can be further improved if we consider the viewpoints and the vocabularies of both experts and users.
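A hypothetical sketch of one way a taxonomy can reduce tag noise before computing item similarity: free-form user tags are mapped onto controlled taxonomy terms, collapsing synonyms and discarding personal tags. The mapping table and the Jaccard similarity are illustrative assumptions, not the paper's algorithm.

```python
# Illustrative tag-to-taxonomy mapping; entries mapped to None are personal
# tags to be discarded. The table and items below are invented examples.
tag_to_taxonomy = {
    "sci-fi": "science fiction",
    "scifi": "science fiction",
    "space opera": "science fiction",
    "to-read": None,            # personal tag: discard
    "cooking": "cookbooks",
}

def normalize_tags(tags):
    """Replace raw tags with taxonomy terms, dropping unmapped or personal tags."""
    return {tag_to_taxonomy[t] for t in tags if tag_to_taxonomy.get(t)}

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

item_a = normalize_tags({"sci-fi", "to-read", "space opera"})
item_b = normalize_tags({"scifi"})
print(jaccard(item_a, item_b))  # 1.0 after noise reduction; 0.0 on the raw tag sets
```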
Abstract:
Two decades after its inception, Latent Semantic Analysis (LSA) has become part and parcel of every modern introduction to Information Retrieval. For any tool that matures so quickly, it is important to check its lore and limitations, or else stagnation will set in. We focus here on the three main aspects of LSA that are well accepted, the gist of which can be summarized as follows: (1) LSA recovers latent semantic factors underlying the document space; (2) this can be accomplished through lossy compression of the document space by eliminating lexical noise; and (3) the latter can best be achieved by Singular Value Decomposition. For each aspect we performed experiments analogous to those reported in the LSA literature and compared the evidence brought to bear in each case. On the negative side, we show that the above claims about LSA are much more limited than commonly believed. Even a simple example shows that LSA does not recover the optimal semantic factors as intended in the pedagogical example used in many LSA publications. Additionally, and remarkably deviating from LSA lore, LSA does not scale up well: the larger the document space, the less likely it is that LSA recovers an optimal set of semantic factors. On the positive side, we describe new algorithms to replace LSA (and more recent alternatives such as pLSA, LDA, and kernel methods) by trading its l2 space for an l1 space, thereby guaranteeing an optimal set of semantic factors. These algorithms seem to salvage the spirit of LSA as we think it was initially conceived.
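A minimal sketch of the classic LSA step the paper critiques: rank-k truncation of a term-document matrix via SVD, a lossy compression that is optimal only in the l2 (least-squares) sense. The toy matrix is illustrative, not one of the paper's examples.

```python
import numpy as np

# Toy term-document matrix: rows are terms, columns are documents.
A = np.array([
    [2, 1, 0, 0],
    [1, 2, 0, 0],
    [0, 0, 1, 2],
    [0, 0, 2, 1],
], dtype=float)

# Classic LSA: keep only the k largest singular values.
k = 2
U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]   # rank-k (l2-optimal) approximation
doc_vectors = np.diag(s[:k]) @ Vt[:k, :]      # documents in the latent factor space

print(np.round(A_k, 2))
print(np.round(doc_vectors, 2))
```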