818 results for fuzzy rule base models
Abstract:
The validation of Computed Tomography (CT) based 3D models is an integral part of studies involving 3D models of bones. This is of particular importance when such models are used for Finite Element studies. The validation of 3D models typically involves the generation of a reference model representing the bone's outer surface. Several different devices have been utilised for digitising a bone's outer surface, such as mechanical 3D digitising arms, mechanical 3D contact scanners, electro-magnetic tracking devices and 3D laser scanners. However, none of these devices is capable of digitising a bone's internal surfaces, such as the medullary canal of a long bone. Therefore, this study investigated the use of a 3D contact scanner, in conjunction with a microCT scanner, for generating a reference standard for validating the internal and external surfaces of a CT based 3D model of an ovine femur. One fresh ovine limb was scanned using a clinical CT scanner (Philips Brilliance 64) with a pixel size of 0.4 mm and a slice spacing of 0.5 mm. The limb was then dissected to obtain the soft-tissue-free bone, with care taken to protect the bone's surface. A desktop mechanical 3D contact scanner (MDX-20, Roland DG Corporation, Japan) was used to digitise the surface of the denuded bone at a resolution of 0.3 × 0.3 × 0.025 mm. The digitised surfaces were reconstructed into a 3D model using reverse engineering techniques in Rapidform (INUS Technology, Korea). After digitisation, the distal and proximal parts of the bone were removed so that the shaft could be scanned with a microCT scanner (µCT40, Scanco Medical, Switzerland). The shaft, with the bone marrow removed, was immersed in water and scanned with a voxel size of 0.03 mm. The bone contours were extracted from the image data using the Canny edge filter in Matlab (The MathWorks). The extracted bone contours were reconstructed into 3D models using Amira 5.1 (Visage Imaging, Germany).
The 3D models of the bone's outer surface reconstructed from the CT and microCT data were compared against the 3D model generated using the contact scanner. The 3D model of the inner canal reconstructed from the microCT data was compared against the 3D model reconstructed from the clinical CT scanner data. The disparity between the surface geometries of two models was calculated in Rapidform and recorded as an average distance with standard deviation. Comparing the 3D model of the whole bone generated from the clinical CT data with the reference model gave a mean error of 0.19±0.16 mm; the shaft was more accurate (0.08±0.06 mm) than the proximal (0.26±0.18 mm) and distal (0.22±0.16 mm) parts. The comparison between the outer 3D model generated from the microCT data and the contact scanner model gave a mean error of 0.10±0.03 mm, indicating that microCT-generated models are sufficiently accurate for validating 3D models generated by other methods. The comparison of the inner models generated from the microCT data with those from the clinical CT data gave an error of 0.09±0.07 mm. Utilising a mechanical contact scanner in conjunction with a microCT scanner thus made it possible to validate both the outer surface of a CT based 3D model of an ovine femur and the surface of the model's medullary canal.
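The surface disparity metric above (an average distance with standard deviation between two surface models) can be illustrated with a minimal nearest-neighbour sketch. The actual comparison was performed in Rapidform, so the brute-force computation below is only a hypothetical stand-in, not the software's algorithm:

```python
import numpy as np

def surface_deviation(test_pts, ref_pts):
    """Mean and standard deviation of the nearest-neighbour distance
    from each test point to the reference surface (brute force,
    suitable only for small point clouds)."""
    test_pts = np.asarray(test_pts, dtype=float)
    ref_pts = np.asarray(ref_pts, dtype=float)
    # pairwise distance matrix, shape (n_test, n_ref)
    d = np.linalg.norm(test_pts[:, None, :] - ref_pts[None, :, :], axis=2)
    nearest = d.min(axis=1)
    return nearest.mean(), nearest.std()

# toy data: a flat reference "surface" and a copy offset by 0.1 mm
ref = np.array([[x, y, 0.0] for x in range(5) for y in range(5)], dtype=float)
test = ref + np.array([0.0, 0.0, 0.1])
mean_d, sd_d = surface_deviation(test, ref)
print(round(mean_d, 3), round(sd_d, 3))  # 0.1 0.0
```

Real mesh-comparison tools use point-to-surface rather than point-to-point distances, which gives tighter error estimates; the sketch only conveys the "mean error ± SD" idea.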
Abstract:
Toll plazas are particularly susceptible to build-ups of vehicle-emitted pollutants because vehicles pass through in low gear. To investigate this, three-dimensional computational fluid dynamics simulations of pollutant dispersion were performed using the standard k–ε turbulence model. The effects of wind speed, wind direction and topography on pollutant dispersion are discussed. The Wuzhuang toll plaza on the Hefei–Nanjing expressway is considered, and the effect of the retaining walls along both sides of the plaza on pollutant dispersion is analysed. Pollutant concentrations near the tollbooths increase as the angle between the wind direction and the traffic direction increases, implying that the retaining walls impede dispersion. The slope of the walls has little influence on the variations in pollutant concentration.
Abstract:
A method for matching rear air suspensions to a heavy truck is discussed, and a fuzzy control strategy is developed that improves both the ride comfort and the road friendliness of the truck by adjusting the damping coefficients of the suspension system. First, a ten-DOF whole-vehicle road model of a Dongfeng EQ1141G7DJ heavy truck was set up based on Matlab/Simulink and vehicle dynamics. Appropriate passive air suspensions were then chosen to replace the truck's original rear leaf springs according to truck-suspension matching criteria, and the stiffness of the front leaf springs was adjusted accordingly. Semi-active fuzzy controllers were then designed to further enhance the truck's ride comfort and road friendliness. Simulation of the semi-active fuzzy control strategy indicated that both ride comfort and road friendliness could be enhanced effectively under various road conditions. The proposed strategy may provide a theoretical basis for the design and development of truck suspension systems in China.
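The abstract does not give the controller's actual rule base, so the following is only a generic sketch of how a semi-active fuzzy controller might map a measured velocity to a damping coefficient: three triangular antecedents, three rule consequents, and centroid defuzzification. All membership breakpoints and damping values are hypothetical:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fuzzy_damping(v_body):
    """Map the absolute sprung-mass velocity (m/s) to a damping
    coefficient (N*s/m) via three fuzzy rules and centroid
    defuzzification; all numbers here are illustrative."""
    x = abs(v_body)
    # rule antecedents: velocity is slow / medium / fast
    mu = [tri(x, -0.1, 0.0, 0.3), tri(x, 0.0, 0.3, 0.6), tri(x, 0.3, 0.6, 1.0)]
    # rule consequents: soft / medium / firm damping coefficients
    c = [1000.0, 3000.0, 6000.0]
    return sum(m * ci for m, ci in zip(mu, c)) / (sum(mu) + 1e-12)

print(round(fuzzy_damping(0.0)))  # 1000: slow velocity -> soft damping
print(round(fuzzy_damping(0.6)))  # 6000: fast velocity -> firm damping
```

A real semi-active controller would typically use two inputs (e.g. body velocity and suspension relative velocity) and many more rules; the single-input version above only shows the inference mechanism.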
Abstract:
Two studies were conducted to investigate empirical support for two models relating to the development of self-concepts and self-esteem in upper-primary school children. The first study investigated the social learning model by examining the relationship between mothers' and fathers' self-reported self-concepts and self-esteem and the self-reported self-concepts and self-esteem of their children. The second study investigated the symbolic interaction model by examining the relationship between children's perception of the frequency of positive and negative statements made by parents and their self-reported self-concepts and self-esteem. The results of these studies suggested that what parents say to their children and how they interact with them is more closely related to their children's self-perceptions than the role of modelling parental attitudes and behaviours. The findings highlight the benefits of parents talking positively to their children.
Abstract:
With increasingly complex engineering assets and tight economic requirements, asset reliability becomes more crucial in Engineering Asset Management (EAM). Improving the reliability of systems has always been a major aim of EAM. Reliability assessment using degradation data has become a significant approach for evaluating the reliability and safety of critical systems. Degradation data often provide more information than failure time data for assessing reliability and predicting the remnant life of systems. In general, degradation is the reduction in performance, reliability, and life span of assets. Many failure mechanisms can be traced to an underlying degradation process. Degradation is a stochastic process and can therefore be modelled using several approaches. Degradation modelling techniques have generated a great amount of research in the reliability field, yet although degradation models play a significant role in reliability analysis, few review papers cover them. This paper presents a review of the existing literature on commonly used degradation models in reliability analysis. The current research and developments in degradation models are reviewed and summarised, and the models are synthesised and classified into groups. Additionally, the paper attempts to identify the merits, limitations, and applications of each model, and it outlines potential applications of these degradation models in asset health and reliability prediction.
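As a small illustration of one commonly used class of stochastic degradation model, the sketch below simulates a Wiener (linear drift plus Gaussian noise) degradation path and checks it against a failure threshold. All parameters are illustrative, not drawn from the reviewed literature:

```python
import random

def degradation_path(x0=0.0, drift=0.1, sigma=0.05, steps=50, seed=1):
    """Simulate a discretised Wiener-process degradation path
    X_t = x0 + drift*t + sigma*B_t (parameters are illustrative)."""
    rng = random.Random(seed)
    x, path = x0, [x0]
    for _ in range(steps):
        x += drift + sigma * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

path = degradation_path()
threshold = 4.0  # hypothetical failure threshold
crossed = any(x >= threshold for x in path)
print(len(path))  # 51 samples: the initial state plus 50 steps
```

In a Wiener-process model the first-passage time to the threshold has a known (inverse Gaussian) distribution, which is what makes such models attractive for remnant-life prediction.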
Abstract:
Modern Engineering Asset Management (EAM) requires the accurate assessment of current and the prediction of future asset health condition. Suitable mathematical models that are capable of predicting Time-to-Failure (TTF) and the probability of failure in future time are essential. In traditional reliability models, the lifetime of assets is estimated using failure time data. However, in most real-life situations and industry applications, the lifetime of assets is influenced by different risk factors, which are called covariates. The fundamental notion in reliability theory is the failure time of a system and its covariates. These covariates change stochastically and may influence and/or indicate the failure time. Research shows that many statistical models have been developed to estimate the hazard of assets or individuals with covariates. An extensive amount of literature on hazard models with covariates (also termed covariate models), including theory and practical applications, has emerged. This paper is a state-of-the-art review of the existing literature on these covariate models in both the reliability and biomedical fields. One of the major purposes of this expository paper is to synthesise these models from both industrial reliability and biomedical fields and then contextually group them into non-parametric and semi-parametric models. Comments on their merits and limitations are also presented. Another main purpose of this paper is to comprehensively review and summarise the current research on the development of the covariate models so as to facilitate the application of more covariate modelling techniques into prognostics and asset health management.
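A canonical semi-parametric covariate model of the kind reviewed here is the Cox proportional-hazards model, in which the hazard is h(t|x) = h0(t)·exp(β·x). The sketch below computes a relative hazard between two covariate vectors; the covariates and coefficients are hypothetical:

```python
import math

def relative_hazard(beta, x, x_ref):
    """Hazard ratio exp(beta . (x - x_ref)) under a Cox
    proportional-hazards model; the baseline hazard h0(t) cancels."""
    return math.exp(sum(b * (xi - xr) for b, xi, xr in zip(beta, x, x_ref)))

# hypothetical covariates: [vibration level, operating temperature in deg C]
beta = [0.9, 0.02]
hr = relative_hazard(beta, x=[1.0, 80.0], x_ref=[0.0, 60.0])
print(round(hr, 2))  # exp(0.9 + 0.4) = exp(1.3) -> 3.67
```

The point of the semi-parametric form is visible here: the covariate effect can be evaluated without ever specifying the baseline hazard, which is why such models suit industrial data where the baseline failure behaviour is poorly known.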
Abstract:
This research examines how men react to male models in print advertisements. In two experiments, we show that the gender identity of men influences their responses to advertisements featuring a masculine, feminine, or androgynous male model. In addition, we explore the extent to which men feel they will be classified by others as similar to the model as a mechanism for these effects. Specifically, masculine men respond most favorably to masculine models and are negative toward feminine models. In contrast, feminine men prefer feminine models when their private self is salient. Yet in a collective context, they prefer masculine models. These experiments shed light on how gender identity and self-construal influence male evaluations and illustrate the social pressure on men to endorse traditional masculine portrayals. We also present implications for advertising practice.
Abstract:
In two experiments, we show that the beliefs women have about the controllability of their weight (i.e., weight locus of control) influence their responses to advertisements featuring a larger-sized female model or a slim female model. Further, we examine self-referencing as a mechanism for these effects. Specifically, people who believe they can control their weight (“internals”) respond most favorably to slim models in advertising, and this favorable response is mediated by self-referencing. In contrast, people who feel powerless about their weight (“externals”) self-reference larger-sized models, but only prefer larger-sized models when the advertisement is for a non-fattening product. For fattening products, they exhibit a similar preference for larger-sized models and slim models. Together, these experiments shed light on the effect of model body size and the role of weight locus of control in influencing consumer attitudes.
Abstract:
A configurable process model provides a consolidated view of a family of business processes. It promotes the reuse of proven practices by providing analysts with a generic modelling artifact from which to derive individual process models. Unfortunately, the scope of existing notations for configurable process modelling is restricted, thus hindering their applicability. Specifically, these notations focus on capturing tasks and control-flow dependencies, neglecting equally important ingredients of business processes such as data and resources. This research fills this gap by proposing a configurable process modelling notation incorporating features for capturing resources, data and physical objects involved in the performance of tasks. The proposal has been implemented in a toolset that assists analysts during the configuration phase and guarantees the correctness of the resulting process models. The approach has been validated by means of a case study from the film industry.
Abstract:
The fracture healing process is modulated by the mechanical environment created by imposed loads and motion between the bone fragments. Contact between the fragments obviously results in a significantly different stress and strain environment to a uniform fracture gap containing only soft tissue (e.g. haematoma). The assumption of the latter in existing computational models of the healing process will hence exaggerate the inter-fragmentary strain in many clinically relevant cases. To address this issue, we introduce the concept of a contact zone that represents a variable degree of contact between cortices by the relative proportions of bone and soft tissue present. This is introduced as an initial condition in a two-dimensional iterative finite element model of a healing tibial fracture, in which material properties are defined by the volume fractions of each tissue present. The algorithm governing the formation of cartilage and bone in the fracture callus uses fuzzy logic rules based on strain energy density resulting from axial compression. The model predicts that increasing the degree of initial bone contact reduces the amount of callus formed (periosteal callus thickness 3.1 mm without contact, down to 0.5 mm with 10% bone in the contact zone). This is consistent with the greater effective stiffness in the contact zone and hence a smaller inter-fragmentary strain. These results demonstrate that the contact zone strategy reasonably simulates the differences in the healing sequence resulting from the closeness of reduction.
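The paper's calibrated fuzzy rule base is not given in the abstract, but the idea of mapping strain energy density to a tissue-formation rule can be sketched with trapezoidal membership functions. All breakpoints below are hypothetical, chosen only to show the shape of such a rule base:

```python
def trap(x, a, b, c, d):
    """Trapezoidal membership function: rises on [a, b], plateau on
    [b, c], falls on [c, d]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def tissue_memberships(sed):
    """Degree to which a strain energy density value activates each
    tissue-formation rule in the callus; all breakpoints are
    illustrative, not the paper's calibrated rule base."""
    return {
        "resorption":     trap(sed, -1.0, -1.0, 0.01, 0.05),
        "bone formation": trap(sed, 0.01, 0.05, 0.15, 0.25),
        "cartilage":      trap(sed, 0.15, 0.25, 10.0, 10.0),
    }

m = tissue_memberships(0.10)
print(max(m, key=m.get))  # bone formation
```

Because neighbouring memberships overlap, intermediate stimulus values partially activate two rules at once, which is what lets the finite element model blend tissue volume fractions smoothly between iterations rather than switching materials abruptly.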
Abstract:
Advertising research has generally not gone beyond offering support for a positive effect where ethnic models in advertising are viewed by consumers of the same ethnicity. This study offers an explanation behind this phenomenon that can be useful to marketers using self-reference theory. Our experiment reveals a strong self-referencing effect for ethnic minority individuals. Specifically, Asian subjects (the ethnic minority group) self-referenced ads with Asian models more than white subjects (the ethnic majority group). However, this result was not evident for white subjects. Implications for academics and advertisers are discussed.
Abstract:
Since the 1980s, industries and researchers have sought to better understand the quality of services due to the rise in their importance (Brogowicz, Delene and Lyth 1990). More recent developments with online services, coupled with growing recognition of service quality (SQ) as a key contributor to national economies and as an increasingly important competitive differentiator, amplify the need to revisit our understanding of SQ and its measurement. Although ‘SQ’ can be broadly defined as “a global overarching judgment or attitude relating to the overall excellence or superiority of a service” (Parasuraman, Berry and Zeithaml 1988), the term has many interpretations. There has been considerable progress on how to measure SQ perceptions, but little consensus has been achieved on what should be measured. There is agreement that SQ is multi-dimensional, but little agreement as to the nature or content of these dimensions (Brady and Cronin 2001). For example, within the banking sector, there exist multiple SQ models, each consisting of varying dimensions. The existence of multiple conceptions and the lack of a unifying theory bring the credibility of existing conceptions into question, and beg the question of whether it is possible at some higher level to define SQ broadly such that it spans all service types and industries. This research aims to explore the viability of a universal conception of SQ, primarily through a careful re-visitation of the services and SQ literature. The study analyses the strengths and weaknesses of the highly regarded and widely used global SQ model (SERVQUAL), which reflects a single-level approach to SQ measurement. The SERVQUAL model states that customers evaluate the SQ of each service encounter based on five dimensions, namely reliability, assurance, tangibles, empathy and responsiveness. SERVQUAL, however, failed to address what needs to be reliable, assured, tangible, empathetic and responsive.
This research also addresses a more recent global SQ model from Brady and Cronin (2001), the B&C (2001) model, which has the potential to be the successor of SERVQUAL in that it encompasses other global SQ models and addresses the ‘what’ questions that SERVQUAL did not. The B&C (2001) model conceives SQ as being multidimensional and multi-level, with this hierarchical approach to SQ measurement better reflecting human perceptions. In line with the initial intention of SERVQUAL, which was developed to be generalizable across industries and service types, this research aims to develop a conceptual understanding of SQ, via literature and reflection, that encompasses the content/nature of factors related to SQ, and addresses the benefits and weaknesses of various SQ measurement approaches (i.e. disconfirmation versus perceptions-only). Such an understanding of SQ seeks to transcend industries and service types, with the intention of extending our knowledge of SQ and assisting practitioners in understanding and evaluating SQ. The candidate’s research has been conducted within, and seeks to contribute to, the ‘IS-Impact’ research track of the IT Professional Services (ITPS) Research Program at QUT. The vision of the track is “to develop the most widely employed model for benchmarking Information Systems in organizations for the joint benefit of research and practice.” The ‘IS-Impact’ research track has developed an Information Systems (IS) success measurement model, the IS-Impact Model (Gable, Sedera and Chan 2008), which seeks to fulfill the track’s vision. Results of this study will help future researchers in the ‘IS-Impact’ research track address questions such as:
• Is SQ an antecedent or consequence of the IS-Impact model, or both?
• Has SQ already been addressed by existing measures of the IS-Impact model?
• Is SQ a separate, new dimension of the IS-Impact model?
• Is SQ an alternative conception of the IS?
Results from the candidate’s research suggest that SQ dimensions can be classified at a higher level which is encompassed by the B&C (2001) model’s three primary dimensions (interaction, physical environment and outcome). The candidate also notes that it might be viable to re-word the ‘physical environment quality’ primary dimension to ‘environment quality’ so as to better encompass both physical and virtual scenarios (e.g. websites). The candidate does not rule out the global feasibility of the B&C (2001) model’s nine sub-dimensions, but acknowledges that more work has to be done to better define them. The candidate observes that the ‘expertise’, ‘design’ and ‘valence’ sub-dimensions are supportive representations of the ‘interaction’, ‘physical environment’ and ‘outcome’ primary dimensions respectively. That is, customers evaluate each primary dimension (or each higher level of SQ classification), namely ‘interaction’, ‘physical environment’ and ‘outcome’, based on the ‘expertise’, ‘design’ and ‘valence’ sub-dimensions respectively. The ability to classify SQ dimensions at a higher level, coupled with support for the measures that make up this higher level, leads the candidate to propose the B&C (2001) model as a unifying theory that acts as a starting point for measuring SQ and the SQ of IS. The candidate also notes, in parallel with the continuing validation and generalization of the IS-Impact model, that there is value in alternatively conceptualizing the IS as a ‘service’ and ultimately triangulating measures of IS SQ with the IS-Impact model. These further efforts are beyond the scope of the candidate’s study. Results from the candidate’s research also suggest that both the disconfirmation and perceptions-only approaches have their merits, and the choice of approach depends on the objective(s) of the study.
Should the objective be an overall evaluation of SQ, the perceptions-only approach is more appropriate, as it is more straightforward and reduces administrative overheads. However, should the objective be to identify SQ gaps (shortfalls), the (measured) disconfirmation approach is more appropriate, as it can identify the areas that need improvement.
Abstract:
Australia needs highly skilled workers to sustain a healthy economy. Current employment-based training models have limitations in meeting the demands for highly skilled labour supply. The research explored current and emerging models of employment-based training to propose more effective models at higher VET qualifications that can maintain a balance between institution and work-based learning.
Abstract:
Cognitive modelling of phenomena in clinical practice allows the operationalisation of otherwise diffuse descriptive terms such as craving or flashbacks. This supports the empirical investigation of the clinical phenomena and the development of targeted treatment interventions. This paper focuses on the cognitive processes underpinning craving, which is recognised as a motivating experience in substance dependence. We use a high-level cognitive architecture, Interacting Cognitive Subsystems (ICS), to compare two theories of craving: Tiffany's theory, centred on the control of automated action schemata, and our own Elaborated Intrusion theory of craving. Data from a questionnaire study of the subjective aspects of everyday desires experienced by a large non-clinical population are presented. Both the data and the high-level modelling support the central claim of the Elaborated Intrusion theory that imagery is a key element of craving, providing the subjective experience and mediating much of the associated disruption of concurrent cognition.
Abstract:
With the widespread application of electronic learning (e-Learning) technologies to education at all levels, an increasing number of online educational resources and messages are generated from the corresponding e-Learning environments. Nevertheless, it is quite difficult, if not totally impossible, for instructors to read through and analyze the online messages to predict the progress of their students on the fly. The main contribution of this paper is the illustration of a novel concept map generation mechanism which is underpinned by a fuzzy domain ontology extraction algorithm. The proposed mechanism can automatically construct concept maps based on the messages posted to online discussion forums. By browsing the concept maps, instructors can quickly identify the progress of their students and adjust the pedagogical sequence on the fly. Our initial experimental results reveal that the accuracy and quality of the automatically generated concept maps are promising. Our work opens the door to the development and application of intelligent software tools to enhance e-Learning.
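The paper's fuzzy domain ontology extraction algorithm is not detailed in the abstract. As a loosely related toy sketch, concept-map link strengths can be approximated by the fraction of forum messages in which two concepts co-occur; everything below (the vocabulary, posts, and crude substring matching) is hypothetical:

```python
from collections import Counter
from itertools import combinations

def concept_map(messages, vocab):
    """Link strength between two concepts = fraction of messages in
    which both occur (crude substring matching, toy sketch only)."""
    counts = Counter()
    for msg in messages:
        present = sorted(t for t in vocab if t in msg.lower())
        for a, b in combinations(present, 2):
            counts[(a, b)] += 1
    return {pair: n / len(messages) for pair, n in counts.items()}

posts = [
    "recursion uses the call stack",
    "a stack overflow can come from deep recursion",
    "queues differ from stacks",
]
edges = concept_map(posts, vocab={"recursion", "stack", "queue"})
print(round(edges[("recursion", "stack")], 2))  # 0.67: co-occur in 2 of 3 posts
```

A fuzzy ontology would attach graded membership values to concept-category and concept-concept relations rather than raw co-occurrence fractions, but the graph-of-weighted-links output is the same shape an instructor would browse.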