Abstract:
Industrial employment growth has been one of the most dynamic areas of expansion in Asia; however, current trends in industrialised working environments have resulted in greater employee stress. Despite research showing that cultural values affect the way people cope with stress, there is a dearth of psychometrically established tools for measuring these constructs in non-Western countries. Studies of the Ways of Coping Checklist-Revised (WCCL-R) in the West suggest that it has good psychometric properties, but its applicability in the East remains understudied. This study used confirmatory factor analysis (CFA) to validate the WCCL-R constructs in an Asian population of 1,314 participants from Indonesia, Sri Lanka, Singapore, and Thailand. An initial CFA did not confirm the original factor structure; however, a subsequent exploratory factor analysis and CFA supported a 38-item, five-factor model. The revised WCCL-R also showed good reliability and sound construct and concurrent validity in the Asian sample. The 38-item WCCL-R has considerable potential for future occupational stress research in Asian countries.
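The "good reliability" reported for the revised scale is the kind of claim conventionally backed by an internal-consistency coefficient such as Cronbach's alpha. A minimal sketch of the computation, using a fabricated Likert-style score matrix rather than the study's data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of scale totals
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical 5-item subscale scored by 6 respondents on a 1-4 scale
scores = np.array([
    [3, 3, 2, 3, 3],
    [1, 2, 1, 1, 2],
    [4, 4, 3, 4, 4],
    [2, 2, 2, 3, 2],
    [3, 4, 3, 3, 3],
    [1, 1, 2, 1, 1],
])
print(round(cronbach_alpha(scores), 2))
```

Values above roughly 0.7 are conventionally read as acceptable internal consistency; the fabricated matrix above is deliberately consistent across items, so the coefficient comes out high.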
Abstract:
The role that heparanase plays during metastasis and angiogenesis in tumors makes it an attractive target for cancer therapeutics. Despite this enzyme’s significance, most of the assays developed to measure its activity are complex. Moreover, they usually rely on labeling variable preparations of the natural substrate heparan sulfate, making comparisons across studies precarious. To overcome these problems, we have developed a convenient assay based on the cleavage of the synthetic heparin oligosaccharide fondaparinux. The assay measures the appearance of the disaccharide product of heparanase-catalyzed fondaparinux cleavage colorimetrically using the tetrazolium salt WST-1. Because this assay has a homogeneous substrate with a single point of cleavage, the kinetics of the enzyme can be reliably characterized, giving a Km of 46 μM and a kcat of 3.5 s−1 with fondaparinux as substrate. The inhibition of heparanase by the published inhibitor PI-88 was also studied, and a Ki of 7.9 nM was determined. The simplicity and robustness of this method should not only greatly assist routine assays of heparanase activity but also allow adaptation for high-throughput screening of compound libraries, with the data generated being directly comparable across studies.
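With the reported Km of 46 μM and kcat of 3.5 s−1, the rate at any fondaparinux concentration follows from the Michaelis-Menten equation, and a competitive inhibitor raises the apparent Km by a factor of (1 + [I]/Ki). A sketch using these constants (the competitive inhibition mode and the 1 nM enzyme concentration are illustrative assumptions, not stated in the abstract):

```python
def mm_rate(s_uM, kcat=3.5, km_uM=46.0, enzyme_nM=1.0, inhibitor_nM=0.0, ki_nM=7.9):
    """Michaelis-Menten rate (nM product/s), assuming competitive inhibition."""
    km_app = km_uM * (1.0 + inhibitor_nM / ki_nM)   # apparent Km under inhibitor
    return kcat * enzyme_nM * s_uM / (km_app + s_uM)

# At S = Km, the uninhibited rate is half of Vmax (Vmax = kcat * [E] = 3.5 nM/s)
print(mm_rate(46.0))                    # 1.75
# PI-88 at a concentration equal to its Ki doubles the apparent Km,
# lowering the rate at the same substrate level
print(mm_rate(46.0, inhibitor_nM=7.9))
```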
Abstract:
BACKGROUND: Literature and clinical experience suggest that some people experience atypical, complicated or pathological bereavement reactions in response to a major loss. METHOD: Three groups of community-based bereaved subjects--spouses (n = 44), adult children (n = 40), and parents (n = 36)--were followed up four times in the 13 months after their loss. A 17-item scale of core bereavement items was developed and used to investigate the intensity of the bereavement response over time. RESULTS: Cluster analysis revealed a pattern of bereavement-related symptoms approximating a syndrome of chronic grief in 11 (9.2%) of the 120 subjects. None of the respondents displayed a pattern consistent with delayed or absent grief. CONCLUSIONS: In a non-clinical community sample of bereaved people, delayed or absent grief was infrequently seen, unlike chronic grief, which was demonstrated in a minority.
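A cluster analysis that separates a persistently high "chronic grief" pattern from a typical declining trajectory over four follow-ups can be sketched with plain k-means on the intensity scores. All trajectories below are fabricated for illustration; the study's actual clustering method and data are not reproduced here:

```python
import numpy as np

def kmeans(X, init_idx=(0, -1), iters=20):
    """Plain k-means with deterministic initial centroids; returns labels."""
    cent = X[list(init_idx)].astype(float)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        d = ((X[:, None, :] - cent[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        cent = np.array([X[labels == j].mean(axis=0) for j in range(len(cent))])
    return labels

# Hypothetical grief-intensity scores at the four follow-ups:
# most trajectories decline over time; a few stay persistently high.
declining = np.array([[40 + i, 30 + i, 20 + i, 12 + i] for i in range(8)], dtype=float)
chronic = np.array([[42 + i, 41 + i, 40 + i, 39 + i] for i in range(2)], dtype=float)
X = np.vstack([declining, chronic])
labels = kmeans(X)
print(labels)   # the persistently high trajectories form their own cluster
```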
Abstract:
With increasingly complex engineering assets and tight economic requirements, asset reliability becomes more crucial in Engineering Asset Management (EAM). Improving the reliability of systems has always been a major aim of EAM. Reliability assessment using degradation data has become a significant approach for evaluating the reliability and safety of critical systems, as degradation data often provide more information than failure time data for assessing reliability and predicting the remnant life of systems. In general, degradation is the reduction in performance, reliability, and life span of assets, and many failure mechanisms can be traced to an underlying degradation process. Degradation is a stochastic process and can therefore be modelled in several ways, and degradation modelling techniques have generated a great amount of research in the reliability field. Yet while degradation models play a significant role in reliability analysis, few review papers cover them. This paper presents a review of the existing literature on commonly used degradation models in reliability analysis, summarising current research and developments. The study synthesises these models, classifies them into groups, and attempts to identify the merits, limitations, and applications of each model. It also identifies potential applications of these degradation models in asset health and reliability prediction.
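One common family in this literature is the stochastic-process degradation model, e.g. a Wiener process with drift, where failure is the first passage of a degradation threshold and the expected life is threshold/drift. A minimal simulation sketch with hypothetical parameters (drift, noise and threshold are illustrative, not drawn from any reviewed model):

```python
import random

def simulate_degradation(drift=0.5, sigma=0.2, threshold=10.0, dt=0.1, seed=1):
    """Simulate one Wiener-process degradation path and return the
    first-passage time at which it crosses the failure threshold."""
    rng = random.Random(seed)
    x, t = 0.0, 0.0
    while x < threshold:
        x += drift * dt + sigma * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
    return t

# Average first-passage time over several paths;
# theory gives E[T] = threshold / drift = 20 for these parameters
times = [simulate_degradation(seed=s) for s in range(30)]
print(sum(times) / len(times))
```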
Abstract:
This report presents the current state of and approaches in Building Information Modelling (BIM). The report is focussed on providing a desktop audit of the current state and capabilities of the products and applications supporting BIM, including discussion of BIM model servers as well as discipline-specific applications, a distinction explained below. The report aims to give a broad overview of the tools and applications with respect to their BIM capabilities and in no way claims to be exhaustive for individual tools. Chapter 4 includes the research and development agendas pertaining to the BIM approach, based on the observations and analysis from the desktop audit.
Abstract:
The generic IS-success constructs first identified by DeLone and McLean (1992) continue to be widely employed in research. Yet recent work by Petter et al. (2007) has cast doubt on the validity of many mainstream constructs employed in IS research over the past three decades, critiquing the almost universal conceptualization and validation of these constructs as reflective when, in many studies, the measures appear to have been implicitly operationalized as formative. Cited examples of proper specification of the DeLone and McLean constructs are few, particularly in light of their extensive employment in IS research. This paper introduces a four-stage formative construct development framework: Conceive > Operationalize > Respond > Validate (CORV). Employing the CORV framework in an archival analysis of research published in top outlets from 1985 to 2007, the paper explores the extent of possible problems with past IS research due to potential misspecification of the four application-related success dimensions: Individual-Impact, Organizational-Impact, System-Quality and Information-Quality. Results suggest major concerns where there is a mismatch between the Respond and Validate stages. A general dearth of attention to the Operationalize and Respond stages in methodological writings is also observed.
Abstract:
Isolating the impact of a colour, or a combination of colours, is extremely difficult because it is hard to remove other environmental elements, such as sound, odours, light, and occasion, from the experience of being in a place. In order to ascertain the impact of colour on how we interpret the world in day-to-day situations, the current study records participant responses to achromatic scenes of the built environment prior to viewing the same scenes in colour. A number of environments were photographed in colour or copied from design books, and copies of the images were saved in both colour and black/grey/white versions. An overview of the study is introduced by first providing examples of studies which have linked colour to meaning and emotions: for example, yellow is said to be connected to happiness, and red to evoke feelings of anger or passion. A link between colour and the way we understand and/or feel is thus established; however, there is a further need for knowledge of colour in context. In response to this need, the current achromatic/chromatic environmental study is described and discussed in light of the findings, and suggestions for future research are posed. Based on previous research, the authors hypothesised that a shift in participants' environmental perception would occur. It was found that the impact of colour includes a shift in perception of aspects such as a place's atmosphere and youthfulness. Through studio-class discussions it was also noted that the predicted age of the place, its function, and, in association, its potential users were often challenged when colour was added (or deleted). It is posited that the ability of a designer (for example, an interior designer, architect, or landscape architect) to design for a particular target group of users and/or clients will be enhanced through more targeted studies relating colour in situ.
The importance of noting the perceptual shift for the participants in our study, who were young designers, is the realisation that colour potentially holds the power to affect the identity of an architectural form, an interior space, and/or particular elements such as doorways, furniture settings, and the like.
Abstract:
Controlled-rate thermal analysis (CRTA) offers better resolution and a more detailed interpretation of the decomposition processes of a clay mineral such as sepiolite by approaching equilibrium decomposition conditions through the elimination of slow heat transfer to the sample as a controlling parameter. Constant-rate decomposition processes of a non-isothermal nature reveal changes in the sepiolite as it is converted to an anhydride. In the dynamic experiment, two dehydration steps are observed over the temperature ranges ~20-170 and 170-350°C, and three dehydroxylation steps are observed over the ranges 201-337, 337-638 and 638-982°C. CRTA thus enables the separation of the thermal decomposition steps.
Abstract:
Computer aided joint replacement surgery has become very popular in recent years and is being performed in increasing numbers all over the world. The accuracy of the system depends to a major extent on accurate registration and the immobility of the tracker attachment devices on the bone. This study was designed to assess the forces needed to displace the tracker attachment devices in bone simulators. Bone simulators were used to maintain the uniformity of the bone structure during the study. The fixation devices tested were a 3 mm diameter self drilling, self tapping threaded pin, a 4 mm diameter self tapping cortical threaded pin, a 5 mm diameter self tapping cancellous threaded pin and a triplanar fixation device (‘ortholock’) used with three 3 mm pins. All the devices were tested for pull out, translational and rotational forces in unicortical and bicortical fixation modes, along with the normal bang strength and the forces generated by leaning on the devices. The forces required to produce translation increased with increasing pin diameter: 105 N, 185 N and 225 N for unicortical fixations and 130 N, 200 N and 225 N for bicortical fixations with the 3 mm, 4 mm and 5 mm diameter pins respectively. The forces required to pull out the pins were 1475 N, 1650 N and 2050 N for unicortical and 1020 N, 3044 N and 3042 N for bicortical fixation of the 3 mm, 4 mm and 5 mm diameter pins. The ortholock was tested to 900 N in translation and 920 N in pull out without failing. The rotatory forces required to displace the tracker on single pins were of the magnitude of 30 N before failure, while the ortholock withstood rotational forces of up to 135 N without failing. The manual leaning forces and the sudden bang forces generated were of the magnitude of 210 N and 150 N respectively. The strength of the fixation pins increases with increasing diameter from three to five mm for the translational forces.
There is no significant difference between the pull out forces of the four mm and five mm diameter pins, though both exceed those of the three mm diameter pins; this reflects failure of the material at that stage rather than of the fixation device. The rotatory forces required to displace the tracker are very small, and much less than those that can be produced by the surgeon or assistants on single pins. Although the ortholock device was tested to 135 N in rotation without failing, great care must be taken not to apply forces to the tracker devices during the operation, to ensure the accuracy of the procedure.
Abstract:
Machine downtime, whether planned or unplanned, is intuitively costly to manufacturing organisations, but is often very difficult to quantify. The available literature shows that costing processes are rarely undertaken within manufacturing organisations, and where cost analyses have been undertaken, they generally value only a small proportion of the affected costs, leading to overly conservative estimates. This thesis aimed to develop a cost-of-downtime model, with particular emphasis on its application to Australia Post’s Flat Mail Optical Character Reader (FMOCR). The costing analysis determined a cost of downtime of $5,700,000 per annum, or an average cost of $138 per operational hour. The second section of this work focused on using the cost of downtime to objectively determine areas of opportunity for cost reduction on the FMOCR. This was the first time within Post that maintenance costs were considered alongside downtime in determining machine performance, and consequently the analysis revealed areas which have historically not been targeted for cost reduction. Further exploratory work was undertaken on the Flats Lift Module (FLM) and Auto Induction Station (AIS) Deceleration Belts through comparison of the results against two additional FMOCR analysis programs. This research has demonstrated the development of a methodical and quantifiable cost of downtime for the FMOCR; it is the first time Post has endeavoured to examine this cost, and one of very few methodologies for valuing downtime costs proposed in the literature. The work has also demonstrated how the cost of downtime can be incorporated into machine performance analysis, with specific application to identifying high-cost modules. The outcomes of this report are both a methodology for costing downtime and a list of areas for cost reduction.
In delivering these, this thesis has fulfilled the two key deliverables presented at the outset of the research.
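The two headline figures jointly imply the scale of operation they were derived from: $5,700,000 per annum at an average of $138 per operational hour corresponds to roughly 41,000 operational hours per year. A trivial sketch of the back-calculation (the hours figure is derived here, not reported in the thesis):

```python
annual_downtime_cost = 5_700_000   # $ per annum (reported)
cost_per_hour = 138                # $ per operational hour (reported)

# Implied annual operational hours (derived from the two reported figures)
operational_hours = annual_downtime_cost / cost_per_hour
print(round(operational_hours))    # 41304
```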
Abstract:
In this paper, cognitive load analysis via acoustic- and CAN-Bus-based driver performance metrics is employed to assess two different commercial speech dialog systems (SDS) during in-vehicle use. Several metrics are proposed to measure increases in stress, distraction and cognitive load and we compare these measures with statistical analysis of the speech recognition component of each SDS. It is found that care must be taken when designing an SDS as it may increase cognitive load which can be observed through increased speech response delay (SRD), changes in speech production due to negative emotion towards the SDS, and decreased driving performance on lateral control tasks. From this study, guidelines are presented for designing systems which are to be used in vehicular environments.
Abstract:
This research addresses problems in the field of asset management relating to risk analysis and decision making based on data from a Supervisory Control and Data Acquisition (SCADA) system. Determining risk likelihood in risk analysis is difficult, especially when historical information is unreliable; this relates to a problem in SCADA data analysis caused by nested data. A further problem is providing beneficial information from a SCADA system to a managerial-level information system (e.g. Enterprise Resource Planning, ERP). A Hierarchical Model is developed to address these problems, composed of three different analyses: Hierarchical Analysis, Failure Mode and Effect Analysis, and Interdependence Analysis. The significant contributions of the model include: (a) a new risk analysis model, namely an Interdependence Risk Analysis Model, which does not rely on the existence of historical information because it utilises interdependence relationships to determine risk likelihood; (b) improvement of the SCADA data analysis problem by addressing nested data through the Hierarchical Analysis; and (c) a framework for providing beneficial information from SCADA systems to ERP systems. A case study of a water treatment plant is used for model validation.
Abstract:
Public Private Partnership (PPP) is a new operation mode for infrastructure projects, which typically span long periods and carry various kinds of risk in technology, markets, politics, policy, finance, society, natural conditions and cooperation. The government and the private agency should therefore establish a risk-sharing mechanism to ensure the successful implementation of the project. As an important branch of the new institutional economics, transaction cost economics and its analytical methods have proved beneficial to the proper allocation of risks between the two parties in PPP projects and to the operational efficiency of the PPP risk-sharing mechanism. This paper analysed the transaction costs of the risk-sharing method and of both risk carriers. It pointed out that the risk-sharing method of PPP projects not only reflects the spirit of cooperation between the public sector and the private agency, but also minimises the total transaction cost of the risk-sharing mechanism itself. Meanwhile, risk takers must strike a balance between ex ante and ex post costs so as to control the cost of risk management. The paper finally suggested three ways to reduce transaction costs: choosing an appropriate contract type for the PPP risk-sharing mechanism, preventing information asymmetry, and establishing mutual trust between the two participants.