222 results for Misalignment
Abstract:
In an attempt to deal with the potential problems presented by existing information systems, a shift towards the implementation of ERP packages has been witnessed. The common view, particularly the one espoused by vendors, is that ERP packages are most successfully implemented when the standard model is adopted. Yet, despite this, customisation still occurs, reportedly due to misalignment between the functionality of the package and the requirements of those in the implementing organisation. However, it is recognised that systems development and organisational decision-making are activities influenced by the perspectives of the various groups and individuals involved in the process. Thus, as customisation forms part of systems development and has to be decided upon, it should be thought about in the same way. In this study, two ERP projects are used to examine different reasons why customisation might take place. These reasons are then built upon through reference to the ERP and more general packaged-software literature. The study suggests that whilst a common reason for customising ERP packages may be functionality misfits, it is important to look further into why these occur, as there are clearly other reasons for customisation stemming from the multiplicity of social groups involved in the process.
Abstract:
The parabolic trough concentrating collector is the most mature, proven and widespread technology for large-scale exploitation of solar energy in medium-temperature applications. Assessment of the opportunities and possibilities of the collector system relies on its optical performance. A reliable Monte Carlo ray tracing model of a parabolic trough collector is developed using Zemax software. The optical performance of an ideal collector depends on the solar spectral distribution and sunshape, and on the spectral selectivity of the associated components. Therefore, each step of the model, including the spectral distribution of the solar energy, trough reflectance, glazing anti-reflection coating and the absorber selective coating, is explained and verified. The radiation flux distribution around the receiver and the optical efficiency, the two basic outputs of the optical simulation, are calculated using the model and verified against a widely accepted analytical profile and measured values, respectively. Very good agreement is obtained. Further investigations are carried out to analyse the characteristics of the radiation distribution around the receiver tube at different insolation levels, envelope conditions and receiver selective coatings, and the impact of light scattered from the receiver surface on the efficiency. The model also has the capability to analyse the optical performance under variable sunshape, tracking error and collector imperfections, including absorber misalignment with the focal line, absorber defocus, different rim angles and geometric concentrations. The current optical model can play a significant role in understanding the optical aspects of a trough collector, and can be employed to extract useful information on optical performance. In the long run, this optical model will pave the way for the construction of a low-cost standalone photovoltaic-thermal hybrid collector in Australia for small-scale domestic hot water and electricity production.
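The ray-tracing idea behind such a model can be illustrated with a minimal 2D Monte Carlo sketch. This is not the paper's Zemax model: it assumes an ideal specular parabola, a crude Gaussian sunshape, and illustrative geometry and material values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative geometry (metres) and optical properties -- not the paper's values.
f = 1.71             # focal length
half_ap = 2.88       # aperture half-width
r_abs = 0.035        # absorber tube outer radius
sigma_sun = 4.65e-3  # Gaussian sunshape width (rad), a rough assumption

rho, tau, alpha = 0.94, 0.96, 0.95  # mirror reflectance, glass transmittance, coating absorptance

n_rays = 200_000
x0 = rng.uniform(-half_ap, half_ap, n_rays)  # ray position across the aperture
theta = rng.normal(0.0, sigma_sun, n_rays)   # angular spread about vertical

# Hit point on the parabola y = x^2 / (4 f); the small incidence angle is
# ignored when locating the hit point, which is adequate for sun-sized angles.
y0 = x0**2 / (4.0 * f)

# Unit surface normal at the hit point.
nx, ny = -x0 / (2.0 * f), np.ones(n_rays)
norm = np.hypot(nx, ny)
nx, ny = nx / norm, ny / norm

# Incoming direction (downward, tilted by theta) and its specular reflection.
dx, dy = np.sin(theta), -np.cos(theta)
dot = dx * nx + dy * ny
rx, ry = dx - 2.0 * dot * nx, dy - 2.0 * dot * ny

# Perpendicular distance from the focal point (0, f) to each reflected ray.
dist = np.abs((0.0 - x0) * ry - (f - y0) * rx)

gamma = np.mean(dist <= r_abs)       # intercept factor
eta_opt = gamma * rho * tau * alpha  # crude optical efficiency estimate
print(f"intercept factor ~ {gamma:.3f}, optical efficiency ~ {eta_opt:.3f}")
```

Widening the sunshape, adding a tracking-error offset to theta, or displacing the tube centre from (0, f) reproduces, in toy form, the sunshape, tracking-error and absorber-misalignment studies described above.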
Abstract:
Introduction The consistency of measuring small field output factors is greatly increased by reporting the measured dosimetric field size of each factor, as opposed to simply stating the nominal field size [1], and therefore requires the measurement of cross-axis profiles in a water tank. However, this makes output factor measurements time consuming. This project establishes at which field sizes the accuracy of output factors is not affected by the use of potentially inaccurate nominal field sizes, which we believe establishes a practical working definition of a ‘small’ field. The physical components of the radiation beam that contribute to the rapid change in output factor at small field sizes are examined in detail. The physical interaction that dominates the rapid dose reduction is quantified, leading to a theoretical definition of a ‘small’ field.

Methods Current recommendations suggest that radiation collimation systems and isocentre-defining lasers should both be calibrated to permit a maximum positioning uncertainty of 1 mm [2]. The proposed practical definition of a small field is as follows: if the output factor changes by ±1.0 % given a change in either field size or detector position of up to ±1 mm, then the field should be considered small. Monte Carlo modelling was used to simulate output factors of a 6 MV photon beam for square fields with side lengths from 4.0 to 20.0 mm in 1.0 mm increments. The dose was scored in a 0.5 mm wide and 2.0 mm deep cylindrical volume of water within a cubic water phantom, at a depth of 5 cm and an SSD of 95 cm. The maximum difference due to a collimator error of ±1 mm was found by comparing the output factors of adjacent field sizes. The output factor simulations were repeated 1 mm off-axis to quantify the effect of detector misalignment. Further simulations separated the total output factor into a collimator scatter factor and a phantom scatter factor. The collimator scatter factor was further separated into primary source occlusion effects and ‘traditional’ effects (a combination of flattening filter and jaw scatter, etc.). The phantom scatter was separated into photon scatter and electronic disequilibrium. Each of these factors was plotted as a function of field size in order to quantify how each contributed to the change in output factor at small field sizes.

Results The use of our practical definition resulted in field sizes of 15 mm or less being characterised as ‘small’. The change in field size had a greater effect than detector misalignment. For field sizes of 12 mm or less, electronic disequilibrium was found to cause the largest change in dose to the central axis (d = 5 cm). Source occlusion also caused a large change in output factor for field sizes less than 8 mm.

Discussion and conclusions The measurement of cross-axis profiles is only required for output factor measurements at field sizes of 15 mm or less (for a 6 MV beam on a Varian iX linear accelerator). This is expected to be dependent on linear accelerator spot size and photon energy. While some electronic disequilibrium was shown to occur at field sizes as large as 30 mm (the ‘traditional’ definition of a small field [3]), it does not cause a greater change than photon scatter until a field size of 12 mm, at which point it becomes by far the dominant effect.
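The proposed practical definition lends itself to a simple check over a table of output factors. The sketch below applies the ±1.0 %/±1 mm field-size criterion to placeholder values, invented so that the boundary lands near 15 mm as reported; the detector-position half of the test would additionally compare on-axis and off-axis simulations:

```python
import numpy as np

# Square field side lengths (mm) in 1 mm steps, with illustrative output
# factors -- placeholders, not the simulated data from this work.
sides = np.arange(4.0, 21.0, 1.0)
of = np.array([0.580, 0.650, 0.710, 0.760, 0.800, 0.840, 0.870, 0.890,
               0.905, 0.918, 0.930, 0.941, 0.950, 0.957, 0.963, 0.968, 0.972])

def is_small(i, of, tol=0.010):
    """A field is 'small' if a +/-1 mm change in field size (one table step)
    shifts the output factor by more than tol (1.0 %)."""
    changes = []
    if i > 0:
        changes.append(abs(of[i] - of[i - 1]) / of[i])
    if i < len(of) - 1:
        changes.append(abs(of[i + 1] - of[i]) / of[i])
    return max(changes) > tol

for i, s in enumerate(sides):
    print(f"{s:4.0f} mm field: {'small' if is_small(i, of) else 'not small'}")
```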
Abstract:
The literature around Library 2.0 remains largely theoretical, with few empirical studies, and is particularly limited in developing countries such as Indonesia. This study addresses this gap and aims to provide information about the current state of knowledge on Indonesian LIS professionals’ understanding of Library 2.0. The researchers used qualitative and quantitative approaches for this study, asking thirteen closed- and open-ended questions in an online survey. The researchers used descriptive and in vivo coding to analyze the responses. Through their analysis, they identified three themes: technology, interactivity, and awareness of Library 2.0. Respondents demonstrated awareness of Library 2.0 and a basic understanding of the roles of interactivity and technology in libraries. However, overreliance on the technology used in libraries to conceptualize Library 2.0, without an emphasis on its core characteristics and principles, could lead to the misalignment of limited resources. The study results will potentially strengthen the research base for Library 2.0 practice, as well as inform the LIS curriculum in Indonesia, so as to develop practitioners who are able to adapt to users’ changing needs and expectations. It is expected that the preliminary data from this study could be used to design a much larger and more complex future research project in this area.
Abstract:
Legacy information systems evolved incrementally in response to changes in business strategy and information technology. Organizations are now being forced to change much more radically and quickly than before, and this change places new demands on information systems. Legacy information systems are usually considered from a technical perspective, addressing issues such as age, complexity, maintainability, design and technology. We wish to demonstrate that the business dimension of legacy information systems, represented by the organisation structure, business processes and procedures that are bound up in the design and operation of the existing IT systems, is also significant. This paper identifies the important role of legacy information systems in the formation of new strategies. We show that the move away from a stable to an unstable business environment accelerates the rate of change. Furthermore, the gap between what the legacy information systems can deliver and the strategic vision of the organization widens when the legacy information systems are unable to adapt to meet the new requirements. An analysis of fifteen case studies provides evidence that legacy information systems include business and technical dimensions, and that the systems can present problems when there is a misalignment between the strategic vision of the business, the IT legacy and the old business model embodied in the legacy.
Abstract:
In recent years, research aimed at identifying and relating the antecedents and consequences of diffusing organizational practices/ideas has turned its attention to debating the international adoption and implementation of the Anglo-American model of corporate governance, i.e., a shareholder-value orientation (SVO). While financial economists characterize the adoption of an SVO as necessary and performance-enhancing, behavioral scientists have disputed such claims, invoking institutional contingencies in the appropriateness of an SVO. Our study seeks to provide some resolution to the debate by developing an overarching socio-political perspective that links the antecedents and consequences of the adoption of the contested practice of SVO. We test our framework using extensive longitudinal data from 1992 to 2006 on the largest listed corporations in the Netherlands, and we find a negative relationship between SVO adoption and subsequent firm performance, although this effect is attenuated when accompanied by greater SVO alignment among major owners and a firm’s visible commitment to an SVO. This study extends prior research on the diffusion of contested organizational practices that has taken a socio-political perspective by offering an original contingency perspective that addresses how and why the misaligned preferences of corporate owners affect (i) a company’s inclination to espouse an SVO, and (ii) the performance consequences of such misalignment. The study suggests that when board members are considering the adoption of new ideas/practices (e.g., an SVO), they should consider the contextual fit of the idea/practice with the firm’s owners and their interests.
Abstract:
Achieving high efficiency with improved power transfer range and misalignment tolerance is the major design challenge in realizing Wireless Power Transfer (WPT) systems for industrial applications. Resonant coils must be carefully designed to achieve the highest possible system performance by fully utilizing the available space. High quality factor and enhanced electromagnetic coupling are the key indices that determine system performance. In this paper, design parameter extraction and quality factor optimization of multi-layered helical coils are presented using finite element analysis (FEA) simulations. In addition, a novel Toroidal Shaped Spiral (TSS) coil is proposed to increase power transfer range and misalignment tolerance. The proposed shapes and recommendations can be used to design high-efficiency WPT resonators in a limited space.
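The role of the quality factor and coupling coefficient can be made concrete with the standard figure of merit for a resonant two-coil link. The sketch below uses textbook relations with invented coil parameters, not values from the paper's FEA results:

```python
import math

# Illustrative two-coil resonant link parameters -- not from the paper.
f0 = 85e3         # resonant frequency (Hz)
L1 = L2 = 120e-6  # coil self-inductances (H)
R1 = R2 = 0.15    # effective series resistances (ohm)
M = 6e-6          # mutual inductance (H); falls as misalignment grows

w0 = 2 * math.pi * f0
Q1, Q2 = w0 * L1 / R1, w0 * L2 / R2  # unloaded quality factors
k = M / math.sqrt(L1 * L2)           # coupling coefficient

# Classical maximum link efficiency of a resonant two-coil system,
# governed by the figure of merit k * sqrt(Q1 * Q2).
fom = k * math.sqrt(Q1 * Q2)
eta_max = fom**2 / (1.0 + math.sqrt(1.0 + fom**2))**2

print(f"Q1 = Q2 = {Q1:.0f}, k = {k:.3f}, kQ = {fom:.1f}, eta_max = {eta_max:.1%}")
```

Since misalignment reduces the mutual inductance M, and hence k, sweeping M downward in this sketch traces how the achievable efficiency degrades; improving that curve is precisely the tolerance the proposed TSS coil targets.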
Abstract:
Budgeting is an important means of controlling one's finances and reducing debt. This paper outlines our work towards designing more user-centred technology for individual and household budgeting. Based on an ethnographically informed study with 15 participants, we highlight a misalignment between people's actual budgeting practices and those supported by off-the-shelf budgeting aids. In addressing this misalignment we outline three tenets that may be incorporated into future work in this area: (1) catering for the different phases of engagement with technology; (2) catering for the practices of hiding and limiting access to money; and (3) integrating materiality into technical solutions.
Abstract:
In the field of face recognition, sparse representation (SR) has received considerable attention during the past few years, with a focus on holistic descriptors in closed-set identification applications. The underlying assumption in such SR-based methods is that each class in the gallery has sufficient samples and the query lies on the subspace spanned by the gallery of the same class. Unfortunately, such an assumption is easily violated in the face verification scenario, where the task is to determine if two faces (where one or both have not been seen before) belong to the same person. In this study, the authors propose an alternative approach to SR-based face verification, where SR encoding is performed on local image patches rather than the entire face. The obtained sparse signals are pooled via averaging to form multiple region descriptors, which then form an overall face descriptor. Owing to the deliberate loss of spatial relations within each region (caused by averaging), the resulting descriptor is robust to misalignment and various image deformations. Within the proposed framework, they evaluate several SR encoding techniques: l1-minimisation, Sparse Autoencoder Neural Network (SANN) and an implicit probabilistic technique based on Gaussian mixture models. Thorough experiments on AR, FERET, exYaleB, BANCA and ChokePoint datasets show that the local SR approach obtains considerably better and more robust performance than several previous state-of-the-art holistic SR methods, on both the traditional closed-set identification task and the more applicable face verification task. The experiments also show that l1-minimisation-based encoding has a considerably higher computational cost when compared with SANN-based and probabilistic encoding, but leads to higher recognition rates.
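A minimal sketch of the local-patch pipeline is given below, with a random dictionary standing in for a learned one and plain iterative soft thresholding (ISTA) as the l1-minimisation encoder; the patch size, region grid and pooling of absolute code values are illustrative choices rather than the authors' exact configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def ista(D, x, lam=0.1, n_iter=100):
    """l1 sparse coding of signal x on dictionary D (columns are atoms):
    min_a 0.5 * ||x - D a||^2 + lam * ||a||_1 via iterative soft thresholding."""
    L = np.linalg.norm(D, 2) ** 2  # Lipschitz constant of the smooth part
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        a = a - D.T @ (D @ a - x) / L                           # gradient step
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)   # soft threshold
    return a

def face_descriptor(img, D, patch=8, step=4, grid=4):
    """Sparse-code overlapping local patches, then average-pool the (absolute)
    codes within each cell of a grid x grid region layout."""
    H, W = img.shape
    rh, rw = H // grid, W // grid
    pooled = np.zeros((grid, grid, D.shape[1]))
    counts = np.zeros((grid, grid, 1))
    for i in range(0, H - patch + 1, step):
        for j in range(0, W - patch + 1, step):
            p = img[i:i + patch, j:j + patch].ravel().astype(float)
            p -= p.mean()                        # remove DC component
            n = np.linalg.norm(p)
            code = np.abs(ista(D, p / n)) if n > 0 else np.zeros(D.shape[1])
            r, c = min(i // rh, grid - 1), min(j // rw, grid - 1)
            pooled[r, c] += code
            counts[r, c] += 1
    pooled /= np.maximum(counts, 1)  # average pooling discards spatial detail
    return pooled.ravel()            # overall face descriptor

# Toy usage: a random 32x32 'face' and a random unit-norm dictionary.
D = rng.normal(size=(64, 128))
D /= np.linalg.norm(D, axis=0)
desc = face_descriptor(rng.random((32, 32)), D)
print(desc.shape)  # (4 * 4 * 128,) region-pooled sparse codes
```

Because the pooling step deliberately discards patch positions within each region, small translations of the face change the pooled codes only gradually, which is the source of the misalignment robustness described above.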
Abstract:
Background Medication safety is a pressing concern for residential aged care facilities (RACFs). Retrospective studies in RACF settings identify inadequate communication between RACFs, doctors, hospitals and community pharmacies as the major cause of medication errors. The existing literature offers limited insight into the gaps in the existing information exchange process that may lead to medication errors. The aim of this research was to explicate the cognitive distribution that underlies RACF medication ordering and delivery, and to identify gaps in medication-related information exchange that lead to medication errors in RACFs.

Methods The study was undertaken in three RACFs in Sydney, Australia. Data were generated through ethnographic fieldwork over a period of five months (May–September 2011). Triangulated analysis of the data focused primarily on examining the transformation and exchange of information between different media across the process.

Results The findings of this study highlight the extensive scope and intense nature of information exchange in RACF medication ordering and delivery. Rather than attributing error to individual care providers, the explication of distributed cognition processes enabled the identification of gaps in three information exchange dimensions that potentially contribute to the occurrence of medication errors, namely: (1) the design of medication charts, which complicates order processing and record keeping; (2) the lack of coordination mechanisms between participants, which results in misalignment of local practices; and (3) reliance on restricted-bandwidth communication channels, mainly telephone and fax, which complicates the information processing requirements. The study demonstrates how the identification of these gaps enhances understanding of medication errors in RACFs.

Conclusions Application of the theoretical lens of distributed cognition can enhance our understanding of medication errors in RACFs through the identification of gaps in information exchange. Understanding the dynamics of the cognitive process can inform the design of interventions to manage errors and improve residents’ safety.
Abstract:
The dissertation examines how emotional experiences are oriented to in the details of psychotherapeutic interaction. The data (57 audio-recorded sessions) come from one therapist-patient dyad in cognitive psychotherapy. Conversation analysis is used as the method. The dissertation consists of four original articles and a summary. The analyses explicate the therapist's practices of responding to the patient's affective expressions. Different types of affiliating responses are identified. It is shown that the affiliating responses are combined with, or build grounds for, more interpretive and challenging actions. The study also includes a case study of a session with strong misalignment between the therapist's and patient's orientations, showing how this misalignment is managed by the therapist. Moreover, through a longitudinal analysis of the transformation of a sequence type, the study suggests that therapeutic change processes can be located in the sequential relations of actions. The practices found in this study are compared to earlier research on everyday talk and on medical encounters. It is suggested that in psychotherapeutic interaction, the generic norms of interaction concerning affiliation and epistemic access are modified for the purposes of therapeutic work. The study also shows that the practices of responding to emotional experience in psychotherapy can deviate from the everyday practices of affiliation. The results of the study are also discussed in terms of concepts arising from clinical theory, including empathy, validation of emotion, therapeutic alliance, interpretation, challenging beliefs, and therapeutic change. The therapist's approach described in this study involves a practical integration of different clinical theories. In general terms, the study suggests that in the details of interaction, psychotherapy recurrently performs a dual task of empathy and challenging in relation to the patient's ways of describing their experiences. Methodologically, the study discusses the problem of identifying actions in conversation analysis of psychotherapy and emotional interaction, and the possibility of applying conversation analysis to the study of therapeutic change.
Abstract:
Infrared Earth sensors are used in spacecraft for attitude sensing. Their accuracy is limited by systematic and random errors. Dominant sources of systematic error are analyzed for a typical scanning infrared Earth sensor used in a remote-sensing satellite in a 900-km sun-synchronous orbit. The errors considered arise from 1) seasonal variation of infrared radiation, 2) the oblate shape of the Earth, 3) the ambient temperature of the sensors, 4) changes in the spin/scan period, and 5) misalignment of the axis of the sensors. Simple relations are derived using least-squares curve fitting for onboard correction of these errors. With these, it is possible to improve the accuracy of attitude determination eightfold and achieve performance comparable to ground-based post-facto attitude computation.
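The least-squares correction idea can be sketched for a single error source. Below, a synthetic seasonal roll-error signal (invented amplitude, phase and noise level) is fitted with a first-order harmonic in day of year and subtracted, mimicking the style of onboard correction described above:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic systematic roll error (deg) vs day of year, standing in for the
# seasonal infrared-radiance effect; all numbers here are invented.
day = np.arange(1, 366)
truth = 0.12 * np.sin(2 * np.pi * day / 365.25 + 0.8) + 0.03
measured = truth + rng.normal(0.0, 0.01, day.size)

# Least-squares fit of e(d) = a0 + a1 sin(w d) + a2 cos(w d).
w = 2 * np.pi * day / 365.25
A = np.column_stack([np.ones_like(w), np.sin(w), np.cos(w)])
coef, *_ = np.linalg.lstsq(A, measured, rcond=None)

corrected = measured - A @ coef  # the 'simple relation' applied onboard
rms_before = np.sqrt(np.mean(measured**2))
rms_after = np.sqrt(np.mean(corrected**2))
print(f"rms error before: {rms_before:.4f} deg, after: {rms_after:.4f} deg")
```

On this toy data the residual shrinks by roughly an order of magnitude, in the same spirit as the eightfold improvement reported for the combined corrections.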
Abstract:
The existence of an icosahedral phase in Mg−Al−Ag is better understood on a crystallographic basis than on a quantum structural diagram basis. The quasicrystalline structure is delineated in terms of a quasiperiodic arrangement of Pauling triacontahedra, which can be identified in the equilibrium structure. Subtle differences in the electron diffraction patterns have been recorded compared to the ideal quasicrystalline pattern. The misalignment of spots and the distortions are better attributed to a higher-order rational approximant structure than to anisotropic phason strain. Arcs of diffuse intensity have been related to ordering among the atoms in the clusters.
Abstract:
Trajectory optimization of a generic launch vehicle is considered in this paper. The trajectory from the launch point to the terminal injection point is divided into two segments. The first segment deals with launcher clearance and the vertical rise of the vehicle. During this phase, a nonlinear feedback guidance loop is incorporated to assure vertical rise in the presence of thrust misalignment, centre-of-gravity offset, wind disturbance, etc., and possibly to clear obstacles as well. The second segment deals with the trajectory optimization proper, where the objective is to ensure the desired terminal conditions as well as minimum control effort and minimum structural loading in the high-dynamic-pressure region. The usefulness of this dynamic optimization formulation is demonstrated by solving it using the classical gradient method. Numerical results for both segments are presented, which clearly bring out the potential advantages of the proposed approach.
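The classical gradient method for the second segment can be illustrated on a toy problem. The sketch below swaps the launch-vehicle dynamics for a double integrator, handles the terminal conditions with a quadratic penalty, and uses a finite-difference gradient; every weight and dimension is invented for illustration:

```python
import numpy as np

# Toy stand-in for the trajectory segment: steer a double integrator from rest
# to a desired terminal state with minimum control effort. The real problem
# adds vehicle dynamics and a dynamic-pressure load term to the cost.
N, dt = 100, 0.1
x_target, v_target = 10.0, 0.0  # desired terminal position and velocity
w_term = 100.0                  # terminal-condition penalty weight

def simulate(u):
    v = np.cumsum(u) * dt  # velocity history
    x = np.cumsum(v) * dt  # position history
    return x[-1], v[-1]

def cost(u):
    x, v = simulate(u)
    effort = 0.5 * np.sum(u**2) * dt
    return effort + 0.5 * w_term * ((x - x_target)**2 + (v - v_target)**2)

def grad(u, eps=1e-6):
    """Finite-difference gradient; a serious implementation would use the
    adjoint equations instead."""
    g = np.zeros_like(u)
    c0 = cost(u)
    for k in range(len(u)):
        up = u.copy()
        up[k] += eps
        g[k] = (cost(up) - c0) / eps
    return g

u = np.zeros(N)  # initial guess: no control
step = 1e-4      # fixed descent step, chosen small enough to be stable
for _ in range(3000):
    u -= step * grad(u)

x, v = simulate(u)
print(f"terminal position {x:.3f} (target {x_target}), velocity {v:.3f}")
```

The descent loop is the whole of the classical gradient method; the sophistication in the paper lies in the dynamics, the cost terms and the gradient computation rather than in the update rule itself.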