913 results for Drilling process monitoring
Abstract:
Due to an ever-increasing demand for more frequent and higher-volume train services, the physical condition of tracks in modern railways is deteriorating more quickly than that of tracks built decades ago. Incidents in both the UK and Hong Kong indicate the need for more stringent checks on rail condition using a suitable and effective non-invasive, non-destructive condition monitoring system.
Abstract:
The demand for high-quality rail services in the twenty-first century has placed ever-increasing demands on all rail operators. In order to meet the expectations of their patrons, the maintenance regime of railway systems has to be tightened up, track conditions have to be well looked after, and the rolling stock must be designed to withstand heavy duty. In short, in an ideal world where resources are unlimited, one would implement a very rigorous inspection regime in order to take care of the modern needs of a railway system [1]. If cost were not an issue, maintenance engineers could inspect the train body with the most up-to-date techniques, such as ultrasound examination, X-ray inspection and magnetic particle inspection, on a regular basis. However, it is inconceivable to have such a perfect maintenance regime in any commercial railway. Likewise, it is impossible to have perfect rolling stock which can weather all the heavy duties experienced in a modern railway. Hence it is essential that condition monitoring schemes are devised to pick up potential defects which could develop into safety hazards. This paper introduces an innovative condition monitoring system for track profile which, together with an instrumented car to carry out surveillance of the track, will provide a comprehensive railway condition monitoring system free from the usual electromagnetic compatibility difficulties of a typical railway environment.
Abstract:
The acoustic emission (AE) technique is one of the popular diagnostic techniques used for structural health monitoring of mechanical, aerospace and civil structures, but several challenges still exist in its successful application. This paper explores various tools for analysing recorded AE data to address two primary challenges: discriminating spurious signals from genuine signals and devising ways to quantify damage levels.
Abstract:
Research investigating the transactional approach to the work stressor-employee adjustment relationship has described many negative main effects between perceived stressors in the workplace and employee outcomes. A considerable amount of literature, theoretical and empirical, also describes potential moderators of this relationship. Organizational identification has been established as a significant predictor of employee job-related attitudes. To date, research has neglected investigation of the potential moderating effect of organizational identification in the work stressor-employee adjustment relationship. On the basis of identity, subjective fit and sense of belonging literature it was predicted that higher perceptions of identification at multiple levels of the organization would mitigate the negative effect of work stressors on employee adjustment. It was expected, further, that more proximal, lower order identifications would be more prevalent and potent as buffers of stressors on strain. Predictions were tested with an employee sample from five organizations (N = 267). Hierarchical moderated multiple regression analyses revealed some support for the stress-buffering effects of identification in the prediction of job satisfaction and organizational commitment, particularly for more proximal (i.e., work unit) identification. These positive stress-buffering effects, however, were present for low identifiers in some situations. The present study represents an extension of the application of organizational identity theory by identifying the effects of organizational and workgroup identification on employee outcomes in the nonprofit context. Our findings will contribute to a better understanding of the dynamics in nonprofit organizations and therefore contribute to the development of strategy and interventions to deal with identity-based issues in nonprofits.
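For readers unfamiliar with the analysis named above, the following is a minimal sketch of a hierarchical moderated regression with an interaction term, using hypothetical column names (stressor, work_unit_ident, job_satisfaction); it is not the authors' code or data.

```python
# Minimal sketch of a moderated (interaction) regression of the kind the
# abstract describes; column names are hypothetical, not the study's data.
import pandas as pd
import statsmodels.formula.api as smf

def moderation_test(df: pd.DataFrame) -> None:
    """Step 1: main effects only; step 2: add the stressor x identification
    interaction, mirroring a hierarchical moderated regression."""
    # Mean-centre predictors so the interaction term is interpretable.
    for col in ("stressor", "work_unit_ident"):
        df[col + "_c"] = df[col] - df[col].mean()

    step1 = smf.ols("job_satisfaction ~ stressor_c + work_unit_ident_c", data=df).fit()
    step2 = smf.ols("job_satisfaction ~ stressor_c * work_unit_ident_c", data=df).fit()

    # A significant interaction coefficient (and an R^2 increase from step 1
    # to step 2) would indicate a stress-buffering (moderation) effect.
    print(step1.rsquared, step2.rsquared)
    print(step2.params["stressor_c:work_unit_ident_c"])
```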
Abstract:
This paper discusses diesel engine condition monitoring (CM) using acoustic emission (AE), as well as some commonly encountered diesel engine problems. Also discussed are some of the underlying combustion-related faults and the methods used in past studies to simulate diesel engine faults. The initial test involved an experimental simulation of two common combustion-related diesel engine faults, namely diesel knock and misfire. These simulated faults represent the first step towards a comprehensive investigation and analysis of the characteristics of acoustic emission signals arising from combustion-related diesel engine faults. Data corresponding to different engine running conditions were captured using in-cylinder pressure, vibration and acoustic emission transducers, along with both crank angle encoder and top-dead centre (TDC) signals. Using these signals, it was possible to characterise the effect of different combustion conditions and hence various diesel engine in-cylinder pressure profiles.
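The abstract mentions capturing AE, vibration and in-cylinder pressure alongside crank angle encoder and TDC signals. A common preprocessing step in this kind of work, sketched below under assumed variable names and encoder resolution, is to resample the time-domain signals onto a uniform crank-angle axis so that cycles and combustion events can be compared directly; the paper's actual processing chain may differ.

```python
# Illustrative resampling of a time-domain signal (e.g. AE or cylinder
# pressure) onto a crank-angle axis using encoder pulse times; the names and
# encoder resolution are assumptions, not the paper's setup.
import numpy as np

def to_crank_angle_domain(t, signal, encoder_pulse_times,
                          pulses_per_rev=360, tdc_time=0.0, samples_per_deg=1):
    """Map signal(t) to signal(theta), referenced to top dead centre."""
    # Crank angle (degrees) at each encoder pulse, zeroed at the TDC pulse.
    tdc_index = np.searchsorted(encoder_pulse_times, tdc_time)
    pulse_angles = (np.arange(len(encoder_pulse_times)) - tdc_index) * (360.0 / pulses_per_rev)
    # Crank angle at every sample time, interpolated between encoder pulses.
    theta_of_t = np.interp(t, encoder_pulse_times, pulse_angles)
    # Uniform crank-angle grid and the resampled signal on that grid.
    theta_grid = np.arange(theta_of_t[0], theta_of_t[-1], 1.0 / samples_per_deg)
    return theta_grid, np.interp(theta_grid, theta_of_t, signal)
```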
Abstract:
This paper presents early results from a pilot project which aims to investigate the relationship between the proprietary structure of small and medium-sized Italian family firms and their owners' orientation towards a "business evaluation process". Evidence from many studies points to the importance of family business in the worldwide economic environment: in Italy 93% of businesses are family firms, and 98% of them have fewer than 50 employees (Italian Association of Family Firms, 2004), so we judged family SMEs to be a relevant field of investigation. In this study we assume a broad definition of family business as "a firm whose control (50% of shares or voting rights) is closely held by the members of the same family" (Corbetta, 1995). "Business evaluation process" is intended here either as a "continuous evaluation process" (the expression of a well-developed managerial attitude) or as an "immediate valuation" (e.g. in the case of a new shareholder's entrance, a share exchange among siblings, etc.). We set two hypotheses to be tested in this paper. The first is "quantitative" and aims to verify whether the number of owners (independent variable) in a family firm is positively correlated with the business evaluation process. If a family firm is led by only one subject, it is more likely that personal values, culture and feelings affect the owner's choices more than "purely economic opportunities", so there is less concern about monitoring economic performance or about the economic value of the firm. As the number of shareholders increases, economic aspects of managing the firm grow in importance over personal values and "value orientation" acquires a central role. The second hypothesis investigates whether, and to what extent, the presence of "non-family members" among the owners affects their orientation towards the business evaluation process. Cramer's V test was used to test the hypotheses; neither was confirmed by these early results. The next steps will be an inferential analysis of a representative sample of the population.
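As an illustration of the Cramer's V test mentioned above, the sketch below computes the statistic from a contingency table using scipy; the example table is purely hypothetical and is not the study's data.

```python
# Cramer's V for a contingency table; the example table is illustrative only.
import numpy as np
from scipy.stats import chi2_contingency

def cramers_v(table: np.ndarray) -> float:
    chi2, _, _, _ = chi2_contingency(table)
    n = table.sum()
    r, c = table.shape
    return float(np.sqrt(chi2 / (n * (min(r, c) - 1))))

# Rows: number of owners (1, 2-3, 4+); columns: evaluation orientation (no, yes).
example = np.array([[30, 10],
                    [25, 15],
                    [20, 20]])
print(cramers_v(example))  # values near 0 suggest a weak association
```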
Abstract:
Business process model repositories capture precious knowledge about an organization or a business domain. In many cases, these repositories contain hundreds or even thousands of models and they represent several man-years of effort. Over time, process model repositories tend to accumulate duplicate fragments, as new process models are created by copying and merging fragments from other models. This calls for methods to detect duplicate fragments in process models that can be refactored as separate subprocesses in order to increase readability and maintainability. This paper presents an indexing structure to support the fast detection of clones in large process model repositories. Experiments show that the algorithm scales to repositories with hundreds of models. The experimental results also show that a significant number of non-trivial clones can be found in process model repositories taken from industrial practice.
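As a much-simplified illustration of the idea of indexing fragments for clone detection (not the paper's actual indexing structure), the sketch below hashes a canonical representation of each fragment so that identical fragments from different models fall into the same bucket.

```python
# Much-simplified sketch: index process-model fragments by a canonical hash so
# that identical (clone) fragments across models share a bucket. The data
# structures and hashing scheme are illustrative assumptions.
import hashlib
from collections import defaultdict

def fragment_key(nodes, edges):
    """Canonical string for a fragment: sorted node labels plus sorted edges."""
    canon = "|".join(sorted(nodes)) + "#" + "|".join(f"{a}->{b}" for a, b in sorted(edges))
    return hashlib.sha1(canon.encode()).hexdigest()

class CloneIndex:
    def __init__(self):
        self._buckets = defaultdict(list)  # key -> [(model_id, fragment_id)]

    def add(self, model_id, fragment_id, nodes, edges):
        self._buckets[fragment_key(nodes, edges)].append((model_id, fragment_id))

    def clones(self):
        """Fragment groups occurring more than once, i.e. clone candidates."""
        return [group for group in self._buckets.values() if len(group) > 1]
```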
Abstract:
This study investigated preservice teachers' perceptions of teaching and sustaining gifted and talented students while developing, modifying and implementing activities to cater for the diverse learner. Participants were surveyed at the end of a gifted and talented education program on their perceptions of differentiating the curriculum to meet student needs (n=22). SPSS analysis of the five-point Likert scale data indicated these preservice teachers agreed or strongly agreed they had developed skills in curriculum planning (91%) with well-designed activities (96%), and lesson preparation skills (96%). They also claimed they were enthusiastic about teaching (91%) and understood school practices and policies (96%). However, only 46% agreed they had knowledge of syllabus documents, with 50% claiming an ability to provide written feedback on students' learning. Furthermore, nearly two-thirds suggested they had acquired educational language from the syllabus and effective student management strategies. Preservice teachers require more direction on how to cater for diversity and begin creating sustainable societies by building knowledge from direct GAT experiences. Designing diagnostic surveys associated with university coursework can help determine areas for further, specific preservice teacher development in GAT education. Preservice teachers need to create opportunities for students to realise their potential through cognitive challenges in a differentiated curriculum. Differentiation requires modification of four primary areas of curriculum development (Maker, 1975): content (what we teach), process (how we teach), product (what we expect the students to do or show) and learning environment (where we teach/our class culture). Ashman and Elkins (2009) and Glasson (2008) emphasise the need for preservice teachers, teachers and other professionals to be able to identify what gifted and talented (GAT) students know and how they learn in relation to effective teaching. Glasson (2008) recommends that educators keep up to date with practices in pedagogy, support, monitoring and profiling of GAT students to create an environment conducive to achieving. Oral feedback is one method of communicating with learners about their progress, but it has advantages and disadvantages for some students. Oral feedback provides immediate information to the student on progress and performance (Ashman & Elkins, 2009). However, preservice teachers must have a clear understanding of key concepts to assist the GAT student. Implementing teaching strategies to engage, innovate and extend students is valuable to the preservice teacher in focusing on GAT student learning in the classroom (Killen, 2007). Practical teaching strategies (Harris & Hemming, 2008; Tomlinson et al., 1994) facilitate diverse ways of assisting GAT students to achieve learning outcomes. Such strategies include activities to enhance creativity, co-operative learning and problem-solving activities (Chessman, 2005; NSW Department of Education and Training, 2004; Taylor & Milton, 2006) for GAT students to develop a sense of identity, belonging and self-esteem towards becoming autonomous learners. Preservice teachers need to understand that GAT students learn in a different way and therefore should be assessed differently. Assessment can take diverse forms that allow students to demonstrate their competence and their understanding of the material in a way that highlights their natural abilities (Glasson, 2008; Mack, 2008).
Preservice teachers are often unprepared to assess students' understanding, but this may be overcome with teacher education training promoting effective communication and collaboration in the classroom, including the provision of a variety of assessment strategies to improve teaching and learning (Callahan et al., 2003; Tomlinson et al., 1994). It is also critical that preservice teachers have enthusiasm for teaching to demonstrate inclusion, involvement and the excitement of communicating with GAT students in the learning process (Baum, 2002). Evaluating and reflecting on teaching practices must be part of a preservice teacher's repertoire for GAT education. Evaluating teaching practices can help to further enhance student learning (Mayer, 2008). Evaluation gauges the success or otherwise of specific activities and of teaching in general (Mayer, 2008), and ensures that preservice teachers and teachers are well prepared and maintain their commitment to their students and the community. Long and Harris (1999) advocate that reflective practices assist teachers in creating improvements in educational practices. Reflective practices help preservice teachers and teachers to improve their ability to pursue improved learning outcomes and professional growth (Long & Harris, 1999). Context: This study is set at a small regional campus of a large university in Queensland. As a way to address departmental policies and the need to prepare preservice teachers for engaging a diverse range of learners (see Queensland College of Teachers, Professional Standards for Teachers, 2006), preservice teachers at this campus completed four elective units within their Bachelor of Education (primary) degree. The electives include: 1. Middle years students and schools; 2. Teaching strategies for engaging learners; 3. Teaching students with learning difficulties; and 4. Middle-years curriculum, pedagogy and assessment. In the university-based component of this unit, preservice teachers engaged in learning about middle years students and schools, and gained knowledge of government policies pertaining to GAT students. Further explored within this unit was the importance of: collaboration between teachers, parents/carers and school personnel in supporting middle years GAT students; incorporating challenging learning experiences that promoted higher-order thinking and problem-solving skills; real-world learning experiences for students; and the alignment and design of curriculum, pedagogy and assessment relevant to the students' development, interests and needs. The participants were third-year Bachelor of Education (primary) preservice teachers who were completing an elective unit as part of the middle years of schooling learning with a focus on GAT students. They were each assigned one student from a local school. In the six subsequent ninety-minute weekly lessons, the preservice teachers were responsible for designing learning activities that would engage and extend the GAT students. Furthermore, preservice teachers made decisions about suitable pedagogical approaches and designed the assessment task to align with the curriculum and the developmental needs of their middle years GAT student. This research aims to describe preservice teachers' perceptions of their education for teaching gifted and talented students.
Abstract:
As organizations reach higher levels of Business Process Management maturity, they tend to accumulate large collections of process models. These repositories may contain thousands of activities and be managed by different stakeholders with varying skills and responsibilities. However, while being of great value, these repositories induce high management costs. Thus, it becomes essential to keep track of the various model versions as they may mutually overlap, supersede one another and evolve over time. We propose an innovative versioning model and associated storage structure, specifically designed to maximize sharing across process model versions, and to automatically handle change propagation. The focal point of this technique is to version single process model fragments, rather than entire process models. Indeed empirical evidence shows that real-life process model repositories have numerous duplicate fragments. Experiments on two industrial datasets confirm the usefulness of our technique.
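To illustrate the fragment-level versioning idea in the abstract, the sketch below uses a content-addressed store in which each model version is a list of fragment hashes, so fragments that are unchanged between versions are stored once and shared. This mirrors the spirit of the approach only; it is not the paper's storage structure.

```python
# Minimal content-addressed sketch of fragment-level versioning: each model
# version references fragment hashes, so unchanged fragments are shared
# between versions rather than duplicated. Illustrative only.
import hashlib

class FragmentStore:
    def __init__(self):
        self._fragments = {}   # hash -> serialized fragment (stored once)
        self._versions = {}    # (model_id, version) -> list of fragment hashes

    def _put(self, serialized: str) -> str:
        h = hashlib.sha1(serialized.encode()).hexdigest()
        self._fragments.setdefault(h, serialized)   # shared across versions
        return h

    def commit(self, model_id: str, version: int, fragments: list[str]) -> None:
        self._versions[(model_id, version)] = [self._put(f) for f in fragments]

    def shared_fragments(self, model_id: str, v1: int, v2: int) -> set[str]:
        """Fragments reused unchanged between two versions of a model."""
        return set(self._versions[(model_id, v1)]) & set(self._versions[(model_id, v2)])
```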
Abstract:
Recent years have seen an increased uptake of business process management technology in industries. This has resulted in organizations trying to manage large collections of business process models. One of the challenges facing these organizations concerns the retrieval of models from large business process model repositories. For example, in some cases new process models may be derived from existing models, thus finding these models and adapting them may be more effective and less error-prone than developing them from scratch. Since process model repositories may be large, query evaluation may be time consuming. Hence, we investigate the use of indexes to speed up this evaluation process. To make our approach more applicable, we consider the semantic similarity between labels. Experiments are conducted to demonstrate that our approach is efficient.
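As a rough illustration of an index that also matches semantically similar labels (not the paper's method), the sketch below builds an inverted index over normalized label tokens and expands query tokens through a small, hypothetical synonym map.

```python
# Sketch of an inverted index over activity labels that also retrieves models
# with semantically similar labels via a synonym map; both the map and the
# matching rule are illustrative assumptions.
from collections import defaultdict

SYNONYMS = {"check": {"verify", "validate"}, "invoice": {"bill"}}

def expand(token: str) -> set[str]:
    out = {token}
    for key, values in SYNONYMS.items():
        if token == key or token in values:
            out |= {key} | values
    return out

class LabelIndex:
    def __init__(self):
        self._index = defaultdict(set)   # token -> model ids

    def add(self, model_id: str, labels: list[str]) -> None:
        for label in labels:
            for tok in label.lower().split():
                self._index[tok].add(model_id)

    def query(self, label: str) -> set[str]:
        """Models containing a label token equal or similar to a query token."""
        hits = set()
        for tok in label.lower().split():
            for candidate in expand(tok):
                hits |= self._index.get(candidate, set())
        return hits
```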
Abstract:
Business process models are becoming available in large numbers due to their popular use in many industrial applications such as enterprise and quality engineering projects. On the one hand, this raises a challenge as to their proper management: how can it be ensured that the proper process model is always available to the interested stakeholder? On the other hand, the richness of a large set of process models also offers opportunities, for example with respect to the re-use of existing model parts for new models. This paper describes the functionalities and architecture of an advanced process model repository, named APROMORE. This tool brings together a rich set of features for the analysis, management and usage of large sets of process models, drawing from state-of-the-art research in the field of process modeling. A prototype of the platform is presented in this paper to demonstrate its feasibility, together with an outlook on the further development of APROMORE.
Abstract:
Climate change, which is largely caused by human activities, is becoming increasingly apparent; among those activities are asset management processes for property and infrastructure, from planning to disposal. One essential component of the asset management process is asset identification. The aims of the study are to identify the information needed for asset identification and inventory, as one part of the public asset management process, in addressing the climate change issue, and to examine its deliverability in local governments of developing countries. To achieve these aims, this study employs a case study in Indonesia, discussing one medium-sized provincial government. The information was gathered through interviews with local government representatives in South Sulawesi Province, Indonesia, and through analysis of documents provided by the interview participants. The study found that, for local government, improving the system for managing assets is one of the biggest emerging challenges. Having the right information in the right place at the right time is a critical factor in responding to this challenge. Therefore, asset identification, as the frontline step in a public asset management system, plays an important and critical role. Furthermore, an asset identification system should be developed to support the mainstreaming of adaptation to climate change vulnerability and to help local government officers to be environmentally sensitive. Finally, findings from this study provide useful input for policy makers, scholars and asset management practitioners in developing an asset inventory system as a part of the public asset management process for addressing climate change.
Abstract:
The present rate of technological advance continues to place significant demands on data storage devices. The sheer amount of digital data being generated each year, along with consumer expectations, fuels these demands. At present, most digital data is stored magnetically, in the form of hard disk drives or on magnetic tape. The increase in areal density (AD) of magnetic hard disk drives over the past 50 years has been of the order of 100 million times, and current devices are storing data at ADs of the order of hundreds of gigabits per square inch. However, it has been known for some time that the progress in this form of data storage is approaching fundamental limits. The main limitation relates to the lower size limit that an individual bit can have for stable storage. Various techniques for overcoming these fundamental limits are currently the focus of considerable research effort. Most attempt to improve current data storage methods, or modify these slightly for higher density storage. Alternatively, three-dimensional optical data storage is a promising field for the information storage needs of the future, offering very high-density, high-speed memory. There are two ways in which data may be recorded in a three-dimensional optical medium: either bit-by-bit (similar in principle to an optical disc medium such as CD or DVD) or by using pages of bit data. Bit-by-bit techniques for three-dimensional storage offer high density but are inherently slow due to the serial nature of data access. Page-based techniques, where a two-dimensional page of data bits is written in one write operation, can offer significantly higher data rates, due to their parallel nature. Holographic Data Storage (HDS) is one such page-oriented optical memory technique. This field of research has been active for several decades, but with few commercial products presently available. Another page-oriented optical memory technique involves recording pages of data as phase masks in a photorefractive medium. A photorefractive material is one in which the refractive index can be modified by light of the appropriate wavelength and intensity, and this property can be used to store information in these materials. In phase mask storage, two-dimensional pages of data are recorded into a photorefractive crystal as refractive index changes in the medium. A low-intensity readout beam propagating through the medium will have its intensity profile modified by these refractive index changes, and a CCD camera can be used to monitor the readout beam and thus read the stored data. The main aim of this research was to investigate data storage using phase masks in the photorefractive crystal lithium niobate (LiNbO3). Firstly, the experimental methods for storing the two-dimensional pages of data (a set of vertical stripes of varying lengths) in the medium are presented. The laser beam used for writing, whose intensity profile is modified by an amplitude mask which contains a pattern of the information to be stored, illuminates the lithium niobate crystal, and the photorefractive effect causes the patterns to be stored as refractive index changes in the medium. These patterns are read out non-destructively using a low-intensity probe beam and a CCD camera. A common complication of information storage in photorefractive crystals is the issue of destructive readout. This is a problem particularly for holographic data storage, where the readout beam should be at the same wavelength as the beam used for writing.
Since the charge carriers in the medium are still sensitive to the read light field, the readout beam erases the stored information. A method to avoid this is thermal fixing. Here the photorefractive medium is heated to temperatures above 150 °C; this process forms an ionic grating in the medium. This ionic grating is insensitive to the readout beam and therefore the information is not erased during readout. A non-contact method for determining temperature change in a lithium niobate crystal is presented in this thesis. The temperature-dependent birefringent properties of the medium cause intensity oscillations to be observed for a beam propagating through the medium during a change in temperature. It is shown that each oscillation corresponds to a particular temperature change, and by counting the number of oscillations observed, the temperature change of the medium can be deduced. The presented technique for measuring temperature change could easily be applied to a situation where thermal fixing of data in a photorefractive medium is required. Furthermore, by using an expanded beam and monitoring the intensity oscillations over a wide region, it is shown that the temperature in various locations of the crystal can be monitored simultaneously. This technique could be used to deduce temperature gradients in the medium. It is shown that the three-dimensional nature of the recording medium causes interesting degradation effects to occur when the patterns are written for a longer-than-optimal time. This degradation results in the splitting of the vertical stripes in the data pattern, and for long writing exposure times this process can result in the complete deterioration of the information in the medium. It is shown that simply by using incoherent illumination, the original pattern can be recovered from the degraded state. The reason for the recovery is that the refractive index changes causing the degradation are of a smaller magnitude, since they are induced by the write field components scattered from the written structures. During incoherent erasure, the lower-magnitude refractive index changes are neutralised first, allowing the original pattern to be recovered. The degradation process is shown to be reversed during the recovery process, and a simple relationship is found relating the times at which particular features appear during degradation and recovery. A further outcome of this work is that a minimum stripe width of 30 µm is required for accurate storage and recovery of the information in the medium; any smaller width results in incomplete recovery. The degradation and recovery process could find application in image scrambling or cryptography for optical information storage. A two-dimensional numerical model based on the finite-difference beam propagation method (FD-BPM) is presented and used to gain insight into the pattern storage process. The model shows that the degradation of the patterns is due to the complicated path taken by the write beam as it propagates through the crystal, and in particular the scattering of this beam from the induced refractive index structures in the medium. The model indicates that the highest-quality pattern storage would be achieved with a thin 0.5 mm medium; however, such a medium would also remove the degradation property of the patterns and the subsequent recovery process.
To overcome the simplistic treatment of the refractive index change in the FD-BPM model, a fully three-dimensional photorefractive model developed by Devaux is presented. This model provides significant insight into the pattern storage, particularly for the degradation and recovery process, and confirms the theory that the recovery of the degraded patterns is possible because the refractive index changes responsible for the degradation are of a smaller magnitude. Finally, detailed analysis of the pattern formation and degradation dynamics for periodic patterns of various periodicities is presented. It is shown that stripe widths in the write beam of greater than 150 µm result in the formation of different types of refractive index changes, compared with stripes of smaller widths. As a result, it is shown that the pattern storage method discussed in this thesis has an upper feature size limit of 150 µm for accurate and reliable pattern storage.
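The oscillation-counting idea described in this abstract can be illustrated with a short sketch: count the intensity oscillations in the recorded probe-beam trace and multiply by a calibration constant giving the temperature change per oscillation. The constant used below is a placeholder; the real value depends on the crystal, wavelength and geometry and is not taken from the thesis.

```python
# Sketch of the oscillation-counting idea: estimate the temperature change of
# the birefringent crystal from the number of intensity oscillations of a beam
# passing through it. DEG_PER_OSCILLATION is a placeholder calibration value.
import numpy as np
from scipy.signal import find_peaks

DEG_PER_OSCILLATION = 1.0  # assumed delta-T per full oscillation (placeholder)

def estimate_delta_t(intensity: np.ndarray) -> float:
    """Count full oscillations in the recorded intensity trace."""
    trace = intensity - intensity.mean()
    peaks, _ = find_peaks(trace)          # one peak per full oscillation
    return len(peaks) * DEG_PER_OSCILLATION
```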
Abstract:
This paper presents a comprehensive discussion of vegetation management approaches in power line corridors based on aerial remote sensing techniques. We address three issues: 1) strategies for risk management in power line corridors, 2) selection of suitable platforms and a sensor suite for data collection, and 3) progress in automated data processing techniques for vegetation management. We present initial results from a series of experiments, together with challenges and lessons learnt from our project.
Abstract:
Acoustic emission (AE) is the phenomenon whereby high-frequency stress waves are generated by the rapid release of energy within a material from sources such as crack initiation or growth. The AE technique involves recording these stress waves by means of sensors placed on the surface and subsequently analysing the recorded signals to gather information such as the nature and location of the source. It is one of several diagnostic techniques currently used for structural health monitoring (SHM) of civil infrastructure such as bridges. Some of its advantages include the ability to provide continuous in-situ monitoring and high sensitivity to crack activity. However, several challenges still exist. Due to the high sampling rate required for data capture, a large amount of data is generated during AE testing. This is further complicated by the presence of a number of spurious sources that can produce AE signals which can then mask the desired signals. Hence, an effective data analysis strategy is needed to achieve source discrimination. This also becomes important for long-term monitoring applications in order to avoid massive data overload. Analysis of the frequency content of recorded AE signals, together with the use of pattern recognition algorithms, is among the advanced and promising data analysis approaches for source discrimination. This paper explores the use of various signal processing tools for analysis of experimental data, with an overall aim of finding an improved method for source identification and discrimination, with particular focus on the monitoring of steel bridges.
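As a generic illustration of frequency-content analysis combined with pattern recognition for source discrimination (not the paper's specific algorithm), the sketch below extracts simple spectral features from each recorded AE burst and clusters them with k-means as a first step towards separating genuine crack-related signals from spurious sources.

```python
# Generic sketch (not the paper's method): extract spectral features from each
# recorded AE burst and cluster them into candidate source classes.
import numpy as np
from sklearn.cluster import KMeans

def spectral_features(burst: np.ndarray, fs: float) -> np.ndarray:
    spectrum = np.abs(np.fft.rfft(burst))
    freqs = np.fft.rfftfreq(len(burst), d=1.0 / fs)
    centroid = np.sum(freqs * spectrum) / np.sum(spectrum)   # spectral centroid
    peak_freq = freqs[np.argmax(spectrum)]                   # dominant frequency
    energy = float(np.sum(burst ** 2))                       # burst energy
    return np.array([centroid, peak_freq, energy])

def cluster_bursts(bursts: list, fs: float, n_clusters: int = 2) -> np.ndarray:
    """Group AE bursts so that clusters can be inspected as candidate sources."""
    features = np.vstack([spectral_features(b, fs) for b in bursts])
    features = (features - features.mean(axis=0)) / features.std(axis=0)
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(features)
```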