Abstract:
Wide and ‘skip row’ row configurations have been used as a means to improve yield reliability in grain sorghum production. However, little effort has been put into designing these systems around optimal combinations of root system characteristics and row configuration, largely because little is known about root system characteristics. The studies reported here aimed to determine the potential extent of root system exploration in skip row systems. Field experiments were conducted under rain-out shelters, and the extent of water extraction and root system growth was measured. One experiment was conducted using widely spaced twin rows grown in the soil. The other involved specially constructed large root observation chambers for single plants. It was found that the potential extent of root system exploration in sorghum reached beyond 2 m from the planted rows using conventional hybrids, and that root exploration continued during grain filling. Preliminary data suggested that the extent of water extraction throughout this region depended on root length density and the balance between demand for, and supply of, water. The results to date suggest that simultaneous genetic and management manipulation of wide row production systems might lead to more effective and reliable production in specific environments. Further study of variation in root-shoot dynamics and root system characteristics is required to exploit possible opportunities.
Abstract:
The purpose of this thesis is to conduct empirical research in corporate Thailand in order to (1) validate the Spirit at Work Scale; (2) investigate the relationships between individual spirit at work and three employee work attitudinal variables (job satisfaction, organisational identification and psychological well-being) and three organisational outcomes (in-role performance, organisational citizenship behaviours (OCB), and turnover intentions); (3) further examine causal relations among these organisational behaviour variables with a longitudinal design; (4) examine the three employee work attitudes as mediator variables between individual spirit at work and the three organisational outcomes; and (5) explore the potential antecedents of organisational conditions that foster employees' experienced individual spirit at work. Two pilot studies, with 155 UK and 175 Thai participants, were conducted for validation testing of the main measure used in this study, the Spirit at Work Scale (Kinjerski & Skrypnek, 2006a). The results of the two studies, including discriminant validity analyses, provided strong evidence that the Spirit at Work Scale (SAWS) is a psychometrically sound measure and a construct distinct from the three work attitude constructs. The final SAWS model contains twelve items in a three-factor structure (meaning in work, sense of community, and spiritual connection) in which the sub-factors loaded on a higher-order factor, and it showed very acceptable reliability. In line with these results it was decided to use the second-order SAWS model for the Thai samples in the main study and subsequent analysis. In total, 715 completed questionnaires were received from the first wave of data collection during July - August 2008; the second wave was conducted within the same organisations, and 501 completed questionnaires were received during March - April 2009.
Data were obtained through 49 organisations of three types within Thailand: public organisations, for-profit organisations, and not-for-profit organisations. Confirmatory factor analysis of all measures used in the study and the hypothesised model were tested with structural equation modelling techniques. The results strongly supported the direct structural model and partially supported the fully mediated model. Moreover, findings differed across self-report and supervisor ratings in the performance and OCB models. Additionally, the antecedent conditions that fostered employees' experienced individual spirit at work, and the implications of these findings for research and practice, are discussed.
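The reliability analyses summarised above can be illustrated with a minimal internal-consistency computation. The thesis used confirmatory factor analysis and structural equation modelling; the sketch below shows only Cronbach's alpha, computed on a small hypothetical item-score matrix (the scores are invented for illustration):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

# Hypothetical responses from 4 participants to 3 Likert items of one sub-factor
scores = np.array([[4, 5, 4],
                   [2, 3, 2],
                   [5, 5, 4],
                   [3, 3, 3]])
alpha = cronbach_alpha(scores)  # items move together, so alpha is high
```

A scale is conventionally called reliable when alpha is around 0.7 or higher; this toy matrix, in which respondents rank consistently across items, lands well above that.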
Abstract:
Two key issues defined the focus of this research in manufacturing plasmid DNA for use in human gene therapy. First, the processing of E. coli bacterial cells to effect the separation of therapeutic plasmid DNA from cellular debris and adventitious material. Second, the affinity purification of the plasmid DNA in a simple one-stage process. The need arises from concerns recently voiced by the FDA about the scalability and reproducibility of current manufacturing processes in meeting the quality criteria of purity, potency, efficacy, and safety for a recombinant drug substance for use in humans. To develop a preliminary purification procedure, an EFD cross-flow micro-filtration module was assessed for its ability to effect the 20-fold concentration, 6-fold diafiltration, and final clarification of the plasmid DNA from the cell lysate derived from a 1 litre E. coli bacterial cell culture. Historically, the employment of cross-flow filtration modules within procedures for harvesting cells from bacterial cultures has failed to reach the standards set by existing continuous centrifuge technologies, frequently resulting in rapid blinding of the membrane with bacterial cells that substantially reduces the permeate flux. The EFD module, containing six helically wound tubular membranes that promote the centrifugal instabilities known as Dean vortices, was challenged with distilled water at Dean numbers between 187Dn and 818Dn and transmembrane pressures (TMP) of 0 to 5 psi. The data demonstrated that the fluid dynamics significantly influenced the permeation rate, displaying a maximum at 227Dn (312 lmh) and a minimum at 818Dn (130 lmh) for a transmembrane pressure of 1 psi. Numerical studies indicated that the initial increase and subsequent decrease resulted from a competition between the centrifugal and viscous forces that create the Dean vortices.
At Dean numbers between 187Dn and 227Dn, the forces combine constructively to increase the apparent strength and influence of the Dean vortices. However, as the Dean number increases above 227Dn, the centrifugal force dominates the viscous forces, compressing the Dean vortices into the membrane walls and reducing their influence on the radial transmembrane pressure, i.e. the permeate flux is reduced. When investigating the action of the Dean vortices in controlling the fouling rate of E. coli bacterial cells, it was demonstrated that the optimum cross-flow rate at which to effect the concentration of a bacterial cell culture was 579Dn at 3 psi TMP, processing in excess of 400 lmh for 20 minutes (i.e., concentrating a 1 L culture to 50 ml in 10 minutes at an average of 450 lmh). The data demonstrated a conflict between the Dean number at which the shear rate could control cell fouling and the Dean number at which the optimum flux enhancement was found. Hence, the internal geometry of the EFD module was shown to be sub-optimal for this application. At 579Dn and 3 psi TMP, the 6-fold diafiltration occupied 3.6 minutes of process time, processing at an average flux of 400 lmh. Again at 579Dn and 3 psi TMP, the clarification of the plasmid from the resulting freeze-thaw cell lysate was achieved at 120 lmh, passing 83% (2.5 mg) of the plasmid DNA (6.3 ng μl-1), 10.8 mg of genomic DNA (~23,000 bp, 36 ng μl-1), and 7.2 mg of cellular proteins (5-100 kDa, 21.4 ng μl-1) into the post-EFD process stream. Hence the EFD module was shown to be effective, achieving the desired objectives in approximately 25 minutes. On the basis of its ability to intercalate into low molecular weight dsDNA present in dilute cell lysates, and to be electrophoresed through agarose, the fluorophore PicoGreen was selected for the development of a suitable dsDNA assay.
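The Dean numbers quoted above characterise the strength of the secondary flow in the helically wound membranes. As a rough numerical companion, the sketch below evaluates De = Re·√(d/D) for water; the tube and helix dimensions and the velocity are hypothetical, not the EFD module's actual geometry:

```python
import math

def reynolds(velocity_m_s, tube_diameter_m,
             density_kg_m3=998.0, viscosity_pa_s=1.0e-3):
    """Reynolds number Re = rho * v * d / mu (defaults: water at ~20 degC)."""
    return density_kg_m3 * velocity_m_s * tube_diameter_m / viscosity_pa_s

def dean_number(velocity_m_s, tube_diameter_m, coil_diameter_m):
    """Dean number De = Re * sqrt(d / D) for flow in a helically wound tube,
    where d is the tube bore and D the diameter of curvature of the helix."""
    re = reynolds(velocity_m_s, tube_diameter_m)
    return re * math.sqrt(tube_diameter_m / coil_diameter_m)

# Hypothetical geometry: 5 mm bore wound on a 50 mm helix, 0.5 m/s cross-flow.
de = dean_number(0.5, 5e-3, 50e-3)  # lands inside the 187Dn-818Dn range studied
```

Raising the cross-flow velocity raises Re, and hence De, in direct proportion, which is why the operating Dean number is set via the cross-flow rate.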
It was assessed for its accuracy and reliability in determining the concentration and identity of DNA present in samples that were electrophoresed through agarose gels. The signal emitted by intercalated PicoGreen was shown to be constant and linear, and the mobility of the PicoGreen-DNA complex was not affected by the intercalation. Concerning the secondary purification procedure, various anion-exchange membranes were assessed for their ability to capture plasmid DNA from the post-EFD process stream. For a commercially available Sartorius Sartobind Q15 membrane, the reduction in the equilibrium binding capacity for ctDNA in buffers of increasing ionic strength demonstrated that DNA was being adsorbed by electrostatic interactions only. However, problems with fluid distribution across the membrane demonstrated that the membrane housing was the predominant cause of the erratic breakthrough curves. Consequently, this would need to be rectified before such a membrane could be integrated into the current system, or indeed be scaled beyond laboratory scale. Moreover, when challenged with the process material, the data showed that considerable quantities of protein (1150 μg) were adsorbed preferentially to the plasmid DNA (44 μg). This was also shown for derivatised Pall Gelman UltraBind US450 membranes that had been functionalised with poly-L-lysine and polyethyleneimine ligands of varying molecular weight. Hence the anion-exchange membranes were shown to be ineffective in capturing plasmid DNA from the process stream. Finally, work was performed to integrate a sequence-specific DNA-binding protein into a single-stage DNA chromatography step, isolating plasmid DNA from E. coli cells whilst minimising contamination from genomic DNA and cellular protein.
Preliminary work demonstrated that the fusion protein was capable of isolating pUC19 DNA into which the recognition sequence for the fusion protein had been inserted (pTS DNA) when in the presence of the conditioned process material. Although the pTS recognition sequence differs from native pUC19 sequences by only 2 bp, the fusion protein was shown to act as a highly selective affinity ligand for pTS DNA alone. Subsequently, the process was scaled up 25-fold and positioned directly after the EFD system. In conclusion, the integration of the EFD micro-filtration system and the zinc-finger affinity purification technique resulted in approximately 1 mg of plasmid DNA being purified from 1 L of E. coli culture in a simple two-stage process, with complete removal of genomic DNA and 96.7% of cellular protein in less than 1 hour of process time.
Abstract:
Task classification is introduced as a method for the evaluation of monitoring behaviour in different task situations. On the basis of an analysis of different monitoring tasks, a task classification system comprising four task 'dimensions' is proposed. The perceptual speed and flexibility of closure categories, which are identified with signal discrimination type, comprise the principal dimension in this taxonomy, the others being sense modality, the time course of events, and source complexity. It is also proposed that decision theory provides the most complete method for the analysis of performance in monitoring tasks. Several different aspects of decision theory in relation to monitoring behaviour are described. A method is also outlined whereby both accuracy and latency measures of performance may be analysed within the same decision theory framework. Eight experiments and an organizational study are reported. The results show that a distinction can be made between the perceptual efficiency (sensitivity) of a monitor and his criterial level of response, and that in most monitoring situations there is no decrement in efficiency over the work period, but an increase in the strictness of the response criterion. The range of tasks exhibiting either or both of these performance trends can be specified within the task classification system. In particular, it is shown that a sensitivity decrement is only obtained for 'speed' tasks with a high stimulation rate. A distinctive feature of 'speed' tasks is that target detection requires the discrimination of a change in a stimulus relative to preceding stimuli, whereas in 'closure' tasks, the information required for the discrimination of targets is presented at the same point in time. In the final study, the specification of tasks yielding sensitivity decrements is shown to be consistent with a task classification analysis of the monitoring literature.
It is also demonstrated that the signal type dimension has a major influence on the consistency of individual differences in performance in different tasks. The results provide an empirical validation for the 'speed' and 'closure' categories, and suggest that individual differences are not completely task specific but are dependent on the demands common to different tasks. Task classification is therefore shown to enable improved generalizations to be made of the factors affecting 1) performance trends over time, and 2) the consistency of performance in different tasks. A decision theory analysis of response latencies is shown to support the view that criterion shifts are obtained in some tasks, while sensitivity shifts are obtained in others. The results of a psychophysiological study also suggest that evoked potential latency measures may provide temporal correlates of criterion shifts in monitoring tasks. Among other results, the finding that the latencies of negative responses do not increase over time is taken to invalidate arousal-based theories of performance trends over a work period. An interpretation in terms of expectancy, however, provides a more reliable explanation of criterion shifts. Although the mechanisms underlying the sensitivity decrement are not completely clear, the results rule out 'unitary' theories such as observing response and coupling theory. It is suggested that an interpretation in terms of memory data limitations on information processing provides the most parsimonious explanation of all the results in the literature relating to the sensitivity decrement. Task classification therefore enables the refinement and selection of theories of monitoring behaviour in terms of their reliability in generalizing predictions to a wide range of tasks. It is thus concluded that task classification and decision theory provide a reliable basis for the assessment and analysis of monitoring behaviour in different task situations.
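The decision-theory distinction drawn above, between a monitor's sensitivity and the strictness of the response criterion, is conventionally captured by the signal-detection indices d′ and c. The sketch below computes both from hit and false-alarm rates under the equal-variance Gaussian model; the rates are hypothetical, chosen to mimic a vigilance decrement that is criterial rather than perceptual:

```python
from statistics import NormalDist

def sdt_indices(hit_rate: float, false_alarm_rate: float):
    """Equal-variance Gaussian signal detection model:
    d' = z(H) - z(FA)   (sensitivity)
    c  = -(z(H) + z(FA)) / 2   (response criterion; larger = stricter)."""
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(false_alarm_rate)
    criterion = -0.5 * (z(hit_rate) + z(false_alarm_rate))
    return d_prime, criterion

# Hypothetical watch-keeping data: hits AND false alarms decline over the session
d_early, c_early = sdt_indices(hit_rate=0.80, false_alarm_rate=0.10)
d_late, c_late = sdt_indices(hit_rate=0.60, false_alarm_rate=0.02)
# d' stays essentially constant while c rises: efficiency is preserved but the
# criterion grows stricter, the pattern reported for most monitoring tasks.
```

A decrement in hits alone would be ambiguous; it is the joint fall in false alarms that identifies the change as a criterion shift rather than a sensitivity loss.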
Abstract:
This thesis is based upon a case study of the introduction of automated production technologies at the Longbridge plant of British Leyland in the period 1978 to 1980. The investment in automation was part of an overall programme of modernization to manufacture the new 'Mini Metro' model. In the first section of the thesis, the different theoretical perspectives on technological change are discussed. Particular emphasis is placed upon the social role of management as the primary controllers of technological change. Their actions are seen to be oriented towards the overall strategy of the firm, integrating the firm's competitive strategy with production methods and techniques. This analysis is grounded in an examination of British Leyland's strategies during the 1970s. The greater part of the thesis deals with the efforts made by management to secure their strategic objectives in the process of technological change against the conflicting claims of their work-force. Examination of these efforts is linked to the development of industrial relations conflict at Longbridge and in British Leyland as a whole. Emphasis is placed upon the struggle between management, in pursuit of their version of efficiency, and the trade unions, in defence of job controls and demarcations. The thesis concludes that the process of technological change in the motor industry is controlled by social forces, with the introduction of new technologies being closely intertwined with management's political relations with the trade unions.
Abstract:
Speech recognition technology is regarded as a key enabler for increasing the usability of applications deployed on mobile devices -- devices which are becoming increasingly prevalent in modern hospital-based healthcare. Although the use of speech recognition is not new to the hospital-based healthcare domain, its use with mobile devices has thus far been limited. This paper presents the results of a literature review we conducted in order to observe the manner in which speech recognition technology has been used in hospital-based healthcare and to gain an understanding of how this technology is being evaluated, in terms of its dependability and reliability, in healthcare settings. Our intent is that this review will help identify scope for future uses of speech recognition technologies in the healthcare domain, as well as to identify implications for the meaningful evaluation of such technologies given the specific context of use.
Abstract:
Greenhouse cultivation is an energy-intensive process; it is therefore worthwhile to introduce energy-saving measures and alternative energy sources. Here we show that there is scope for energy saving in fan-ventilated greenhouses. Measurements of electricity usage as a function of fan speed have been performed for two models of 1.25 m diameter greenhouse fans and compared to theoretical values. Reducing the speed can cut the energy usage per volume of air moved by more than 70%. To minimize the capital cost of low-speed operation, a cooled greenhouse has been built in which the fan speed responds to sunlight such that full speed is reached only around noon. The energy saving is about 40% compared to constant-speed operation. Direct operation of fans from solar-photovoltaic modules is also viable, as shown by experiments with a fan driven by a brushless DC motor. On comparing the Net Present Value costs of the different systems over a 10-year amortization period (with and without a carbon tax to represent environmental costs), we find that the sunlight-controlled system saves money under all assumptions about taxation and discount rates. The solar-powered system, however, is only profitable at very low discount rates, due to its high initial capital cost. Nonetheless, this system could be of interest for its reliability in developing countries where mains electricity is intermittent. We recommend that greenhouse fan manufacturers improve the availability of energy-saving designs such as those described here.
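The "more than 70%" figure is consistent with the standard fan affinity laws: airflow scales with fan speed N while shaft power scales with N³, so the energy used per volume of air moved scales as N². A quick idealised sketch (ignoring motor and drive losses):

```python
def specific_energy_ratio(speed_fraction: float) -> float:
    """Fan affinity laws: flow ~ N, power ~ N^3, so the energy used
    per unit volume of air moved scales as N^2."""
    power_ratio = speed_fraction ** 3   # shaft power relative to full speed
    flow_ratio = speed_fraction         # airflow relative to full speed
    return power_ratio / flow_ratio     # equals speed_fraction ** 2

# Running a fan at half speed cuts the energy per volume of air moved by 75%,
# comfortably past the >70% figure measured for the real fans.
saving_at_half_speed = 1.0 - specific_energy_ratio(0.5)
```

Real fans fall somewhat short of the ideal cube law at low speed (motor efficiency drops), which is why the measured saving is quoted as "more than 70%" rather than the theoretical 75%.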
Abstract:
The annealing properties of Type IA Bragg gratings are investigated and compared with those of Type I and Type IIA Bragg gratings. The transmission properties (mean and modulated wavelength components) of gratings held at predetermined temperatures are recorded, from which decay characteristics are inferred. Our data show critical results concerning the high-temperature stability of Type IA gratings: they undergo a drastic initial decay at 100°C, with a consequent mean index change that is severely reduced at this temperature. However, the modulated index change of Type IA gratings remains stable at the lower annealing temperature of 80°C, and the mean index change decays at a rate comparable to that of Type I gratings at 80°C. Extending this work to include the thermal decay of Type IA gratings inscribed under strain shows that the application of strain quite dramatically transforms the temperature characteristics of the Type IA grating, modifying the temperature coefficient and annealing curves; the grating shows a remarkable improvement in high-temperature stability, leading to a robust grating that can survive temperatures exceeding 180°C. For inscription under strain, it is found that the temperature coefficient increases, but is maintained at a value considerably different from that of the Type I grating. Therefore, the combination of Type I and Type IA (strained) gratings makes it possible to decouple temperature and strain over larger temperature excursions.
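The temperature-strain decoupling claimed above rests on the two grating types having distinct temperature coefficients, so the pair of measured wavelength shifts can be inverted as a 2x2 linear system. A sketch with entirely hypothetical sensitivity coefficients (not values from this work):

```python
import numpy as np

# Hypothetical sensitivity matrix for a Type I / strained Type IA pair:
# rows = gratings, columns = (temperature pm/degC, strain pm/microstrain).
# Distinct temperature coefficients, similar strain coefficients.
K = np.array([[10.0, 1.2],
              [ 7.0, 1.2]])

def decouple(wavelength_shifts_pm):
    """Recover (dT in degC, dstrain in microstrain) from the two shifts."""
    return np.linalg.solve(K, np.asarray(wavelength_shifts_pm, dtype=float))

# Forward-simulate a 50 degC rise plus 100 microstrain, then invert:
shifts = K @ np.array([50.0, 100.0])
dT, dstrain = decouple(shifts)
```

The inversion is well conditioned only because the temperature coefficients differ appreciably; if the two gratings responded identically, K would be singular and temperature and strain could not be separated.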
Abstract:
Cardiac troponin I (cTnI) is one of the most useful serum marker tests for the determination of myocardial infarction (MI). The first commercial assay for cTnI was released for medical use in the United States and Europe in 1995. It is useful in determining whether the source of chest pains, whose etiology may be unknown, is cardiac related. Cardiac TnI is released into the bloodstream following myocardial necrosis (cardiac cell death) as a result of an infarct (heart attack). In this research project, the utility of cardiac troponin I as a potential marker for the determination of time of death is investigated. The approach of this research is not to investigate cTnI degradation in serum/plasma, but to investigate the proteolytic breakdown of this protein in heart tissue postmortem. If our hypothesis is correct, cTnI might show a distinctive temporal degradation profile after death. This temporal profile may have potential as a time-of-death marker in forensic medicine. The field of time-of-death markers has lagged behind the great advances in technology since the late 1850s. Today medical examiners are using rudimentary time-of-death markers that offer limited reliability in the medico-legal arena. Cardiac TnI must be stabilized in order to avoid further degradation by proteases in the extraction process. Chemically derivatized magnetic microparticles were covalently linked to anti-cTnI monoclonal antibodies. A charge-capture approach was also used to eliminate the antibody from the magnetic microparticles, given the negative charge on the microparticles. The magnetic microparticles were used to extract cTnI from heart tissue homogenate for further bio-analysis. Cardiac TnI was eluted from the beads with a buffer and analyzed. This technique exploits the banding pattern on sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE), followed by western blot transfer to polyvinylidene fluoride (PVDF) paper for probing with anti-cTnI monoclonal antibodies.
Bovine hearts were used as a model to establish the relationship between time of death and concentration/band pattern, given their homology to human cardiac TnI. The final concept feasibility was tested with human heart samples from cadavers with known time of death.
Abstract:
Several factors can increase or decrease military-economic involvement in communist regimes. This anomalous form of military behavior, labeled the Military Business Complex (MBC), emerged in various communist regimes in the 1980s. However, in the early 2000s, the communist governments of China and Vietnam began to decrease the number of military-managed industries, while similar industries increased in Cuba. This paper explains why military industries in Cuba have increased over the last two decades while they decreased in the Chinese and Vietnamese examples. This question is answered by comparatively testing two hypotheses: the Communist Party and the Bureaucratic-Authoritarian (BA) hypotheses. The Communist Party hypothesis helps explain how the historical and current structures of Party oversight of the military have been lacking in strength and reliability in Cuba, while they traditionally have been more robust in China and Vietnam. The BA hypothesis helps explain how, in the absence of strong civilian institutional oversight, the Cuban military has grown into a bureaucratic entity with many political officers holding autonomous positions of power, an outcome that is not prevalent in the Chinese and Vietnamese examples. Thus, with the establishment of a bureaucratic military government and the absence of strong party oversight, the Cuban military has been able to protect its economic endeavors while the Chinese and Vietnamese MBC regimes have contracted.
Abstract:
In deregulated power markets it is necessary to have an appropriate transmission pricing methodology that also takes congestion and reliability into account, in order to ensure economically viable, equitable, and congestion-free power transfer capability with high reliability and security. This thesis presents the results of research conducted on the development of a Decision Making Framework (DMF) of concepts, data-analytic and modelling methods for reliability-benefit-reflective optimal transmission cost evaluation for composite power systems, using probabilistic methods. The methodology within the DMF devised and reported in this thesis utilises a full AC Newton-Raphson load flow and a Monte Carlo approach to determine reliability indices, which are then used in the proposed Meta-Analytical Probabilistic Approach (MAPA) for the evaluation and calculation of the Reliability-benefit Reflective Optimal Transmission Cost (ROTC) of a transmission system. The DMF includes methods for allocating transmission line embedded costs among transmission transactions, accounting for line capacity use, as well as congestion costing that can be used for pricing, applying the Power Transfer Distribution Factor (PTDF) method as well as Bialek's method. The MAPA utilises bus data, generator data, line data, reliability data and Customer Damage Function (CDF) data for congestion, transmission and reliability costing studies, using the proposed application of PTDF and other established methods, which are then compared, analysed and selected according to area/state requirements and integrated to develop the ROTC.
Case studies involving standard 7-bus, IEEE 30-bus and 146-bus Indian utility test systems are conducted and reported in the relevant sections of the dissertation. There is close correlation between the results obtained through the proposed application of the PTDF method and those of Bialek's and various MW-Mile methods. The novel contributions of this research are: firstly, the application of the PTDF method developed for the determination of transmission and congestion costing, which is compared with other proven methods; the viability of the developed method is explained in the methodology, discussion and conclusion chapters. Secondly, the development of a comprehensive DMF that helps decision makers analyse and select a costing approach according to their requirements, since in the DMF all the costing approaches have been integrated to achieve the ROTC. Thirdly, the composite methodology for calculating the ROTC has been formed into suites of algorithms and MATLAB programs for each part of the DMF, which are further described in the methodology section. Finally, the dissertation concludes with suggestions for future work.
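The PTDF quantities central to the costing methodology above can be illustrated on a toy DC power flow. The sketch below builds the PTDF matrix for a hypothetical symmetric 3-bus, 3-line network (not one of the thesis test systems); PTDF[l, k] is the per-unit flow induced on line l by 1 MW injected at bus k and withdrawn at the slack bus:

```python
import numpy as np

# Hypothetical 3-bus network: lines as (from_bus, to_bus, reactance p.u.)
lines = [(0, 1, 0.1), (0, 2, 0.1), (1, 2, 0.1)]
n_bus, slack = 3, 0

# Assemble the DC power flow susceptance matrix B
B = np.zeros((n_bus, n_bus))
for f, t, x in lines:
    b = 1.0 / x
    B[f, f] += b; B[t, t] += b
    B[f, t] -= b; B[t, f] -= b

# Invert B with the slack row/column removed, then pad back with zeros
keep = [i for i in range(n_bus) if i != slack]
X = np.zeros((n_bus, n_bus))
X[np.ix_(keep, keep)] = np.linalg.inv(B[np.ix_(keep, keep)])

# PTDF[l, k] = (X[f, k] - X[t, k]) / x_l for line l from bus f to bus t
ptdf = np.array([[(X[f, k] - X[t, k]) / x for k in range(n_bus)]
                 for f, t, x in lines])
# In this symmetric triangle, 1 MW injected at bus 1 sends 2/3 MW over the
# direct line to the slack and 1/3 MW around the long path via bus 2.
```

Once the PTDF matrix is available, the MW-Mile style usage of each line by each transaction follows by multiplying PTDF entries by transaction MW, which is the starting point for the embedded-cost allocation described above.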
Abstract:
This essay examines the themes of paranoia and claustrophobia as elements of horror in John Campbell’s novella “Who Goes There?” (1938) and John Carpenter’s film adaptation of that novella, The Thing (1982). The novella and the film utilize the lack of trust and reliability between the characters as elements of fear, as well as supernatural elements in the form of a monster. This essay focuses on the different parts of the story running through both versions, mainly the setting, the characters and the monster, to show how the themes of paranoia and claustrophobia are used throughout as elements of fear and horror. With the help of Sigmund Freud’s concept of the uncanny, as well as other sources, this essay argues that while the monster plays an important role throughout the story, the threats created by paranoia and claustrophobia are equal to the monster itself.
Abstract:
During the last decade, wind power generation has seen rapid development. According to the U.S. Department of Energy, achieving 20% wind power penetration in the U.S. by 2030 will require: (i) enhancement of the transmission infrastructure, (ii) improvement of the reliability and operability of wind systems, and (iii) increased U.S. manufacturing capacity for wind generation equipment. This research concentrates on improving the reliability and operability of wind energy conversion systems (WECSs). The increased penetration of wind energy into the grid imposes new operating conditions on power systems, and this change requires the development of an adequate reliability framework. This thesis proposes a framework for assessing WECS reliability in the face of external disturbances, e.g., grid faults, and internal component faults. The framework is illustrated using a detailed model of a type C WECS (a doubly fed induction generator) with corresponding deterministic and random variables in a simplified grid model. Fault parameters and performance requirements essential to reliability measurement are included in the simulation. The proposed framework allows quantitative analysis of WECS designs; analysis of WECS control schemes, e.g., fault ride-through mechanisms; discovery of key parameters that influence overall WECS reliability; and computation of WECS reliability with respect to different grid codes/performance requirements.
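The probabilistic flavour of such a reliability framework can be sketched in a few lines. The sample below is a non-sequential Monte Carlo estimate of the availability of a series system of WECS components; the component set and forced-outage rates are hypothetical, and the real framework additionally couples component states to a detailed grid-fault model:

```python
import random

def mc_series_availability(outage_rates, n_samples=200_000, seed=42):
    """Non-sequential Monte Carlo: sample each component's up/down state
    independently; a series system is up only if every component is up."""
    rng = random.Random(seed)
    up = 0
    for _ in range(n_samples):
        if all(rng.random() >= q for q in outage_rates):
            up += 1
    return up / n_samples

# Hypothetical forced-outage rates: gearbox 4%, converter 2%, generator 1%
availability = mc_series_availability([0.04, 0.02, 0.01])
# The estimate converges on the analytic value 0.96 * 0.98 * 0.99 as the
# sample count grows, which is a useful sanity check for the simulator.
```

For this simple series structure the analytic answer is cheap, but the same sampling loop extends directly to dependent failures and fault ride-through logic, where no closed form exists.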
Abstract:
By providing vehicle-to-vehicle and vehicle-to-infrastructure wireless communications, vehicular ad hoc networks (VANETs), also known as “networks on wheels”, can greatly enhance traffic safety, traffic efficiency and the driving experience in intelligent transportation systems (ITS). However, the unique features of VANETs, such as high mobility and the uneven distribution of vehicular nodes, pose critical challenges to the efficiency and reliability of VANET implementations. This dissertation is motivated by the great application potential of VANETs in the design of efficient in-network data processing and dissemination. Considering the significance of message aggregation, data dissemination and data collection, this dissertation targets enhancing traffic safety and traffic efficiency, as well as developing novel commercial applications based on VANETs, along four lines: 1) accurate and efficient message aggregation to detect on-road safety-relevant events, 2) reliable data dissemination to notify remote vehicles, 3) efficient and reliable spatial data collection from vehicular sensors, and 4) novel promising applications to exploit the commercial potential of VANETs. Specifically, to enable cooperative detection of safety-relevant events on the roads, the structure-less message aggregation (SLMA) scheme is proposed to improve communication efficiency and message accuracy. The relative position based message dissemination (RPB-MD) scheme is proposed to reliably and efficiently disseminate messages to all intended vehicles in the zone-of-relevance under varying traffic density. Given the abundance of vehicular sensor data available in VANETs, the compressive sampling based data collection (CS-DC) scheme is proposed to efficiently collect spatially relevant data on a large scale, especially in dense traffic.
In addition, with novel and efficient solutions proposed for the application-specific issues of data dissemination and data collection, several appealing value-added applications are developed to exploit the commercial potential of VANETs, namely general purpose automatic survey (GPAS), VANET-based ambient ad dissemination (VAAD) and VANET-based vehicle performance monitoring and analysis (VehicleView). Thus, by improving the efficiency and reliability of in-network data processing and dissemination, including message aggregation, data dissemination and data collection, together with the development of novel promising applications, this dissertation helps push VANETs further towards the stage of massive deployment.