958 results for High Reliability
Abstract:
Scientific studies focusing specifically on references appear to be lacking, even though the use of references is an important practice for many companies engaged in industrial marketing. The purpose of the study is to increase understanding of the use of references in international industrial marketing and thereby contribute to the development of a theory of reference behavior. Specifically, the study explores the modes of reference usage in industry, the factors affecting a supplier's reference behavior, and how references are actually utilized. Because of the exploratory nature of the study, a research design was followed in which theory and empirical work alternated. An Exploratory Framework was developed to guide a pilot case study, which resulted in Framework 1. The results of the pilot study guided an expanded literature review that was used to develop a Structural Framework and a Process Framework, which were then combined into Framework 2. The second empirical phase of the case study was conducted in the same (pilot) case company, using Decision Systems Analysis (DSA) as the analysis method. The DSA procedure consists of three interviewing waves: initial interviews, reinterviews, and validating interviews. Four reference decision processes were identified, described, and analyzed in the form of flowchart descriptions, which were then used to explore new constructs and to derive new propositions for developing Framework 2 further. The quality of the study was safeguarded through several measures in both empirical parts. Construct validity was addressed by using multiple sources of evidence and by asking the key informant to review the pilot case report; the DSA method itself includes procedures for assuring validity. Because a single case study was chosen, external validity was not pursued. High reliability was pursued through detailed documentation and thorough reporting of evidence. It was concluded that the core of the concept of reference is a customer relationship, regardless of the concrete forms a reference might take when used. Depending on various contingencies, references may serve various tasks within four roles: increasing 1) the efficiency of sales and sales management, 2) the efficiency of the business, 3) the effectiveness of marketing activities, and 4) the effectiveness of establishing, maintaining, and enhancing customer relationships. Thus, references have internal as well as external tasks. A supplier's reference behavior may be affected by many hierarchical conditions. The empirical study also showed that the supplier can utilize its references as a continuous, all-pervasive decision-making process through various practices; the process includes both individual and unstructured decision-making subprocesses. The proposed concept of reference can be used to guide a reference policy for companies for which the utilization of references is important. The significance of the study is threefold: it proposes the concept of reference, develops a framework of a supplier's reference behavior and its short-term process of utilizing references, and conceptually structures an unstructured phenomenon that is important in industrial marketing into four roles.
Abstract:
A steady increase in practical industrial applications has secured a place for linear motors. They provide high dynamics and high positioning accuracy together with high reliability and durability of all components of the system, and machines built around linear motors have strong prospects in modern industry. This thesis explains what a linear motor is, where linear motors are used, and what the current state of their market is. It can help the reader judge when applying linear motors in manufacturing is reasonable and what the benefits of such an application are.
Abstract:
The quantitative component of this study examined the effect of computer-assisted instruction (CAI) on science problem-solving performance, as well as the significance of logical reasoning ability to this relationship. I had the dual role of researcher and teacher, as I conducted the study with 84 grade-seven students to whom I simultaneously taught science on a rotary basis. A two-treatment research design using this sample of convenience allowed for a comparison between the problem-solving performance of a CAI treatment group (n = 46) and a laboratory-based control group (n = 38). Science problem-solving performance was measured by a pretest and posttest that I developed for this study. The validity of these tests was addressed through critical discussions with faculty members and colleagues, as well as through feedback gained in a pilot study. High reliability was found between the pretest and the posttest: students who tended to score high on the pretest also tended to score high on the posttest. Interrater reliability was found to be high for 30 randomly selected test responses which were scored independently by two raters (i.e., myself and my faculty advisor). Results indicated that the form of computer-assisted instruction (CAI) used in this study did not significantly improve students' problem-solving performance. Logical reasoning ability was measured by an abbreviated version of the Group Assessment of Logical Thinking (GALT). Logical reasoning ability was found to be correlated with problem-solving performance: students with high logical reasoning ability tended to do better on the problem-solving tests and vice versa. However, no significant difference was observed in problem-solving improvement between the laboratory-based instruction group and the CAI group for students varying in level of logical reasoning ability. Insignificant trends were noted in results obtained from students of high logical reasoning ability, but these require further study. It was acknowledged that conclusions drawn from the quantitative component of this study were limited, as further modifications of the tests were recommended, as well as the use of a larger sample size. The purpose of the qualitative component of the study was to provide a detailed description of my thesis research process as a Brock University Master of Education student. My research journal notes served as the database for open-coding analysis. This analysis revealed six main themes which best described my research experience: research interests, practical considerations, research design, research analysis, development of the problem-solving tests, and scoring scheme development. These important areas of my thesis research experience were recounted in the form of a personal narrative. It was noted that the research process was a form of problem solving in itself, as I made use of several problem-solving strategies to achieve desired thesis outcomes.
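For illustration, the two reliability checks mentioned above (pretest-posttest consistency and interrater agreement) are commonly computed as Pearson correlations. The sketch below shows this with invented scores, not the study's data.

```python
# Minimal sketch of the two reliability checks described above, with invented
# scores (not the study's data); variable names are illustrative only.
import numpy as np
from scipy.stats import pearsonr

# Test-retest style check: do students who score high on the pretest also
# score high on the posttest?
pretest = np.array([12, 15, 9, 20, 17, 11, 14, 18])
posttest = np.array([14, 16, 10, 22, 18, 12, 15, 19])
r_prepost, p_prepost = pearsonr(pretest, posttest)

# Interrater reliability: agreement between two raters scoring the same
# randomly selected responses (shortened to 8 responses for the example).
rater_a = np.array([3, 4, 2, 5, 4, 3, 5, 4])
rater_b = np.array([3, 4, 3, 5, 4, 3, 4, 4])
r_raters, p_raters = pearsonr(rater_a, rater_b)

print(f"pretest-posttest r = {r_prepost:.2f} (p = {p_prepost:.3f})")
print(f"interrater r = {r_raters:.2f} (p = {p_raters:.3f})")
```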
Abstract:
Two important themes in health technology, evidence-based medical practice and the evaluation of interventions in medicine, are grounded in a positivist approach and a mechanistic conception of healthcare organizations. In this thesis, we raise the hypothesis that complexity theories and systems thinking allow a different conceptualization of these two aspects of the clinical governance of a Surgical Intensive Care (SIC) unit, which is considered a nonlinear dynamic adaptive system requiring a systemic approach to cognition. A case study of an SIC unit demonstrates, through numerous examples and analyses of micro-situations, all the characteristics of the complexity of critical, unstable patients and of the organizational structure of the SIC. After an epistemological critique of Evidence-Based Medicine, we propose a practice grounded in clinical reasoning that combines abduction, hermeneutics, and systems thinking in the SIC. Drawing on the work of Karl Weick, we also suggest rethinking the evaluation of clinical intervention modes by drawing on the notion of the high-reliability organization to put in place the conditions needed to improve practice in the SIC.
Abstract:
Embedded systems are usually designed for a single task or a specified set of tasks. This specificity means that the system design, as well as its hardware/software development, can be highly optimized. Embedded software must meet requirements such as highly reliable operation on resource-constrained platforms, real-time constraints, and rapid development. This necessitates the adoption of static machine-code analysis tools running on a host machine for the validation and optimization of embedded system code, which can help meet all of these goals. Such analysis could significantly improve software quality and is still a challenging field. This dissertation contributes an architecture-oriented code validation, error localization, and optimization technique that assists the embedded system designer in software debugging, making early detection of software bugs that are otherwise hard to detect more effective through static analysis of machine code. The focus of this work is to develop methods that automatically localize faults and optimize the code, thereby improving both the debugging process and the quality of the code. Validation is done with the help of rules of inference formulated for the target processor. The rules govern the occurrence of illegitimate or out-of-place instructions and code sequences for executing the computational and integrated peripheral functions. The stipulated rules are encoded in propositional logic formulae, and their compliance is tested individually in all possible execution paths of the application programs.
An incorrect sequence of machine-code patterns is identified using slicing techniques on the control flow graph generated from the machine code. An algorithm is proposed to assist the compiler in eliminating redundant bank-switching code and deciding on optimum data allocation to banked memory, resulting in a minimum number of bank-switching instructions in embedded system software. A relation matrix and a state transition diagram, formed for the active memory bank state transitions corresponding to each bank-selection instruction, are used for the detection of redundant code. Instances of code redundancy based on the stipulated rules for the target processor are identified. This validation and optimization tool can be integrated into the system development environment. It is a novel approach, independent of compiler and assembler, applicable to a wide range of processors once appropriate rules are formulated. Program states are identified mainly from machine-code patterns, which drastically reduces state-space creation and contributes to improved state-of-the-art model checking. Though the technique described is general, the implementation is architecture oriented, and hence the feasibility study is conducted on PIC16F87X microcontrollers. The proposed tool will be very useful in steering novices towards correct use of difficult microcontroller features in developing embedded systems.
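As an illustration of the general idea only (not of the dissertation's actual tool), the sketch below checks a single stipulated rule, a redundant bank-select instruction that re-selects the already active bank, over all execution paths of a toy control-flow graph. The mnemonics, blocks, and rule are hypothetical.

```python
# Illustrative sketch: checking a simple rule over all execution paths of a
# small control-flow graph of machine instructions. The rule flags a redundant
# bank-select instruction, i.e. one that selects the bank that is already
# active. Instruction mnemonics and the graph layout are hypothetical.

# Each basic block is a list of (mnemonic, operand) pairs.
blocks = {
    "B0": [("BANKSEL", 1), ("MOVWF", "0x21")],
    "B1": [("BANKSEL", 1), ("MOVWF", "0x22")],   # redundant: bank 1 already active
    "B2": [("BANKSEL", 0), ("MOVWF", "0x05")],
    "B3": [("RETURN", None)],
}
# Simple CFG: B0 branches to B1 or B2, both fall through to B3.
edges = {"B0": ["B1", "B2"], "B1": ["B3"], "B2": ["B3"], "B3": []}

def paths(node, path=()):
    """Enumerate all acyclic execution paths from `node`."""
    path = path + (node,)
    if not edges[node]:
        yield path
    for nxt in edges[node]:
        yield from paths(nxt, path)

def check_redundant_banksel(path):
    """Flag BANKSEL instructions that re-select the currently active bank."""
    active, findings = None, []
    for block in path:
        for mnemonic, operand in blocks[block]:
            if mnemonic == "BANKSEL":
                if operand == active:
                    findings.append((block, mnemonic, operand))
                active = operand
    return findings

for p in paths("B0"):
    for finding in check_redundant_banksel(p):
        print("redundant bank switch on path", " -> ".join(p), ":", finding)
```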
Abstract:
So far, in the bivariate setup, the analysis of lifetime (failure time) data with multiple causes of failure has been done by treating each cause of failure separately, with failures from other causes considered as independent censoring. This approach is unrealistic in many situations. For example, in the analysis of mortality data on married couples, one would be interested in comparing the hazards for the same cause of death as well as in checking whether death due to one cause is more important for the partner's risk of death from other causes. In reliability analysis, one often has systems with more than one component, and many systems, subsystems, and components have more than one cause of failure. Design of high-reliability systems generally requires that the individual system components have extremely high reliability even after long periods of time. Knowledge of the failure behaviour of a component can lead to savings in its cost of production and maintenance and, in some cases, to the preservation of human life. For the purpose of improving reliability, it is necessary to identify the cause of failure down to the component level. By treating each cause of failure separately, with failures from other causes considered as independent censoring, the analysis of lifetime data would be incomplete. Motivated by this, we introduce a new approach for the analysis of bivariate competing risk data using the bivariate vector hazard rate of Johnson and Kotz (1975).
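For context, the bivariate vector hazard rate referred to above is usually stated as follows; this is the standard form following Johnson and Kotz (1975), not necessarily the thesis's own notation.

```latex
% Bivariate vector hazard rate for lifetimes (T_1, T_2) with joint survival
% function S(t_1, t_2) = P(T_1 > t_1, T_2 > t_2):
\[
  \mathbf{h}(t_1, t_2)
  = \bigl(h_1(t_1, t_2),\; h_2(t_1, t_2)\bigr)
  = \left(-\frac{\partial}{\partial t_1}\log S(t_1, t_2),\;
          -\frac{\partial}{\partial t_2}\log S(t_1, t_2)\right),
\]
% i.e. $h_i$ is the instantaneous failure rate of component $i$ given that
% both components have survived up to $(t_1, t_2)$.
```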
Abstract:
Cancer treatment is most effective when the cancer is detected early, and progress in treatment will be closely related to the ability to reduce the proportion of misses in the cancer detection task. The effectiveness of algorithms for detecting cancers can be greatly increased if these algorithms work synergistically with those for characterizing normal mammograms. This research work combines computerized image analysis techniques and neural networks to separate out some fraction of the normal mammograms with extremely high reliability, based on normal tissue identification and removal. The presence of clustered microcalcifications is one of the most important, and sometimes the only, sign of cancer on a mammogram: 60% to 70% of non-palpable breast carcinomas demonstrate microcalcifications on mammograms [44], [45], [46]. Wavelet transform (WT) based techniques are applied to the remaining, abnormal mammograms to detect possible microcalcifications. The goal of this work is to improve the detection performance and throughput of screening mammography, thus providing a 'second opinion' to the radiologists. State-of-the-art DWT computation algorithms are not suitable for practical applications with memory and delay constraints, since the DWT is not a block transform. Hence, the development of a Block DWT (BDWT) computational structure having a low processing memory requirement has also been taken up in this work.
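To illustrate the block-processing idea only (this is not the BDWT structure developed in the thesis), the sketch below uses the PyWavelets package: it tiles the image and applies a single-level 2D DWT per tile, so only one block needs to be in working memory at a time; block-boundary handling is ignored for brevity.

```python
# Block-wise 2D DWT sketch: tile the image and transform each tile separately
# to bound working memory. Boundary effects between blocks are ignored here.
import numpy as np
import pywt

def blockwise_dwt2(image, block=64, wavelet="db2"):
    """Return per-block (LL, (LH, HL, HH)) coefficient sets."""
    h, w = image.shape
    coeffs = {}
    for r in range(0, h, block):
        for c in range(0, w, block):
            tile = image[r:r + block, c:c + block]
            coeffs[(r, c)] = pywt.dwt2(tile, wavelet)
    return coeffs

# Example: high-frequency (HH) energy per block, a crude indicator of where
# bright, small-scale structure such as microcalcifications might sit.
image = np.random.rand(256, 256).astype(np.float32)
for (r, c), (ll, (lh, hl, hh)) in blockwise_dwt2(image).items():
    energy = float(np.sum(hh ** 2))
    if energy > 50:          # arbitrary illustrative threshold
        print(f"block at ({r}, {c}): HH energy = {energy:.1f}")
```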
Abstract:
The challenge of reducing carbon emissions and achieving emissions targets by 2050 has become a key energy strategy for every country. The automotive industry, as an important part of implementing these energy requirements, is conducting related research to meet both energy requirements and customer requirements. Modern energy requirements call for clean, green, and renewable solutions; customer requirements call for economy, reliability, and long lifetime. Given increasing market requirements and a growing customer base, EVs and PHEVs are becoming more and more important for automotive manufacturers. EVs and PHEVs normally have two key parts: the battery package and the power electronics composed of critical components. A rechargeable battery is an important element for achieving cost competitiveness; it is mainly used to store energy and to provide continuous energy to drive an electric motor. In order to recharge the battery and drive the electric motor, the power electronics group is an essential bridge that converts between the different energy types for both of them. Modern power electronics offers many different topologies, such as non-isolated and isolated power converters, that can be used for charging the battery. One of the most widely used converter topologies is the multiphase interleaved power converter, primarily because of its prominent advantages: it is frequently employed to obtain optimal dynamic response, high efficiency, and compact converter size. Many detailed investigations of its topology, control strategy, and devices have been carried out. The core research of this thesis is to investigate several related topics in terms of issue analysis and optimization approaches for building the magnetic component. The work starts with an introduction to the reasons for developing EVs and PHEVs and an overview of different possible topologies for specific application requirements. Because of its low component count, high reliability, high efficiency, and absence of special safety requirements, the non-isolated multiphase interleaved converter is selected as the basic research topology of the funded W-charge project, in order to investigate its advantages and potential extensions when using optimized magnetic components. All of the proposed aspects and approaches are then investigated and analyzed in detail in order to verify the constraints and advantages of using integrated coupled inductors. Furthermore, a digital controller concept and a novel tapped-inductor topology are proposed for the multiphase power converter and the electric vehicle application.
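As a rough illustration of why interleaving is attractive, the sketch below numerically sums N phase-shifted triangular ripple currents of a buck-type phase and compares the summed peak-to-peak ripple against a single phase; the effective output ripple frequency becomes N times the switching frequency and the ripple partially cancels. All component values are hypothetical and not taken from the thesis.

```python
# Numerical sketch (illustrative values only) of ripple cancellation in an
# N-phase interleaved buck-type converter: N identical phases carry triangular
# ripple currents shifted by 360/N degrees; their sum has a smaller
# peak-to-peak value than a single phase.
import numpy as np

V_IN, V_OUT = 400.0, 150.0          # V (hypothetical)
L, F_SW = 100e-6, 50e3              # per-phase inductance (H), switching freq (Hz)
D = V_OUT / V_IN                    # buck duty cycle
N_PHASES = 4

t = np.linspace(0, 1 / F_SW, 2000, endpoint=False)

def phase_ripple(t, phase_shift):
    """Triangular inductor-current ripple of one buck phase (DC removed)."""
    tau = (t * F_SW + phase_shift) % 1.0            # position within one period
    rise = V_OUT * (1 - D) / (L * F_SW)             # peak-to-peak per-phase ripple
    up = tau < D
    ripple = np.where(up, rise * (tau / D), rise * (1 - (tau - D) / (1 - D)))
    return ripple - rise / 2

single = phase_ripple(t, 0.0)
total = sum(phase_ripple(t, k / N_PHASES) for k in range(N_PHASES))

print(f"per-phase ripple (pk-pk): {np.ptp(single):.2f} A")
print(f"summed output ripple (pk-pk, {N_PHASES} phases): {np.ptp(total):.2f} A")
```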
Abstract:
During the global financial crisis of 2008, many organizations and financial markets had to end or rethink their operations because of the shocks that hit the well-being of their companies. Despite this grave situation, one can today find companies that recovered and emerged from the terrible scenario the crisis presented to them, even finding new business opportunities and strengthening their future. This capacity, which some organizations had and which allowed them to come out of the crisis victorious, is called resilience: the capacity to overcome the negative effects of internal or external shocks (Briguglio, Cordina, Farrugia & Vella, 2009). This work therefore studies this capacity both in the organization and in its leaders, in order to find factors that improve the performance of companies in crises such as the one that occurred in 2008-2009. First, a study of the events and development of the 2008 subprime crisis is carried out in order to gain a clear understanding of its background, development, magnitude, and consequences. Next, an in-depth study is made of the theory of organizational resilience, resilience in the leader as an individual, and leadership styles. Finally, with a theoretical grounding in both the crisis and the concept of resilience, case studies are examined of companies that managed to endure the 2008 financial crisis and of companies that did not survive, in order to identify characteristics of the leader and of leadership that can increase or affect the resilience of organizations, with the aim of giving today's leaders tools to manage companies efficiently and effectively in a world as complex and changeable as the present one.
Abstract:
Within the formative research scheme, the author built a psychological scale with the purpose of analyzing psychological characteristics of the School of Psychology students. The instrument showed a high reliability coefficient (α = 0.86); the exploratory factor analysis found five variables which allowed some aspects of sex and career level to be predicted through a logistic regression, and the ALSCAL analysis showed two emotional dimensions. The author recommends improving the instrument and continuing its application in the School. It is also necessary to develop qualitative research to acquire new data about the topic of the research.
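For reference, a reliability coefficient such as the reported α = 0.86 is typically Cronbach's alpha; the sketch below computes it on an invented item-score matrix, not the study's data.

```python
# Minimal sketch of Cronbach's alpha for a respondents-by-items score matrix.
# The scores below are invented for illustration; they are not the study's data.
import numpy as np

def cronbach_alpha(scores):
    """scores: 2-D array, rows = respondents, columns = items."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

scores = np.array([
    [4, 5, 4, 3, 5],
    [2, 3, 3, 2, 2],
    [5, 5, 4, 4, 5],
    [3, 3, 2, 3, 3],
    [4, 4, 5, 4, 4],
])
print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")
```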
Abstract:
This study explored the psychometric properties of the Phonological Processing Test of Lara, Serra, Aguilar and Flórez (2005) in a sample of 478 children aged 4 to 7 years in Bogotá and Chía, across several socioeconomic levels. The analysis used item difficulty and discrimination indices, a tetrachoric correlation matrix, Cronbach's alpha coefficient, and principal component analysis with varimax rotation for construct validity. The results showed good performance in phonological awareness except for four items, correspondence of the factor analysis with the division into subscales and levels, and high reliability with a low level of discrimination in the phonological memory subscale. The results are discussed in terms of the components of phonological processing ability.
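For illustration, the classical item statistics mentioned above, difficulty as the proportion of correct answers and discrimination as the corrected item-total correlation, can be computed as in the sketch below; the 0/1 response matrix is invented.

```python
# Minimal sketch of two classical item statistics: difficulty (proportion of
# correct answers) and discrimination (corrected item-total correlation).
# The 0/1 response matrix is invented for illustration.
import numpy as np

responses = np.array([            # rows = children, columns = items
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
    [1, 0, 1, 1],
])

difficulty = responses.mean(axis=0)               # proportion correct per item

def corrected_item_total(responses):
    """Correlation of each item with the total score of the remaining items."""
    out = []
    for j in range(responses.shape[1]):
        rest = np.delete(responses, j, axis=1).sum(axis=1)
        out.append(np.corrcoef(responses[:, j], rest)[0, 1])
    return np.array(out)

discrimination = corrected_item_total(responses)
for j, (p, d) in enumerate(zip(difficulty, discrimination)):
    print(f"item {j + 1}: difficulty = {p:.2f}, discrimination = {d:.2f}")
```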
Abstract:
Important assessment instruments exist that measure motor skills or competencies in children; nevertheless, Colombia lacks studies demonstrating the validity and reliability of a measurement test that would allow an evaluative judgment of children's motor competencies, bearing in mind that intervention must be based on the rigor required by processes of assessment and evaluation of body movement. Objective: This study focused on determining the psychometric properties of the Bruininks-Oseretsky Test of Motor Proficiency, second edition (BOT-2). Materials and methods: An evaluation of diagnostic tests was carried out with 24 apparently healthy children of both sexes, between 4 and 7 years of age, residing in the cities of Chía and Bogotá. The assessment was performed by 3 expert evaluators; internal consistency was analyzed using Cronbach's alpha coefficient, reproducibility was established through the intraclass correlation coefficient (ICC), and concurrent validity was analyzed using Pearson's correlation coefficient, with alpha = 0.05. Results: High reliability and validity indices were found for all of the tests. Conclusions: The BOT-2 is a valid and reliable instrument that can be used for evaluating and identifying the level of development of motor competencies in children.
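As an illustration of how an inter-rater ICC of this kind might be computed (this is not the study's analysis), the sketch below uses the pingouin package on invented ratings in long format.

```python
# Sketch of an inter-rater ICC computed with the pingouin package; the ratings
# below are invented for illustration, not the study's data.
import pandas as pd
import pingouin as pg

# Long format: one row per (child, rater) pair.
data = pd.DataFrame({
    "child": [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
    "rater": ["A", "B", "C"] * 4,
    "score": [55, 57, 56, 42, 44, 43, 61, 60, 62, 50, 49, 51],
})

icc = pg.intraclass_corr(data=data, targets="child", raters="rater", ratings="score")
print(icc)   # reports ICC variants (single/average raters, consistency/agreement)
```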
Abstract:
The desire to maintain high quality in the code written when developing systems and applications is nothing new in the development world. Several larger companies use various metrics to measure the quality of the code in their systems, with the goal of maintaining high operational reliability. Trafikverket (the Swedish Transport Administration) is a government agency responsible for operating, among other things, the systems that keep Sweden's railway network running. Because these systems play an important role in securing operations and ensuring that train positions, departure planning, and handling of disruptions work around the clock for the whole country, the agency considers it important to strive for high quality in the systems. The purpose of this thesis was to find out which metrics can be used during the system development process to measure code quality, and how those metrics can be used to increase the quality of IT solutions, so that the quality of the code written in both existing and newly developed systems can be measured at an early stage. The study is a case study carried out at Trafikverket; the metrics investigated were code coverage, the maintainability index level, and the number of reported incidents for each system. Measurements were performed on seven of Trafikverket's systems and compared in the analysis against the number of reported incidents. Interviews were conducted to give a picture of how development practices can affect quality. The literature review surfaced one metric that could not be used practically in this case but is highly interesting: cyclomatic complexity, which is part of the maintainability index but also independently affects the ability to write unit tests. The results of the study show that the metrics are useful for the purpose but should not be used as individual measures of quality, since they serve different functions. It is important that development follows a clear structure and that the developers both know how to work with unit tests and follow coding principles for structure. Clear connections between the level of code coverage and the inflow of incidents could be seen in the examined systems, where high code coverage gives a lower inflow of incidents. No correlation between maintainability index and incidents could be found.
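For reference, one commonly cited formulation of the maintainability index is the classic three-factor formula together with a 0-100 rescaling; the exact variant computed by Trafikverket's tooling may differ. A minimal sketch:

```python
# One commonly cited formulation of the maintainability index: the classic
# three-factor formula plus the 0-100 rescaling popularized by Visual Studio.
# Input values are illustrative.
import math

def maintainability_index(halstead_volume, cyclomatic_complexity, loc):
    """Classic MI from Halstead volume, cyclomatic complexity and lines of code."""
    return (171
            - 5.2 * math.log(halstead_volume)
            - 0.23 * cyclomatic_complexity
            - 16.2 * math.log(loc))

def maintainability_index_0_100(halstead_volume, cyclomatic_complexity, loc):
    """Same index rescaled to the 0-100 range."""
    return max(0.0, maintainability_index(halstead_volume, cyclomatic_complexity, loc) * 100 / 171)

# Illustrative numbers for a single function.
print(f"MI (classic) = {maintainability_index(1200, 8, 150):.1f}")
print(f"MI (0-100)   = {maintainability_index_0_100(1200, 8, 150):.1f}")
```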
Abstract:
This thesis presents the study and development of fault-tolerant techniques for programmable architectures, the well-known Field Programmable Gate Arrays (FPGAs) customizable by SRAM. FPGAs are becoming more valuable for space applications because of their high density, high performance, reduced development cost, and re-programmability. In particular, SRAM-based FPGAs are very valuable for remote missions because they can be reprogrammed by the user as many times as necessary in a very short period. SRAM-based FPGAs and micro-controllers represent a wide range of components in space applications and are therefore the focus of this work, more specifically the Virtex® family from Xilinx and the architecture of the 8051 micro-controller from Intel. Triple Modular Redundancy (TMR) with voters is a common high-level technique to protect ASICs against single event upsets (SEUs), and it can also be applied to FPGAs. The TMR technique was first tested in the Virtex® FPGA architecture using a small design based on counters. Faults were injected in all sensitive parts of the FPGA, and a detailed analysis of the effect of a fault on a TMR design synthesized in the Virtex® platform was performed. Results from fault injection and from a radiation ground test facility showed the efficiency of TMR for the case study circuit. Although TMR showed high reliability, the technique has some limitations, such as area overhead, three times more input and output pins and, consequently, a significant increase in power dissipation. Aiming to reduce TMR costs and improve reliability, an innovative high-level technique for designing fault-tolerant systems in SRAM-based FPGAs was developed, without modification of the FPGA architecture. This technique combines time and hardware redundancy to reduce overhead and to ensure reliability. It is based on duplication with comparison and concurrent error detection. The new technique proposed in this work was specifically developed for FPGAs to cope with transient faults in the user combinational and sequential logic, while also reducing pin count, area, and power dissipation. The methodology was validated by fault injection experiments on an emulation board. The thesis presents comparison results in fault coverage, area, and performance between the discussed techniques.
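As a minimal illustration of the TMR principle described above, the sketch below replicates a toy function three times and masks a single upset with a bitwise majority voter. Real designs implement this in an HDL, but the voting logic itself is the same 2-of-3 majority function; the protected "design" here is just a stand-in.

```python
# Minimal bit-level sketch of TMR: three replicas of the same logic feed a
# majority voter, so a single upset in one replica is masked. The replicated
# "design" is a toy stand-in, not an actual FPGA circuit.
def majority(a: int, b: int, c: int) -> int:
    """Bitwise 2-of-3 majority voter."""
    return (a & b) | (b & c) | (a & c)

def toy_design(x: int) -> int:
    """Stand-in for the protected logic (e.g. a counter increment)."""
    return (x + 1) & 0xFF

# Three replicas compute the same result...
replicas = [toy_design(0x41) for _ in range(3)]
# ...then an SEU flips a bit in one replica's output.
replicas[1] ^= 0b0000_1000

voted = majority(*replicas)
print(f"replica outputs: {[hex(r) for r in replicas]}, voted: {hex(voted)}")
assert voted == toy_design(0x41)   # the single fault is masked by the voter
```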