917 results for reliability test system
Abstract:
The fracture toughness and interfacial adhesion of a coating on its substrate are crucial intrinsic parameters determining the performance and reliability of a coating-substrate system. In this work, the fracture toughness and interfacial shear strength of a hard, brittle Cr coating on a medium-carbon steel substrate were investigated by means of a tensile test. The steel substrate, electroplated with the Cr coating, was quasi-statically stretched to induce an array of parallel cracks in the coating. An optical microscope was used to observe cracking of the coating and interfacial decohesion between coating and substrate during loading. Cracking of the coating initiated at a critical strain, after which the number of cracks per unit axial distance increased with increasing tensile strain. At a second critical strain the crack pattern saturated, i.e. the number of cracks per unit axial distance remained constant thereafter. Based on the experimental results, the fracture toughness of the brittle coating can be determined using a mechanical model. Interestingly, even when the whole specimen fractured completely under extreme substrate strain, interfacial decohesion and buckling of the coating were entirely absent. This result differs from those reported in the literature, even though the same test method and the same type of brittle-coating/ductile-metal-substrate system were used. The difference can be attributed to the good adhesion of the Cr coating on the steel substrate: the ultimate interfacial shear strength between the Cr coating and the steel exceeds the maximum shear flow strength of the steel substrate.
This result also indicates that the maximum shear flow strength of the ductile steel substrate can only be taken as a lower-bound estimate of the ultimate shear strength of the interface. This estimate of the ultimate interfacial shear strength is consistent with the theoretical analysis and predictions in the literature.
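The lower-bound argument above can be made concrete with a simple one-dimensional shear-lag (Kelly-Tyson type) balance: the interfacial traction acting over half a coating fragment at crack saturation must be able to reload the coating to its fracture strength. The sketch below uses this standard relation with made-up input values (the abstract does not report numbers); the factor of 2 and all inputs are illustrative assumptions, not the paper's model.

```python
# Shear-lag (Kelly-Tyson type) lower bound on interfacial shear strength
# from the saturated crack spacing of a brittle coating under tension.
# All inputs are illustrative placeholders, not values from the paper.

def interfacial_shear_lower_bound(sigma_f, t, s_sat):
    """tau >= 2 * sigma_f * t / s_sat.

    sigma_f : coating fracture strength (Pa)
    t       : coating thickness (m)
    s_sat   : saturated crack spacing (m)
    Balance: traction tau transferred over half a fragment (s_sat/2)
    must reload the coating cross-section (thickness t) to sigma_f.
    """
    return 2.0 * sigma_f * t / s_sat

# Hypothetical numbers for a hard Cr coating:
tau_min = interfacial_shear_lower_bound(sigma_f=800e6, t=20e-6, s_sat=100e-6)
print(f"lower-bound interfacial shear strength: {tau_min / 1e6:.0f} MPa")
```

If, as the paper finds, no decohesion occurs even at substrate fracture, such an estimate only bounds the true interfacial strength from below.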
Abstract:
Peel test measurements have been performed to estimate both the interface toughness and the separation strength between a copper thin film and an Al2O3 substrate, for film thicknesses ranging from 1 to 15 μm. An inverse analysis based on an artificial neural network is adopted to determine the interface parameters, which are characterized by a cohesive zone (CZ) model. Results of finite element simulations based on strain gradient plasticity theory are used to train the network. Using the trained network together with the experimental measurements for one test case, both the interface toughness and the separation strength are determined. Finally, finite element predictions adopting the determined interface parameters are performed for the other film thicknesses, and they agree with the experimental results.
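The essence of such an inverse analysis is: build a forward map from cohesive-zone parameters to a measurable peel response, then invert it against measurements. The paper does this with finite element simulations and a trained neural network; the sketch below replaces both with a hypothetical analytic forward model and a brute-force grid search, purely to illustrate the inversion structure. The forward model, parameter ranges, and "measured" values are all invented.

```python
import numpy as np

def forward_model(gamma, sigma_max, t):
    # HYPOTHETICAL stand-in for the FEM forward model: steady peel force
    # per unit width as a function of interface toughness gamma (J/m^2),
    # separation strength sigma_max (MPa) and film thickness t (um).
    return gamma * (1.0 + 0.05 * sigma_max * t)

# Synthetic "measurements" at two film thicknesses, generated from known
# parameters so the inversion can be checked:
gamma_true, sigma_true = 20.0, 40.0
thicknesses = (1.0, 15.0)
measured = [forward_model(gamma_true, sigma_true, t) for t in thicknesses]

# Inverse analysis by exhaustive search over a parameter grid (the paper
# uses a trained artificial neural network instead):
G, S = np.meshgrid(np.linspace(5, 50, 91), np.linspace(10, 100, 91))
err = sum((forward_model(G, S, t) - m) ** 2
          for t, m in zip(thicknesses, measured))
i, j = np.unravel_index(np.argmin(err), err.shape)
print("best-fit toughness, strength:", G[i, j], S[i, j])
```

Matching responses at two thicknesses is what makes the two parameters separately identifiable here; with a single scalar measurement the fit would be non-unique.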
Abstract:
MELECON 2012 - 2012 16th IEEE Mediterranean Electrotechnical Conference, 25-28 Mar 2012, Tunisia
Abstract:
An experimental investigation of the thermocapillary motion of two bubbles will be performed aboard a Chinese returnable satellite. The experiment will study the migration of bubbles caused by the thermocapillary effect in a microgravity environment, and the interaction between two bubbles. A bubble is driven by the thermocapillary stress on its surface, which arises from the variation of surface tension with temperature. The interaction between two bubbles becomes significant as the separation distance between them decreases, so that bubble interaction must be taken into account. Recently, the problem has been treated with the method of successive reflections, and accurate migration velocities of two arbitrarily oriented bubbles were derived in the limit of small Marangoni and Reynolds numbers. Numerical results show that the interaction significantly influences the thermocapillary migration velocities as one bubble approaches another. However, experimental validation of these theoretical results is lacking. The experimental facility is therefore designed for repeated experiments. A cone-shaped top cover is used to expel the bubbles from the cell after each experiment; however, the cone-shaped cover causes temperature non-uniformity in horizontal planes across the cell. A metal plate with multiple holes is therefore fixed under the top cover: its high thermal conductivity keeps the temperature distribution on the plate uniform, while the bubbles can still pass through it. In the system, the two bubbles are injected into the test cell by two cylinders, and the bubble sizes are controlled by two stepper motors. A key problem is detaching a bubble from the injection nozzle in microgravity; thus two additional devices injecting mother liquid are used to push the bubbles off. The working principle of the mother-liquid injection is to directly exploit the pressure difference between the test cell and a reservoir.
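For an isolated bubble, the small-Marangoni, small-Reynolds migration speed that the two-bubble reflections analysis reduces to at large separation is the classical Young-Goldstein-Block result. A minimal sketch, with illustrative fluid properties (not the flight experiment's values):

```python
def ygb_bubble_velocity(sigma_T, grad_T, radius, mu):
    """Young-Goldstein-Block migration speed of an isolated gas bubble
    in the limit of small Marangoni and Reynolds numbers, and of
    negligible gas viscosity and conductivity:

        U = |d(sigma)/dT| * |grad T| * R / (2 * mu)

    sigma_T : surface-tension temperature coefficient (N/m/K)
    grad_T  : imposed temperature gradient magnitude (K/m)
    radius  : bubble radius (m)
    mu      : liquid dynamic viscosity (Pa*s)
    """
    return abs(sigma_T) * grad_T * radius / (2.0 * mu)

# Illustrative, silicone-oil-like numbers only:
U = ygb_bubble_velocity(sigma_T=-8e-5, grad_T=1000.0, radius=1e-3, mu=5e-3)
print(f"single-bubble migration speed ~ {U * 1e3:.1f} mm/s")
```

The two-bubble interaction studied in the experiment appears as separation-dependent corrections to this baseline speed.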
Abstract:
Digital Speckle Correlation Method (DSCM) is a useful tool for whole-field deformation measurement and has in recent years been applied to analyze deformation fields in rock materials. In this paper, a Geo-DSCM system is designed and used to analyze more complicated problems in rock mechanics, such as damage evolution and failure processes. A weighted correlation equation is proposed to improve the accuracy of displacement measurement on heterogeneous deformation fields. In addition, a data acquisition system is described that can synchronize with the test machine and capture speckle images at various speeds during an experiment. To verify the Geo-DSCM system, the failure process of a borehole rock structure is inspected and the evolution of deformation localization is analyzed. It is shown that deformation localization generally initiates in the vulnerable area of the rock structure but may develop in a very complicated way.
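The abstract does not give the proposed weighted correlation equation; one common form of weighting in subset matching is a zero-normalized cross-correlation with a Gaussian weight map, sketched below as an assumed illustration (not the paper's actual equation).

```python
import numpy as np

def weighted_zncc(f, g, w):
    """Weighted zero-normalized cross-correlation between two equally
    sized image subsets f and g, with weight map w (here a Gaussian
    centred on the subset). Returns 1.0 for a perfect match."""
    wf = np.sum(w * f) / np.sum(w)          # weighted mean of f
    wg = np.sum(w * g) / np.sum(w)          # weighted mean of g
    df, dg = f - wf, g - wg
    num = np.sum(w * df * dg)
    den = np.sqrt(np.sum(w * df**2) * np.sum(w * dg**2))
    return num / den

n = 21                                       # subset size in pixels
y, x = np.mgrid[-n//2 + 1:n//2 + 1, -n//2 + 1:n//2 + 1]
w = np.exp(-(x**2 + y**2) / (2 * 5.0**2))    # Gaussian weight, sigma = 5 px
f = np.random.default_rng(0).random((n, n))  # synthetic speckle subset
print(weighted_zncc(f, f, w))                # identical subsets -> 1.0
```

Down-weighting pixels far from the subset centre reduces the bias that a heterogeneous deformation field introduces into the matched displacement.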
Abstract:
For sign languages used by deaf communities, linguistic corpora have until recently been unavailable, due to the lack of a writing system and a written culture in these communities, and the very recent advent of digital video. Recent improvements in video and computer technology have now made larger sign language datasets possible; however, large sign language datasets that are fully machine-readable are still elusive. This is due to two challenges. 1. Inconsistencies that arise when signs are annotated by means of spoken/written language. 2. The fact that many parts of signed interaction are not necessarily fully composed of lexical signs (equivalent of words), instead consisting of constructions that are less conventionalised. As sign language corpus building progresses, the potential for some standards in annotation is beginning to emerge. But before this project, there were no attempts to standardise these practices across corpora, which is required to be able to compare data crosslinguistically. This project thus had the following aims: 1. To develop annotation standards for glosses (lexical/word level) 2. To test their reliability and validity 3. To improve current software tools that facilitate a reliable workflow Overall the project aimed not only to set a standard for the whole field of sign language studies throughout the world but also to make significant advances toward two of the world’s largest machine-readable datasets for sign languages – specifically the BSL Corpus (British Sign Language, http://bslcorpusproject.org) and the Corpus NGT (Sign Language of the Netherlands, http://www.ru.nl/corpusngt).
Abstract:
179 p.
Abstract:
Micro-fabrication technology has substantial potential for identifying molecular markers expressed on the surfaces of tissue cells and viruses. Several conceptual prototypes have shown that cells bearing such markers can be captured by their antibodies immobilized on microchannel substrates, while unbound cells are flushed out by a driven flow. The feasibility and reliability of such a microfluidic-based assay, however, remain to be tested further. In the current work, we developed a microfluidic-based system consisting of a microfluidic chip, an image-grabbing unit, data acquisition and analysis software, and a supporting base. Specific binding of CD59-expressing or BSA-coupled human red blood cells (RBCs) to anti-CD59 or anti-BSA antibody-immobilized chip surfaces was quantified by capture efficiency and by the fraction of bound cells. The impacts of flow rate, cell concentration, antibody concentration and site density were tested systematically. The measured data indicated that the assay was robust, which was further confirmed by capture efficiencies measured in an independent ELISA-based cell binding assay. These results demonstrate that the system provides a new platform to quantify cellular surface markers effectively, promoting potential applications in both biological studies and clinical diagnosis.
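The abstract's two readouts reduce to simple ratios of cell counts; the exact definitions used in the paper are not stated, so the sketch below shows the most common ones as an assumption.

```python
def capture_efficiency(n_bound, n_injected):
    """Fraction of all injected cells retained on the antibody-coated
    surface after washing (one common definition; the paper's may differ)."""
    return n_bound / n_injected if n_injected else 0.0

def bound_fraction(n_bound, n_observed_in_field):
    """Fraction of cells in the observed field that are bound rather
    than moving with the flow."""
    return n_bound / n_observed_in_field if n_observed_in_field else 0.0

print(capture_efficiency(137, 500))   # -> 0.274
print(bound_fraction(137, 180))
```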
Abstract:
This study addresses the problem of obtaining reliable velocities and displacements from accelerograms, a concern which often arises in earthquake engineering. A closed-form acceleration expression with random parameters is developed to test any strong-motion accelerogram processing method. Integration of this analytical time history yields the exact velocities, displacements and Fourier spectra. Noise and truncation can also be added. A two-step testing procedure is proposed and the original Volume II routine is used as an illustration. The main sources of error are identified and discussed. Although these errors may be reduced, it is impossible to extract the true time histories from an analog or digital accelerogram because of the uncertain noise level and missing data. Based on these uncertainties, a probabilistic approach is proposed as a new accelerogram processing method. A most probable record is presented as well as a reliability interval which reflects the level of error-uncertainty introduced by the recording and digitization process. The data is processed in the frequency domain, under assumptions governing either the initial value or the temporal mean of the time histories. This new processing approach is tested on synthetic records. It induces little error and the digitization noise is adequately bounded. Filtering is intended to be kept to a minimum and two optimal error-reduction methods are proposed. The "noise filters" reduce the noise level at each harmonic of the spectrum as a function of the signal-to-noise ratio. However, the correction at low frequencies is not sufficient to significantly reduce the drifts in the integrated time histories. The "spectral substitution method" uses optimization techniques to fit spectral models of near-field, far-field or structural motions to the amplitude spectrum of the measured data. 
The extremes of the spectrum of the recorded data where noise and error prevail are then partly altered, but not removed, and statistical criteria provide the choice of the appropriate cutoff frequencies. This correction method has been applied to existing strong-motion far-field, near-field and structural data with promising results. Since this correction method maintains the whole frequency range of the record, it should prove to be very useful in studying the long-period dynamics of local geology and structures.
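The "noise filters" described above scale each harmonic by a gain that depends on its signal-to-noise ratio. The thesis' exact gain function is not given in the abstract; a Wiener-type gain H = SNR/(1 + SNR) is a standard choice and is used in the sketch below on a synthetic record with an idealized per-harmonic SNR estimate.

```python
import numpy as np

def noise_filter(record, snr):
    """Attenuate each harmonic of `record` by the Wiener-type gain
    SNR/(1+SNR). `snr` holds per-harmonic SNR estimates with the same
    length as the rfft of `record`. (Illustrative filter form only;
    the thesis' definition may differ.)"""
    spec = np.fft.rfft(record)
    gain = snr / (1.0 + snr)
    return np.fft.irfft(spec * gain, n=len(record))

# Synthetic accelerogram: a 1 Hz sine plus white digitization noise.
t = np.linspace(0.0, 10.0, 1000, endpoint=False)
clean = np.sin(2 * np.pi * 1.0 * t)
noisy = clean + 0.1 * np.random.default_rng(1).standard_normal(t.size)

# Idealized per-harmonic SNR (known signal spectrum / white-noise power):
snr = np.abs(np.fft.rfft(clean))**2 / (0.1**2 * t.size)
filtered = noise_filter(noisy, snr)
print("rms error before:", np.sqrt(np.mean((noisy - clean)**2)))
print("rms error after: ", np.sqrt(np.mean((filtered - clean)**2)))
```

In real processing the SNR must itself be estimated, which is exactly why the abstract notes that the low-frequency correction remains insufficient to remove drifts from the integrated time histories.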
Abstract:
This is a two-part thesis concerning the motion of a test particle in a bath. In part one we use an expansion of the operator PL e^{it(1-P)L} LP to shape the Zwanzig equation into a generalized Fokker-Planck equation which involves a diffusion tensor depending on the test particle's momentum and on time.
In part two the resultant equation is studied in some detail for the case of test particle motion in a weakly coupled Lorentz Gas. The diffusion tensor for this system is considered. Some of its properties are calculated; it is computed explicitly for the case of a Gaussian potential of interaction.
The equation for the test particle distribution function can be put into the form of an inhomogeneous Schroedinger equation. The term corresponding to the potential energy in the Schroedinger equation is considered. Its structure is studied, and some of its simplest features are used to find the Green's function in the limiting situations of low density and long time.
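For orientation, a generalized Fokker-Planck equation with a momentum- and time-dependent diffusion tensor has the generic form below; the precise kernel in the thesis follows from the projection-operator expansion and is not reproduced here.

```latex
% Generic form only; D_{ij}(\mathbf{p},t) is the momentum- and
% time-dependent diffusion tensor, beta the inverse bath temperature,
% m the test-particle mass. Summation over repeated indices.
\frac{\partial f(\mathbf{p},t)}{\partial t}
  = \frac{\partial}{\partial p_i}
    \left[ D_{ij}(\mathbf{p},t)
      \left( \frac{\partial f}{\partial p_j}
           + \beta \, \frac{p_j}{m} \, f \right) \right]
```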
Abstract:
Successful management has been defined as the art of spending money wisely and well. Profits may not be the be-all and end-all of business, but they are certainly the test of practicality. Everything worthwhile should pay for itself. One proposal is no better than another, except as in the working-out it yields better results.
Abstract:
We are at the cusp of a historic transformation of both communication system and electricity system. This creates challenges as well as opportunities for the study of networked systems. Problems of these systems typically involve a huge number of end points that require intelligent coordination in a distributed manner. In this thesis, we develop models, theories, and scalable distributed optimization and control algorithms to overcome these challenges.
This thesis focuses on two specific areas: multi-path TCP (Transmission Control Protocol) and electricity distribution system operation and control. Multi-path TCP (MP-TCP) is a TCP extension that allows a single data stream to be split across multiple paths. MP-TCP has the potential to greatly improve reliability as well as efficiency of communication devices. We propose a fluid model for a large class of MP-TCP algorithms and identify design criteria that guarantee the existence, uniqueness, and stability of system equilibrium. We clarify how algorithm parameters impact TCP-friendliness, responsiveness, and window oscillation and demonstrate an inevitable tradeoff among these properties. We discuss the implications of these properties on the behavior of existing algorithms and motivate a new algorithm Balia (balanced linked adaptation) which generalizes existing algorithms and strikes a good balance among TCP-friendliness, responsiveness, and window oscillation. We have implemented Balia in the Linux kernel. We use our prototype to compare the new proposed algorithm Balia with existing MP-TCP algorithms.
Our second focus is on designing computationally efficient algorithms for electricity distribution system operation and control. First, we develop efficient algorithms for feeder reconfiguration in distribution networks. The feeder reconfiguration problem chooses the on/off status of the switches in a distribution network in order to minimize a certain cost such as power loss. It is a mixed integer nonlinear program and hence hard to solve. We propose a heuristic algorithm based on the recently developed convex relaxation of the optimal power flow problem. The algorithm is efficient and successfully computes an optimal configuration on all networks that we have tested. Moreover, we prove that the algorithm solves the feeder reconfiguration problem optimally under certain conditions. We also propose an even more efficient algorithm, which incurs a loss in optimality of less than 3% on the test networks.
Second, we develop efficient distributed algorithms that solve the optimal power flow (OPF) problem on distribution networks. The OPF problem determines a network operating point that minimizes a certain objective such as generation cost or power loss. Traditionally OPF is solved in a centralized manner. With increasing penetration of volatile renewable energy resources in distribution systems, we need faster and distributed solutions for real-time feedback control. This is difficult because the power flow equations are nonlinear and Kirchhoff's laws are global. We propose solutions for both balanced and unbalanced radial distribution networks. They exploit recent results showing that a globally optimal solution of OPF over a radial network can be obtained through a second-order cone program (SOCP) or semidefinite program (SDP) relaxation. Our distributed algorithms are based on the alternating direction method of multipliers (ADMM), but unlike standard ADMM-based distributed OPF algorithms that solve optimization subproblems using iterative methods, the proposed solutions exploit the problem structure to greatly reduce the computation time. Specifically, for balanced networks, our decomposition allows us to derive closed-form solutions for these subproblems, which speeds up convergence by 1000x in simulations. For unbalanced networks, the subproblems reduce to either closed-form solutions or eigenvalue problems whose size remains constant as the network scales up, and computation time is reduced by 100x compared with iterative methods.
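The speedup from closed-form ADMM subproblems can be illustrated on a toy problem. The sketch below is NOT the thesis' OPF decomposition; it is a generic three-agent consensus problem, minimize Σᵢ (xᵢ − aᵢ)² subject to xᵢ = z, where every ADMM step has a closed form, so no inner iterative solver is needed.

```python
import numpy as np

# Generic ADMM consensus example with closed-form updates (illustrative
# only, not the thesis' OPF decomposition):
#   minimize sum_i (x_i - a_i)^2   subject to  x_i = z for all i.
a = np.array([1.0, 4.0, 7.0])   # local data held by three "agents"
rho = 1.0                       # ADMM penalty parameter
x, z, u = np.zeros(3), 0.0, np.zeros(3)

for _ in range(100):
    # Local update: argmin_x (x-a_i)^2 + (rho/2)(x - z + u_i)^2,
    # solved in closed form by setting the derivative to zero.
    x = (2 * a + rho * (z - u)) / (2 + rho)
    # Consensus update: average of local estimates plus scaled duals.
    z = np.mean(x + u)
    # Dual ascent on the consensus constraint.
    u = u + x - z

print(round(z, 6))   # -> 4.0, the mean of a
```

Each iteration is a handful of vector operations; in the thesis the analogous per-bus subproblems of OPF admit closed forms (or small fixed-size eigenvalue problems), which is the source of the reported 100x-1000x speedups over subproblems solved iteratively.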
Abstract:
In the present paper, we propose a novel method for measuring the even aberrations of lithographic projection optics using optimized phase-shifting marks on the test mask. The line/space ratio of the phase-shifting marks is optimized to maximize the sensitivities of the Zernike coefficients corresponding to even aberrations. Spherical aberration and astigmatism can be calculated from the focus shifts of phase-shifting gratings oriented at 0, 45, 90 and 135 degrees under multiple illumination settings. PROLITH simulation results show that the measurement accuracy of spherical aberration and astigmatism increases markedly after optimization of the measurement mark. (C) 2008 Elsevier B.V. All rights reserved.
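Retrieving the coefficients from the measured focus shifts amounts to a small linear least-squares problem once the focus shift of each grating orientation is modeled as linear in the even-aberration Zernike coefficients. The sensitivity matrix below is HYPOTHETICAL (in practice it would come from lithography simulation, e.g. PROLITH); only the solve structure is illustrated.

```python
import numpy as np

# Rows: grating orientations 0, 45, 90, 135 degrees.
# Columns: sensitivities of the focus shift to [Z5, Z6, Z9]
# (astigmatism x2, spherical). Values are invented for illustration.
S = np.array([[ 1.0,  0.0, 0.8],    # 0 deg
              [ 0.0,  1.0, 0.8],    # 45 deg
              [-1.0,  0.0, 0.8],    # 90 deg
              [ 0.0, -1.0, 0.8]])   # 135 deg

z_true = np.array([0.02, -0.01, 0.03])   # coefficients in waves (made up)
focus_shifts = S @ z_true                # synthetic "measured" focus shifts

# Least-squares retrieval of the Zernike coefficients:
z_est, *_ = np.linalg.lstsq(S, focus_shifts, rcond=None)
print(z_est)   # ~ [0.02, -0.01, 0.03]
```

The sign pattern of the orientation rows is what separates the two astigmatism terms from spherical aberration, which shifts focus equally for all orientations.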
Abstract:
Brazil's wind-power potential, considering firm wind and economically viable exploitation, is 143 GW. This is equivalent to twice the country's entire installed generation capacity. In Brazil, wind energy has a seasonality complementary to hydroelectric power, because the periods of best wind conditions coincide with those of lowest reservoir capacity. The project developed in this work arose from a public call by FINEP, under the auspices of the recently created CEPER. An investigative character of original scientific contribution was incorporated into the project, resulting in an innovative technology product for low-power wind turbines. Among the project's objectives, we highlight the experimental evaluation of wind turbines of 5000 W rated power. More specifically, the general objective includes structural analysis, aerodynamic analysis, and feasibility analysis of new materials to be employed. For each of the different areas of knowledge composing the project, the most appropriate methodology was adopted. For the aerodynamic analysis, a preliminary numerical simulation was carried out, followed by experimental tests in a wind tunnel. The procedures adopted are described in Chapter 3. Chapter 4 is devoted to the electrical tests. In this stage, a test bench was developed to obtain the specific characteristics of the base machines, such as power curves, electrical efficiency, analysis of mechanical and electrical losses, and heating. The chapter ends with a critical analysis of the values obtained. Field tests of the complete assembly were carried out. The 5 kW wind turbine is currently in operation, instrumented and equipped with a data acquisition system to consolidate the reliability tests.
The field tests are taking place in the city of Campos, RJ, and covered the following dimensions of analysis: efficiency tests to determine the power curve, noise levels, and operation of safety devices. The expected results of the project were achieved, consolidating the design of a 5000 W wind turbine.
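Determining a power curve from field data is normally done by the method of bins in the spirit of IEC 61400-12-1: measured power is averaged within narrow wind-speed bins. The sketch below uses synthetic data, not the Campos field measurements, and a simplified cubic power model.

```python
import numpy as np

def power_curve(wind, power, bin_width=0.5):
    """Method-of-bins power curve: mean wind speed and mean power per
    0.5 m/s wind-speed bin (IEC 61400-12-1 style, simplified)."""
    edges = np.arange(0.0, wind.max() + bin_width, bin_width)
    idx = np.digitize(wind, edges)
    centers, means = [], []
    for b in np.unique(idx):
        sel = idx == b
        centers.append(wind[sel].mean())
        means.append(power[sel].mean())
    return np.array(centers), np.array(means)

# Synthetic 10-minute averages: cubic power law capped at 5 kW rating,
# plus measurement noise (illustrative numbers only).
rng = np.random.default_rng(2)
wind = rng.uniform(3.0, 12.0, 2000)                    # m/s
power = np.minimum(5000.0, 8.0 * wind**3) \
        + rng.normal(0.0, 100.0, wind.size)            # W
v, p = power_curve(wind, power)
print(np.round(v, 1))
print(np.round(p, 0))
```

Binning averages out turbulence and sensor noise, so the curve rises with wind speed and flattens at the 5000 W rating.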
Abstract:
The parameters a and b of the length-weight relationship of the form W = a·L^b were estimated for 57 fish species sampled in 1997 in the Sao Sebastiao Channel and shelf system, Sao Paulo, Brazil. The b values ranged from 2.746 to 3.617. Student's t-test revealed that most (44) species had b values significantly different from 3. A normal distribution of the calculated LWR exponents (b) was obtained.
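The standard estimation procedure behind such results is linear regression on log-transformed data, with a t-test of H0: b = 3 (isometric growth). The sketch below runs it on synthetic length-weight data (not the Sao Sebastiao samples).

```python
import numpy as np

# Synthetic sample: W = a * L**b with multiplicative lognormal error.
rng = np.random.default_rng(3)
L = rng.uniform(5.0, 40.0, 120)                          # lengths (cm)
W = 0.01 * L**3.2 * np.exp(rng.normal(0.0, 0.1, L.size)) # weights (g)

# Fit log(W) = log(a) + b*log(L) by ordinary least squares.
x, y = np.log(L), np.log(W)
n = x.size
b, log_a = np.polyfit(x, y, 1)

# Student's t-statistic for H0: b = 3, with n-2 degrees of freedom.
resid = y - (b * x + log_a)
se_b = np.sqrt(np.sum(resid**2) / (n - 2) / np.sum((x - x.mean())**2))
t = (b - 3.0) / se_b
print(f"a = {np.exp(log_a):.4f}   b = {b:.3f}   t = {t:.2f}")
```

A |t| exceeding the critical value for n-2 degrees of freedom is what classifies a species' b as significantly different from 3, as found for 44 of the 57 species.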