906 results for Process capability analysis


Relevance:

90.00%

Publisher:

Abstract:

This communication describes, for the first time, the growth of SnO2 nanoribbons by a controlled carbothermal reduction process. Analysis of transmission electron microscopy images revealed that these nanoribbons have a well-defined shape, with a typical width in the range of 70-300 nm. In general, the nanostructured ribbons were more than 100 μm in length. The results reported here support the hypothesis that this ribbon-like nanostructured material grows by a vapor-solid process. This study introduces two hypotheses to explain the SnO2 nanoribbon growth process.

Relevance:

90.00%

Publisher:

Abstract:

This work considers some aspects of the chemistry involved in the preparation and characterization of silicon oxide functionalized by the sol-gel process. We studied the synthesis and measured the properties of silicon oxide functionalized with 3-chloropropyl groups through a sol-gel process, using thermogravimetric analysis, infrared spectroscopy, and elemental analysis. The samples were prepared with the following tetraethylorthosilicate (TEOS):3-chloropropyltrimethoxysilane molar ratios: 1:0, 1:1, 2:1, 3:1 and 4:1. The thermogravimetric data for the resulting materials established the 'minimum formulae' 2:0, 3:1, 4:1, 7:1 and 11:1, respectively. As expected, the relative amount of water is inversely proportional to the proportion of propyl groups. Infrared data show Si-C and -CH2- vibration modes at 1250 to 1280 and 2920 to 2940 cm⁻¹, respectively. Thermogravimetric data and infrared spectra showed that the inorganic polymers contained organic polymers. (C) 1999 Elsevier B.V. All rights reserved.

Relevance:

90.00%

Publisher:

Abstract:

The economic viability of three cogeneration schemes as supply alternatives for a hypothetical industrial process has been studied. A cost appropriation method based on Valero's studies (1986) has been used. This method enables the determination of the exergetic flows, the Second Law efficiency of the equipment, and the monetary costs of the products acquired by the industrial process (steam and electrical energy). The criterion adopted for the selection is the global cost of the products supplied to the industrial process, considering Brazilian conditions.
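
The cost appropriation can be made concrete with a small sketch. The plant layout, flow values, fuel price and the auxiliary costing rule below are all hypothetical; only the mechanics (unit exergetic costs obtained from linear cost balances, in the spirit of Valero's theory of exergetic cost) follow the abstract.

```python
import numpy as np

# Minimal exergetic cost allocation for a hypothetical boiler plus
# back-pressure turbine plant. All flows and prices are illustrative
# assumptions, not data from the study.
B_FUEL, B_STEAM_HP = 100.0, 50.0   # exergy flows (MW): fuel, HP steam
W_ELEC, B_STEAM_PR = 15.0, 30.0    # electricity, process steam (MW)
C_FUEL = 4.0                       # fuel cost (US$/GJ of exergy)

# Second Law efficiencies of the two devices:
print(f"boiler  eta_II = {B_STEAM_HP / B_FUEL:.0%}")
print(f"turbine eta_II = {(W_ELEC + B_STEAM_PR) / B_STEAM_HP:.0%}")

# Unknown unit costs x = [c_hp, c_pr, c_elec] (US$/GJ), from:
#   boiler cost balance:         c_hp*B_hp = c_fuel*B_fuel
#   auxiliary (extraction) rule: c_pr = c_hp
#   turbine cost balance:        c_elec*W + c_pr*B_pr = c_hp*B_hp
A = np.array([
    [B_STEAM_HP,   0.0,        0.0   ],
    [1.0,         -1.0,        0.0   ],
    [-B_STEAM_HP,  B_STEAM_PR, W_ELEC],
])
b = np.array([C_FUEL * B_FUEL, 0.0, 0.0])
c_hp, c_pr, c_elec = np.linalg.solve(A, b)
print(f"steam cost:       {c_pr:.2f} US$/GJ")    # 8.00
print(f"electricity cost: {c_elec:.2f} US$/GJ")  # 10.67
```

Capital-cost terms would enter the balances as additive rates on the left-hand side; the selection criterion in the abstract then compares the resulting global product costs across schemes.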

Relevance:

90.00%

Publisher:

Abstract:

The objective of this study was to analyze the production process and supply control in order to identify possible gaps and develop a method for managing supplies. The relevance of this research lies in the benefits that can be obtained by identifying the problems of supply control. The research method used was the case study, grounded on the tripod of semi-structured interviews, on-site observation, and document analysis. This methodology was well suited because the evidence from these three sources could be analyzed and cross-checked. The proposal derived from the theoretical framework, together with the complementary actions suggested here, offers the opportunity to make the process more productive and profitable. This work made it possible to observe the weaknesses in managing the supply chain and the points at which the work should be improved. It also allowed some scientific models to be applied to the company under study in order to improve supply management. © 2011 IEEE.

Relevance:

90.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

90.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

90.00%

Publisher:

Abstract:

Using oxygen instead of air in a combustion process is currently being widely discussed as an option to reduce CO2 emissions. One of the possibilities is to maintain the combustion reaction at the same energy release level as burning with air, which reduces fuel consumption and the emission rates of CO2. A thermal simulation was made for metal reheating furnaces, which operate at temperatures in the range of 1150-1250 °C, using natural gas with a 5% excess of oxygen and maintaining fixed values for pressure and combustion temperature. The theoretical results show that it is possible to reduce fuel consumption, and this reduction depends on the amount of heat that can be recovered during the air pre-heating process. The analysis was further conducted considering the 2012 costs of natural gas and oxygen in Brazil. The use of oxygen proved to be economically viable for large furnaces that operate with conventional heat recovery systems (those that provide pre-heated air at temperatures near 400 °C). (C) 2014 Elsevier Ltd. All rights reserved.
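
The fuel-saving mechanism can be illustrated with a rough flue-gas energy balance, not taken from the paper: burning in oxygen removes the nitrogen that would otherwise leave the stack carrying sensible heat at furnace temperature. All property values below (mean molar heat capacities, methane LHV) are approximate textbook numbers used only for this sketch, which ignores heat recovery.

```python
# Rough energy balance for stoichiometric methane combustion with the
# flue gas exhausting at a 1200 °C furnace temperature. Mean molar heat
# capacities (J/mol/K) over 25-1200 °C are approximate assumptions.
LHV_CH4 = 802_000.0          # J per mol CH4 (lower heating value)
T_FLUE, T_REF = 1200.0, 25.0
CP = {"CO2": 56.0, "H2O": 43.0, "N2": 33.0, "O2": 35.0}

def available_heat_fraction(flue_moles):
    """Fraction of the fuel LHV left for the load after the flue gas
    leaves at furnace temperature (no heat recovery)."""
    loss = sum(n * CP[gas] for gas, n in flue_moles.items()) * (T_FLUE - T_REF)
    return 1.0 - loss / LHV_CH4

# CH4 + 2 O2 -> CO2 + 2 H2O; air carries 3.76 mol N2 per mol O2.
air_fired = {"CO2": 1.0, "H2O": 2.0, "N2": 2 * 3.76}
oxy_fired = {"CO2": 1.0, "H2O": 2.0, "O2": 0.1}  # 5% excess oxygen

f_air = available_heat_fraction(air_fired)
f_oxy = available_heat_fraction(oxy_fired)
print(f"available heat, air-fired: {f_air:.0%}")           # ~43%
print(f"available heat, oxy-fired: {f_oxy:.0%}")           # ~79%
# Same useful heat delivered to the load with less fuel:
print(f"fuel saving from oxy-firing: {1 - f_air / f_oxy:.0%}")  # ~46%
```

Pre-heating the combustion air returns part of the nitrogen's sensible heat to the flame, which is why, as the abstract notes, the attainable saving shrinks for furnaces that already operate with effective heat recovery.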

Relevance:

90.00%

Publisher:

Abstract:

The common practice in industry is to perform flutter analyses considering the generalized stiffness and mass matrices obtained from the finite element method (FEM) and aerodynamic generalized force matrices obtained from a panel method, such as the doublet lattice method. These analyses are often reperformed if significant differences are found between the structural frequencies and damping ratios determined from ground vibration tests and those from the FEM. This unavoidable rework can result in a lengthy and costly analysis process during aircraft development. In this context, this paper presents an approach to perform flutter analysis including uncertainties in natural frequencies and damping ratios. The main goal is to assure the nominal system's stability considering these modal parameters varying within a limited range. The aeroelastic system is written as an affine parameter model, and robust stability is verified by searching for a Lyapunov function through linear matrix inequalities and convex optimization.
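
A minimal sketch of this kind of robust stability test, under stated assumptions: quadratic stability of a polytopic model is certified by finding one Lyapunov matrix P that works at every vertex, posed as LMIs and solved with the cvxpy convex-optimization package. The 2x2 vertex matrices are hypothetical stand-ins, not the paper's aeroelastic system.

```python
import numpy as np
import cvxpy as cp

# Hypothetical vertex state matrices of an affine parameter model
# A(p) = sum_i p_i * A_i with p in the unit simplex, standing in for
# an aeroelastic mode with uncertain frequency and damping.
A1 = np.array([[0.0, 1.0], [-4.0, -0.4]])   # low-damping vertex
A2 = np.array([[0.0, 1.0], [-6.0, -1.0]])   # high-damping vertex

n = 2
P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n)]
# V(x) = x' P x must decrease at every vertex; by convexity of the
# LMIs in A, this certifies the whole parameter range at once.
for A in (A1, A2):
    constraints.append(A.T @ P + P @ A << -eps * np.eye(n))

prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print("robustly stable" if prob.status == cp.OPTIMAL else "no common P found")
```

Feasibility of the LMIs is a sufficient (possibly conservative) condition: a single quadratic Lyapunov function must cover every admissible combination of frequencies and damping ratios.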

Relevance:

90.00%

Publisher:

Abstract:

The development of the digital electronics market is founded on the continuous reduction of transistor size, to reduce area, power and cost and to increase the computational performance of integrated circuits. This trend, known as technology scaling, is approaching the nanometer scale. The lithographic process in the manufacturing stage is becoming more uncertain as transistor sizes scale down, resulting in larger parameter variation in future technology generations. Furthermore, the exponential relationship between leakage current and threshold voltage is limiting the scaling of the threshold and supply voltages, increasing power density and creating local thermal issues such as hot spots, thermal runaway and thermal cycles. In addition, the introduction of new materials and the smaller device dimensions are reducing transistor robustness, which, combined with high temperatures and frequent thermal cycles, speeds up wear-out processes. These effects are no longer addressable only at the process level. Consequently, deep sub-micron devices will require solutions spanning several design levels, such as system and logic, and new approaches called Design For Manufacturability (DFM) and Design For Reliability. The purpose of these approaches is to bring awareness of device reliability and manufacturability into the early design stages, in order to introduce logic and systems able to cope with yield and reliability loss. The ITRS roadmap suggests the following research steps to integrate design for manufacturability and reliability into the standard automated CAD design flow: i) new analysis algorithms able to predict the system's thermal behavior and its impact on power and speed performance; ii) high-level wear-out models able to predict the mean time to failure (MTTF) of the system; iii) statistical performance analysis able to predict the impact of process variation, both random and systematic. The new analysis tools have to be developed alongside new logic and system strategies to cope with the future challenges, for instance: i) thermal management strategies that increase the reliability and lifetime of devices by acting on tunable parameters, such as supply voltage or body bias; ii) error detection logic able to interact with compensation techniques such as Adaptive Supply Voltage (ASV), Adaptive Body Bias (ABB) and error recovery, in order to increase yield and reliability; iii) architectures that are fundamentally resistant to variability, including locally asynchronous designs, redundancy, and error-correcting signal encodings (ECC). The literature already features works addressing the prediction of the MTTF, papers focusing on thermal management in general-purpose chips, and publications on statistical performance analysis. In my PhD research activity, I investigated the need for thermal management in future embedded low-power Network-on-Chip (NoC) devices. I developed a thermal analysis library, which has been integrated into a NoC cycle-accurate simulator and into an FPGA-based NoC simulator. The results have shown that an accurate layout distribution can avoid the onset of hot spots in a NoC chip. Furthermore, the application of thermal management can reduce the temperature and the number of thermal cycles, increasing system reliability. The thesis therefore advocates the need to integrate thermal analysis into the first design stages of embedded NoC design.

Later on, I focused my research on the development of a statistical process-variation analysis tool able to address both random and systematic variations. The tool was used to analyze the impact of self-timed asynchronous logic stages in an embedded microprocessor. As a result, we confirmed the capability of self-timed logic to increase manufacturability and reliability. Furthermore, we used the tool to investigate the suitability of low-swing techniques for NoC system communication under process variations. In this case, we discovered the superior robustness of low-swing links to systematic process variation, and they show a good response to compensation techniques such as ASV and ABB. Hence low-swing links are a good alternative to standard CMOS communication in terms of power, speed, reliability and manufacturability. In summary, my work proves the advantage of integrating a statistical process-variation analysis tool into the first stages of the design flow.
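
As a toy illustration of such statistical performance analysis (the thesis tool and its models are not reproduced; the alpha-power delay law and every parameter below are assumptions), a Monte Carlo sweep can propagate random and systematic threshold-voltage variation to a timing-yield estimate:

```python
import numpy as np

rng = np.random.default_rng(0)

# Alpha-power-law gate delay: t ~ K * Vdd / (Vdd - Vth)**alpha.
# All parameter values are illustrative assumptions.
VDD, ALPHA, K = 1.0, 1.3, 1.0
VTH_NOM = 0.30           # nominal threshold voltage (V)
SIGMA_RANDOM = 0.02      # per-gate random variation (V)
SIGMA_SYSTEMATIC = 0.01  # die-to-die systematic shift (V)
N_DIES, N_GATES = 10_000, 32   # gates on the critical path

def delay(vth):
    return K * VDD / (VDD - vth) ** ALPHA

# The systematic shift is shared by all gates on a die; the random
# component is drawn independently per gate.
sys_shift = rng.normal(0.0, SIGMA_SYSTEMATIC, size=(N_DIES, 1))
vth = VTH_NOM + sys_shift + rng.normal(0.0, SIGMA_RANDOM, size=(N_DIES, N_GATES))
path_delay = delay(vth).sum(axis=1)

t_spec = 1.10 * delay(np.full(N_GATES, VTH_NOM)).sum()  # 10% timing margin
print(f"mean path delay : {path_delay.mean():.3f}")
print(f"timing yield    : {(path_delay <= t_spec).mean():.1%}")
```

The same loop can score compensation knobs such as ASV or ABB by re-evaluating the yield with a shifted VDD or VTH, which is the kind of trade-off the tool described above explores.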

Relevance:

90.00%

Publisher:

Abstract:

The considerable search for synergistic agents in cancer research is motivated by the therapeutic benefits achieved by combining anti-cancer agents. Synergistic agents make it possible to reduce dosage while maintaining or enhancing a desired effect. Other favorable outcomes of synergistic agents include reduced toxicity and the minimization or delay of drug resistance. Dose-response assessment and drug-drug interaction analysis play an important part in the drug discovery process; however, these analyses are often poorly done. This dissertation is an effort to notably improve dose-response assessment and drug-drug interaction analysis. The most commonly used method in published analyses is the Median-Effect Principle/Combination Index method (Chou and Talalay, 1984). This method leads to inefficiency by ignoring important sources of variation inherent in dose-response data and by discarding data points that do not fit the Median-Effect Principle. Previous work has shown that the conventional method yields a high rate of false positives (Boik, Boik, Newman, 2008; Hennessey, Rosner, Bast, Chen, 2010) and, in some cases, low power to detect synergy. There is a great need to improve the current methodology. We developed a Bayesian framework for dose-response modeling and drug-drug interaction analysis. First, we developed a hierarchical meta-regression dose-response model that accounts for various sources of variation and uncertainty and allows one to incorporate knowledge from prior studies into the current analysis, thus offering more efficient and reliable inference. Second, for cases in which parametric dose-response models do not fit the data, we developed a practical and flexible nonparametric regression method for meta-analysis of independently repeated dose-response experiments. Third, and lastly, we developed a method, based on Loewe additivity, that allows one to quantitatively assess the interaction between two agents combined at a fixed dose ratio. The proposed method gives a comprehensive and honest account of the uncertainty within drug interaction assessment. Extensive simulation studies show that the novel methodology improves the screening process for effective/synergistic agents and reduces the incidence of type I errors. We consider an ovarian cancer cell line study that investigates the combined effect of DNA methylation inhibitors and histone deacetylation inhibitors in human ovarian cancer cell lines. The hypothesis is that the combination of DNA methylation inhibitors and histone deacetylation inhibitors will enhance antiproliferative activity in human ovarian cancer cell lines compared to treatment with each inhibitor alone. By applying the proposed Bayesian methodology, in vitro synergy was declared for the DNA methylation inhibitor 5-AZA-2'-deoxycytidine combined with either of the histone deacetylation inhibitors, suberoylanilide hydroxamic acid or trichostatin A, in the cell lines HEY and SKOV3. This suggests potential new epigenetic therapies for cell growth inhibition in ovarian cancer cells.
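
For orientation, the sketch below computes the classical Loewe interaction index for a fixed-ratio combination, assuming median-effect (Hill) dose-response curves with hypothetical parameters; the dissertation's Bayesian treatment of the attendant uncertainty is not reproduced here.

```python
# Hill (median-effect) parameters for two hypothetical agents.
IC50 = {"A": 1.0, "B": 4.0}   # dose producing a 50% effect
M    = {"A": 1.2, "B": 0.9}   # Hill slopes

def effect(drug, dose):
    """Fraction affected under the median-effect equation."""
    return dose**M[drug] / (dose**M[drug] + IC50[drug]**M[drug])

def iso_dose(drug, e):
    """Single-agent dose producing effect e (inverse Hill equation)."""
    return IC50[drug] * (e / (1.0 - e)) ** (1.0 / M[drug])

def loewe_index(dose_a, dose_b, e_observed):
    """Loewe interaction index at the observed combination effect:
    < 1 suggests synergy, = 1 additivity, > 1 antagonism."""
    return (dose_a / iso_dose("A", e_observed)
            + dose_b / iso_dose("B", e_observed))

# Example: a 1:4 fixed-ratio combination with an observed 60% effect.
print(f"interaction index = {loewe_index(0.4, 1.6, 0.60):.2f}")
```

A point estimate like this is exactly what the proposed methodology refuses to take at face value: the index inherits the uncertainty of both fitted curves, so an honest synergy call needs an interval around it.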

Relevance:

90.00%

Publisher:

Abstract:

Over the last decade, adverse events and medical errors have become a main focus of interest for the standards of quality and safety in the U.S. healthcare system (Weinstein & Henderson, 2009). Particularly when a medical error occurs, disclosure of the error and the associated disclosure practices become a focal point of the healthcare process. Patients and family members who have experienced a medical error might be able to provide knowledge and insight on how to improve the disclosure process. However, patients and family members are not typically involved in the disclosure process, so their experiences go unnoticed.

The purpose of this research was to explore how best to include patients and family members in the disclosure process regarding a medical error. The research consisted of 28 qualitative interviews from three stakeholder groups: Hospital Administrators, Clinical Service Providers, and Patients and Family Members. They were asked for their ideas and suggestions on how best to include patients and family members in the disclosure process. Framework Analysis was used to analyze the data and find prevalent themes based on the primary research question. A secondary aim was to index categories created from the interviews that were collected. Data were used from the Texas Disclosure and Compensation Study, with Dr. Eric Thomas as the Principal Investigator; full acknowledgement of access to these data is given to Dr. Thomas.

The themes revealed that each stakeholder group was interested in and open to including patients and family members in the disclosure process, and that the disclosure process should not be a "one-way" avenue. The themes yielded many suggestions on how best to include patients and family members in the disclosure process of a medical error. The secondary aims revealed several ways to assess the ideas and suggestions given by the stakeholders. Overall, acceptability of getting the perspective of patients and family members was the most common theme. Comparison of the stakeholder groups revealed that including patients and family members would be beneficial to improving hospital disclosure practices.

Conclusions included a list of recommendations and measurable, appropriate strategies that could provide hospitals with key stakeholders' insights on how to improve their disclosure process. Sharing patients' and family members' experiences with healthcare providers can encourage a shift in culture in which patients are valued and active participants in hospital practices. To my knowledge, this research is the first of its kind and moves the disclosure-process conversation forward in a direction of patient and family member inclusion that will assist in improving disclosure practices. Future research should implement and evaluate the success of the various inclusion strategies.

Relevance:

90.00%

Publisher:

Abstract:

This thesis comprises a reflective study of the dependence relationship between creation and memory through the analysis of the last work of the sculptor Juan Muñoz: Double Bind (Tate Modern, London, 2001). From this position it is necessary to rethink the analysis of the work, covering the widest possible range of information available beyond the work itself, in order to approach the convergence between memory and creation. The proposed analytical perspective opens the way to new considerations on the relevance of knowledge in the development of the creative process. This analysis should not only contribute to the knowledge of Juan Muñoz's work; it should also bring out the undeniable participation of the past in the present and the need to read the past within it. Amnesia about past times makes it impossible to complete the atlas of images on which a creation rests, blocking knowledge of the origin of its sources of inspiration and the foundations of a given work; this fact limits and distorts its possible interpretations. My intention is to approach an understanding of the way of looking and creating through time, which is memory. Memory plays a crucial role in mental activity and a fundamental one in behaviour and creation. The work is the result of the search for an idea that expresses something the creator cannot express in any other way: the need to express ideas through a language that develops in time and space, a reflection of the being that responds to thought. It is a form of experience in which the paths of the past lie and where the future is set out. Only the creator can access the work from the inside; the observer reaches it from the outside, through his or her own subjectivity. Works are forms of their authors' experience, so communicating the message of that experience entails interpretation. In pursuing the need to know and understand, attempting to explain the meaning of something implies a deliberate appreciation associated with the interpreter's understanding. Works are products that carry a message and contain in their structure the traces of the time lived by their creator. To come close to what an author's position represents, it is necessary not only to look through the work but also to enter the context of its history: to look back, into the depth of the present, so as to be aware of present and future thinking. Walking through Juan Muñoz's installation Double Bind in this way provides a synthesis of his concerns and interests, along with knowledge that is not necessarily immediate, but is relevant and transcendent to the work, its creator and history.

Relevance:

90.00%

Publisher:

Abstract:

Speckle is being used as a characterization tool for the analysis of the dynamics of slow-varying phenomena occurring in biological and industrial samples at the surface or near-surface regions. The retrieved data take the form of a sequence of speckle images. These images contain information about the inner dynamics of the biological or physical process taking place in the sample. Principal component analysis (PCA) is able to split the original data set into a collection of classes related to processes showing different dynamics. In addition, statistical descriptors of speckle images are used to retrieve information on the characteristics of the sample. These statistical descriptors can be calculated in almost real time and provide fast monitoring of the sample. On the other hand, PCA requires a longer computation time, but the results contain more information related to the spatial-temporal patterns associated with the process under analysis. This contribution merges both descriptions and uses PCA as a preprocessing tool to obtain a collection of filtered images, on each of which statistical descriptors are evaluated. The method applies to slow-varying biological and industrial processes.
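
A minimal sketch of the merged pipeline under stated assumptions (synthetic data standing in for a measured speckle sequence): PCA splits the sequence, a filtered image stack is reconstructed from each component, and a cheap statistical descriptor, speckle contrast, is evaluated on each stack.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical speckle sequence: n_frames images of h x w pixels.
rng = np.random.default_rng(1)
n_frames, h, w = 100, 64, 64
frames = rng.random((n_frames, h, w))      # stand-in for measured data

X = frames.reshape(n_frames, -1)           # one row per frame
pca = PCA(n_components=5)
scores = pca.fit_transform(X)              # temporal weight of each component

# Reconstruct a "filtered" sequence from each principal component alone;
# each one isolates a class of processes with its own dynamics.
for k in range(pca.n_components_):
    only_k = np.zeros_like(scores)
    only_k[:, k] = scores[:, k]
    filtered = pca.inverse_transform(only_k).reshape(n_frames, h, w)
    # Cheap statistical descriptor, computable in near real time:
    contrast = filtered.std(axis=(1, 2)) / filtered.mean(axis=(1, 2))
    print(f"component {k}: mean speckle contrast = {contrast.mean():.4f}")
```

In practice the slow PCA step would be run offline on a reference sequence, after which the per-frame descriptors on the filtered stacks remain fast enough for monitoring.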

Relevance:

90.00%

Publisher:

Abstract:

National Highway Traffic Safety Administration, Washington, D.C.

Relevance:

90.00%

Publisher:

Abstract:

In this paper a Hierarchical Analytical Network Process (HANP) model is demonstrated for evaluating alternative technologies for generating electricity from MSW in India. The technological alternatives and evaluation criteria for the HANP study are characterised by reviewing the literature and consulting experts in the field of waste management. Technologies reviewed in the context of India include landfill, anaerobic digestion, incineration, pelletisation and gasification. To investigate the sensitivity of the result, we examine variations in expert opinions and carry out an Analytical Hierarchy Process (AHP) analysis for comparison. We find that anaerobic digestion is the preferred technology for generating electricity from MSW in India. Gasification is indicated as the preferred technology in the AHP model, owing to its exclusion of criteria dependencies, and in the HANP analysis when a high priority is placed on net output and retention time. We conclude that HANP successfully provides a structured framework for recommending which technologies to pursue in India, and the adoption of such tools is critical at a time when key investments in infrastructure are being made. Therefore, the presented methodology is thought to have wider potential for investors, policy makers, researchers and plant developers in India and elsewhere. © 2013 Elsevier Ltd. All rights reserved.
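
To make the AHP/HANP weighting step concrete, the sketch below derives priority weights and Saaty's consistency ratio from one hypothetical pairwise-comparison matrix; the paper's actual expert judgments and criteria network are not reproduced.

```python
import numpy as np

# Hypothetical pairwise comparisons of three criteria on Saaty's 1-9
# scale (e.g. net output, retention time, capital cost). A[i, j] says
# how strongly criterion i is preferred to criterion j; A[j, i] = 1/A[i, j].
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])

n = A.shape[0]
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)            # principal eigenvalue
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                           # priority weights (eigenvector method)

# Saaty's consistency check: CI = (lambda_max - n) / (n - 1),
# CR = CI / RI with random index RI = 0.58 for n = 3; CR < 0.1 passes.
lambda_max = eigvals.real[k]
cr = ((lambda_max - n) / (n - 1)) / 0.58
print("weights:", np.round(w, 3))      # ~[0.64, 0.26, 0.10]
print(f"consistency ratio: {cr:.3f}")
```

HANP extends this by also eliciting comparison matrices for the dependencies among criteria and assembling them into a supermatrix, which is why its ranking can diverge from plain AHP, as reported above.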