953 results for "Reverse engineering processes"
Abstract:
Measurement while drilling (MWD) techniques can provide a useful tool to aid drill and blast engineers in open cut mining. By avoiding time-consuming tasks such as scan-lines and rock sample collection for laboratory tests, MWD techniques can not only save time but also improve the reliability of the blast design by providing the drill and blast engineer with information specifically tailored for use. While most mines use a standard blast pattern and charge per blasthole, based on a single rock factor for the entire bench or blast region, information derived from the MWD parameters can improve the blast design by providing more accurate rock properties for each individual blasthole. From this, decisions can be made on the most appropriate type and amount of explosive charge to place in each blasthole, or on how to optimise the detonation timing between different decks and blastholes. Where real-time calculations are feasible, the system could extend beyond the present blast design and even be used to determine the placement of subsequent holes, moving towards a more appropriate blasthole pattern design such as asymmetrical blasting.
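One widely used way to turn MWD logs into a per-hole rock-strength proxy is Teale's drilling specific energy, which combines thrust, torque, rotation speed and penetration rate. The sketch below is a minimal illustration of that idea only; the function name, unit choices and sample values are assumptions, not taken from the abstract above.

```python
import math

def specific_energy(thrust_n, torque_nm, rpm, penetration_m_per_min, bit_diameter_m):
    """Teale's drilling specific energy (J/m^3 = Pa):
    SE = F/A + 2*pi*N*T/(A*u), with F thrust, A bit cross-section area,
    N rotation rate (rev/s), T torque, u penetration rate (m/s)."""
    area = math.pi * (bit_diameter_m / 2.0) ** 2
    rev_per_s = rpm / 60.0
    u = penetration_m_per_min / 60.0
    return thrust_n / area + (2.0 * math.pi * rev_per_s * torque_nm) / (area * u)

# Harder rock drills slower at the same thrust/torque, so its specific
# energy comes out higher -- the basis for per-hole charge selection.
se_hard = specific_energy(50_000, 4_000, 80, 0.3, 0.2)
se_soft = specific_energy(50_000, 4_000, 80, 0.9, 0.2)
```

Ranking blastholes by such an index is one plausible route from raw MWD parameters to the per-hole charge and timing decisions described above.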
Abstract:
Blasting has been the most frequently used method for rock breakage since black powder was first used to fragment rocks, more than two hundred years ago. This paper is an attempt to reassess standard design techniques used in blasting by providing an alternative approach to blast design. The new approach has been termed asymmetric blasting. Based on providing real-time rock recognition through the capacity of measurement while drilling (MWD) techniques, asymmetric blasting is an approach to deal with rock properties as they occur in nature, i.e., randomly and asymmetrically spatially distributed. It is well accepted that the performance of basic mining operations, such as excavation and crushing, relies on a broken rock mass which has been pre-conditioned by the blast. By pre-conditioned we mean well fragmented, sufficiently loose and with an adequate muckpile profile. These muckpile characteristics affect loading and hauling [1]. The influence of blasting does not end there. Under the Mine to Mill paradigm, blasting has significant leverage on downstream operations such as crushing and milling. There is a body of evidence that blasting affects mineral liberation [2]. Thus, the importance of blasting has increased from simply fragmenting and loosening the rock mass to a broader role that encompasses many aspects of mining and affects the cost of the end product. A new approach is proposed in this paper which facilitates this trend: 'to treat non-homogeneous media (rock mass) in a non-homogeneous manner (an asymmetrical pattern) in order to achieve an optimal result (in terms of muckpile size distribution).' It is postulated that there are no logical reasons (besides the current lack of means to infer rock mass properties in the blind zones of the bench and on-site precedents) for drilling a regular blast pattern over a rock mass that is inherently heterogeneous. Real and theoretical examples of such a method are presented.
Abstract:
Blast fragmentation can have a significant impact on the profitability of a mine. An optimum run of mine (ROM) size distribution is required to maximise the performance of downstream processes. If this fragmentation size distribution can be modelled and controlled, the operation will have made a significant advancement towards improving its performance. Blast fragmentation modelling is an important step in Mine to Mill™ optimisation. It allows the estimation of blast fragmentation distributions for a number of different rock mass, blast geometry, and explosive parameters. These distributions can then be modelled in downstream mining and milling processes to determine the optimum blast design. When a blast hole is detonated, rock breakage occurs in two different stress regions - compressive and tensile. In the first region, compressive stress waves form a 'crushed zone' directly adjacent to the blast hole. The second region, termed the 'cracked zone', occurs outside the crushed zone. The widely used Kuz-Ram model does not recognise these two blast regions. In the Kuz-Ram model the mean fragment size from the blast is approximated and is then used to estimate the remaining size distribution. Experience has shown that this model predicts the coarse end reasonably accurately, but it can significantly underestimate the amount of fines generated. As part of the Australian Mineral Industries Research Association (AMIRA) P483A Mine to Mill™ project, the Two-Component Model (TCM) and Crush Zone Model (CZM), developed by the Julius Kruttschnitt Mineral Research Centre (JKMRC), were compared and evaluated against measured ROM fragmentation distributions. An important criterion for this comparison was the variation of model results from measured ROM in the fine to intermediate section (1-100 mm) of the fragmentation curve. This region of the distribution is important for Mine to Mill™ optimisation.
The comparison of modelled and Split ROM fragmentation distributions has been conducted in harder ores (UCS greater than 80 MPa). Further work involves modelling softer ores. The comparisons will be continued with future site surveys to increase confidence in the comparison of the CZM and TCM to Split results. Stochastic fragmentation modelling will then be conducted to take into account variation of input parameters. A window of possible fragmentation distributions can then be compared to those obtained by Split. Following this work, an improved fragmentation model will be developed in response to these findings.
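For reference, the Kuz-Ram baseline that this abstract critiques combines Kuznetsov's mean-size estimate with a Rosin-Rammler size distribution pinned at that mean. A minimal sketch, assuming the commonly cited form of both equations; the rock factor, bench volume and charge mass shown are illustrative values, not data from the project:

```python
import math

def kuznetsov_mean_size(rock_factor_a, volume_per_hole_m3, charge_kg, rws=100.0):
    """Kuznetsov estimate of the mean fragment size (cm); rws is the
    explosive's relative weight strength (ANFO = 100)."""
    return (rock_factor_a
            * (volume_per_hole_m3 / charge_kg) ** 0.8
            * charge_kg ** (1.0 / 6.0)
            * (115.0 / rws) ** (19.0 / 30.0))

def rosin_rammler_passing(x_cm, x_mean_cm, n):
    """Fraction passing size x for a Rosin-Rammler curve anchored at the mean
    size (passing fraction 0.5 at x = x_mean); n is the uniformity index."""
    return 1.0 - math.exp(-0.693 * (x_cm / x_mean_cm) ** n)

x50 = kuznetsov_mean_size(7.0, 72.0, 50.0)   # illustrative bench numbers
```

Because the whole curve hangs off a single mean size, the fines tail has no independent degrees of freedom, which is consistent with the underestimation of fines noted above and with the motivation for the two-region CZM and TCM models.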
Abstract:
Background: In the presence of dNTPs, intact HIV-1 virions are capable of reverse transcribing at least part of their genome, a process known as natural endogenous reverse transcription (NERT). PCR analysis of virion DNA produced by NERT revealed that the first strand transfer reaction (1stST) was inefficient in intact virions, with minus strand (-) strong stop DNA (ssDNA) copy numbers up to 200 times higher than post-1stST products measured using primers in U3 and U5. This was in marked contrast to the efficiency of 1stST observed in single-round cell infection assays, in which (-) ssDNA and U3-U5 copy numbers were indistinguishable. Objectives: To investigate the reasons for the discrepancy in first strand transfer efficiency between intact cell-free virus and the infection process. Study design: Alterations of both NERT reactions and the conditions of cell infection were used to test whether uncoating and/or entry play a role in the discrepancy in first strand transfer efficiency. Results and Conclusions: The difference in 1stST efficiency could not be attributed simply to viral uncoating, since addition of very low concentrations of detergent to NERT reactions removed the viral envelope without disrupting the reverse transcription complex, and these conditions resulted in no improvement in 1stST efficiency. Virus pseudotyped with surface glycoproteins from either vesicular stomatitis virus or amphotropic murine leukaemia virus also showed low levels of 1stST in low detergent NERT assays and equivalent levels of (-) ssDNA and 1stST in single-round infections of cells, demonstrating that the gp120-mediated infection process did not select for virions capable of carrying out 1stST. These data indicate that a post-entry event or factor may be involved in efficient HIV-1 reverse transcription in vivo. (C) 2002 Elsevier Science B.V. All rights reserved.
Abstract:
There has been a resurgence of interest in the mean trace length estimator of Pahl for window sampling of traces. The estimator has been dealt with by Mauldon and by Zhang and Einstein in recent publications. The estimator is a very useful one in that it is non-parametric. However, despite some discussion regarding the statistical distribution of the estimator, none of the recent works or the original work by Pahl provide a rigorous basis for the determination of a confidence interval for the estimator, or of a confidence region for the estimator and the corresponding estimator of trace spatial intensity in the sampling window. This paper shows, by consideration of a simplified version of the problem but without loss of generality, that the estimator is in fact the maximum likelihood estimator (MLE) and that it can be considered essentially unbiased. As the MLE, it possesses the least variance of all estimators, and confidence intervals or regions should therefore be available through application of classical ML theory. It is shown that valid confidence intervals can in fact be determined. The results of the work and the calculations of the confidence intervals are illustrated by example. (C) 2003 Elsevier Science Ltd. All rights reserved.
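Pahl's estimator is simple enough to state in a few lines. A sketch, assuming the commonly cited form in which the traces intersecting a sampling window of width w are counted by censoring class (both ends outside the window, i.e. transecting, versus both ends inside, i.e. contained); the function name and example counts are illustrative:

```python
def pahl_mean_trace_length(n_total, n_transecting, n_contained, window_width):
    """Pahl's non-parametric mean trace length estimator for a sampling
    window of width `window_width`:
        mu = w * (N + N_t - N_c) / (N - N_t + N_c)
    N_t: traces crossing the whole window (both ends censored),
    N_c: traces entirely inside the window (both ends observed)."""
    den = n_total - n_transecting + n_contained
    if den <= 0:
        raise ValueError("estimator undefined: every trace transects the window")
    return window_width * (n_total + n_transecting - n_contained) / den

# Sanity checks at the extremes of censoring:
all_contained = pahl_mean_trace_length(10, 0, 10, 5.0)      # short traces -> 0
one_end_censored = pahl_mean_trace_length(10, 0, 0, 5.0)    # -> window width
```

The limits behave as expected: when every trace is contained the estimate collapses to zero, and when every trace has exactly one censored end the estimate equals the window width. Being a ratio of counts, the estimator needs the likelihood-based treatment the paper develops before interval statements can be attached to it.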
Abstract:
A growing demand for efficient air quality management calls for the development of technologies capable of meeting the stringent requirements now being applied in areas of chemical, biological and medical activities. Currently, filtration is the most effective process available for removal of fine particles from carrier gases. Purification of gaseous pollutants is associated with adsorption, absorption and incineration. In this paper we discuss a new technique for highly efficient simultaneous purification of gaseous and particulate pollutants from carrier gases, and investigate the utilization of Nuclear Magnetic Resonance (NMR) imaging for the study of the dynamic processes associated with gas-liquid flow in porous media. Our technique involves the passage of contaminated carrier gases through a porous medium submerged in a liquid, leading to the formation of narrow and tortuous pathways through the medium. The wet walls of these pathways result in outstanding purification of gaseous, liquid and solid foreign additives. NMR imaging was successfully used to map the gas pathways inside the porous medium submerged in the liquid layer. (C) 2002 Elsevier Science B.V. All rights reserved.
Abstract:
A research program on atmospheric boundary layer processes and local wind regimes in complex terrain was conducted in the vicinity of Lake Tekapo in the southern Alps of New Zealand, during two 1-month field campaigns in 1997 and 1999. The effects of the interaction of thermal and dynamic forcing were of specific interest, with a particular focus on the interaction of thermal forcing of differing scales. The rationale and objectives of the field and modeling program are described, along with the methodology used to achieve them. Specific research aims include improved knowledge of the role of surface forcing associated with varying energy balances across heterogeneous terrain, thermal influences on boundary layer and local wind development, and dynamic influences of the terrain through channeling effects. Data were collected using a network of surface meteorological and energy balance stations, radiosonde and pilot balloon soundings, tethered balloon and kite-based systems, sodar, and an instrumented light aircraft. These data are being used to investigate the energetics of surface heat fluxes, the effects of localized heating/cooling and advective processes on atmospheric boundary layer development, and dynamic channeling. A complementary program of numerical modeling includes application of the Regional Atmospheric Modeling System (RAMS) to case studies characterizing typical boundary layer structures and airflow patterns observed around Lake Tekapo. Some initial results derived from the special observation periods are used to illustrate progress made to date. In spite of the difficulties involved in obtaining good data and undertaking modeling experiments in such complex terrain, initial results show that surface thermal heterogeneity has a significant influence on local atmospheric structure and wind fields in the vicinity of the lake. This influence occurs particularly in the morning. 
However, dynamic channeling effects and the larger-scale thermal effect of the mountain region frequently override these more local features later in the day.
Abstract:
This paper presents a comprehensive study of sludge floc characteristics and their impact on compressibility and settleability of activated sludge in full scale wastewater treatment processes. The sludge flocs were characterised by morphological (floc size distribution, fractal dimension, filament index), physical (flocculating ability, viscosity, hydrophobicity and surface charge) and chemical (polymeric constituents and metal content) parameters. Compressibility and settleability were defined in terms of the sludge volume index (SVI) and zone settling velocity (ZSV). The floc morphological and physical properties had an important influence on the sludge compressibility and settleability. Sludges containing large flocs and high quantities of filaments, corresponding to lower values of fractal dimension (D-f), demonstrated poor compressibility and settleability. Sludge flocs with high flocculating ability had lower SVI and higher ZSV, whereas high values of hydrophobicity, negative surface charge and viscosity of the sludge flocs correlated to high SVI and low ZSV. The quantities of the polymeric compounds (protein, humic substances and carbohydrate) in the sludge and the extracted extracellular polymeric substances (EPS) had significant positive correlations with SVI. The ZSV was quantitatively independent of the polymeric constituents. High concentrations of the extracted EPS were related to poor compressibility and settleability. The cationic ions Ca, Mg, Al and Fe in the sludge significantly improved the sludge compressibility and settleability. (C) 2003 Elsevier Science B.V. All rights reserved.
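Screening floc parameters against SVI and ZSV in the way reported above is typically done with simple correlation coefficients. A minimal, self-contained Pearson correlation helper (illustrative only; no data from the study are reproduced, and the sample lists in the checks below are synthetic):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)
```

Applied to paired measurements such as extracted EPS concentration versus SVI, a value near +1 would correspond to the significant positive correlations the abstract reports, and a value near zero to the ZSV result.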
Abstract:
The influence of near-bed sorting processes on heavy mineral content in suspension is discussed. Sediment concentrations above a rippled bed of mixed quartz and heavy mineral sand were measured under regular nonbreaking waves in the laboratory. Under the traditional gradient diffusion picture, settling velocity would be expected to strongly affect the sediment distribution. This was not observed during the present trials. In fact, the vertical gradients of time-averaged suspension concentrations were found to be similar for the light and heavy minerals, despite their different settling velocities. This behavior implies a convective rather than diffusive distribution mechanism. Between the nonmoving bed and the lowest suspension sampling point, light and heavy mineral concentration differs by two orders of magnitude. This discrimination against the heavy minerals in the pickup process is due largely to selective entrainment at the ripple face. Bed-form dynamics and the nature of quartz suspension profiles are found to be little affected by the trialed proportion of overall heavy minerals in the bed (3.8-22.1%).
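The gradient-diffusion expectation the abstract tests can be made concrete: for a constant sediment diffusivity, the steady balance between settling and upward diffusion gives an exponential concentration profile whose decay rate scales with settling velocity. A sketch under that constant-diffusivity assumption; the settling velocities and diffusivity below are illustrative values, not measurements from the experiment:

```python
import math

def diffusive_profile(c_ref, settling_velocity, diffusivity, z):
    """Steady gradient-diffusion suspension profile above a reference level:
    w_s*C + eps*dC/dz = 0  =>  C(z) = c_ref * exp(-w_s * z / eps)."""
    return c_ref * math.exp(-settling_velocity * z / diffusivity)

# Illustrative values: quartz settling ~0.02 m/s, a heavy mineral ~0.06 m/s,
# eddy diffusivity 0.002 m^2/s, evaluated 0.1 m above the reference level.
c_quartz = diffusive_profile(1.0, 0.02, 0.002, 0.1)
c_heavy = diffusive_profile(1.0, 0.06, 0.002, 0.1)
```

Under this model the heavier grains should show a markedly steeper vertical gradient; the observation of similar gradients for light and heavy minerals is what rules out the diffusive mechanism in favour of convective entrainment.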
Abstract:
A new wavelet-based adaptive framework for solving population balance equations (PBEs) is proposed in this work. The technique is general, powerful and efficient without the need for prior assumptions about the characteristics of the processes. Because there are steeply varying number densities across a size range, a new strategy is developed to select the optimal order of resolution and the collocation points based on an interpolating wavelet transform (IWT). The proposed technique has been tested for size-independent agglomeration, agglomeration with a linear summation kernel and agglomeration with a nonlinear kernel. In all cases, the predicted and analytical particle size distributions (PSDs) are in excellent agreement. Further work on the solution of the general population balance equations with nucleation, growth and agglomeration and the solution of steady-state population balance equations will be presented in this framework. (C) 2002 Elsevier Science B.V. All rights reserved.
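The size-independent agglomeration case mentioned above has a closed-form total-number solution, which makes it a convenient correctness check for any PBE solver. The sketch below uses a plain sectional discretisation with explicit Euler stepping, not the wavelet-collocation scheme of the paper; the kernel value, grid size and step count are assumptions chosen only so the truncation error stays small:

```python
def agglomeration_step(n, beta0, dt):
    """One explicit-Euler step of the discrete Smoluchowski equation with a
    constant (size-independent) kernel beta0; n[k] is the number density of
    agglomerates built from k+1 primary particles."""
    total = sum(n)
    out = []
    for k in range(len(n)):
        # birth: pairs whose sizes sum to k+1; death: collision with anything
        birth = 0.5 * sum(n[i] * n[k - 1 - i] for i in range(k))
        death = n[k] * total
        out.append(n[k] + dt * beta0 * (birth - death))
    return out

beta0, dt, steps = 1.0, 0.005, 200
n = [0.0] * 60
n[0] = 1.0                       # monodisperse initial condition, N0 = 1
for _ in range(steps):
    n = agglomeration_step(n, beta0, dt)

numeric_total = sum(n)
analytic_total = 1.0 / (1.0 + 0.5 * beta0 * dt * steps)   # N0/(1 + beta0*N0*t/2)
```

Matching the analytic decay of the total particle number is the kind of benchmark against which the wavelet-based adaptive scheme reports excellent agreement for size-independent, summation and nonlinear kernels.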
Abstract:
The effect of pore-network connectivity on binary liquid-phase adsorption equilibria using the ideal adsorbed solution theory (IAST) was studied. The liquid-phase binary adsorption experiments used ethyl propionate, ethyl butyrate, and ethyl isovalerate as the adsorbates and commercial activated carbons Filtrasorb-400 and Norit ROW 0.8 as adsorbents. As the single-component isotherm, a modified Dubinin-Radushkevich equation was used. A comparison with experimental data shows that incorporating the connectivity of the pore network and considering percolation processes associated with different molecular sizes of the adsorptives in the mixture, as well as their different corresponding accessibility, can improve the prediction of binary adsorption equilibria using the IAST. Selectivity of adsorption for the larger molecule in binary systems increases with an increase in the pore-network coordination number, as well as with an increase in the mean pore width and in the spread of the pore-size distribution.
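The core IAST calculation reduces, for Langmuir single-component isotherms with a shared saturation capacity, to finding the spreading pressure at which the hypothetical pure-component concentrations make the adsorbed-phase mole fractions sum to one. A sketch of that special case using pure-Python bisection; the modified Dubinin-Radushkevich isotherm actually used in the paper would replace the Langmuir forms, and all parameter values below are illustrative:

```python
import math

def iast_binary_langmuir(c1, c2, qm, k1, k2):
    """IAST adsorbed amounts (q1, q2) for a binary liquid mixture whose
    single-component isotherms are Langmuir with a shared capacity qm:
        q_i(c) = qm*k_i*c/(1 + k_i*c),  psi_i(c0) = qm*ln(1 + k_i*c0).
    Finds the reduced spreading pressure psi where x1 + x2 = 1 by bisection."""
    def mole_fraction_excess(psi):
        e = math.exp(psi / qm) - 1.0        # equals k_i * c_i0 at this psi
        return c1 * k1 / e + c2 * k2 / e - 1.0
    lo, hi = 1e-12, 1.0
    while mole_fraction_excess(hi) > 0.0:   # expand until the root is bracketed
        hi *= 2.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mole_fraction_excess(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    e = math.exp(0.5 * (lo + hi) / qm) - 1.0
    x1 = c1 * k1 / e                        # adsorbed-phase mole fraction of 1
    q_total = qm * e / (1.0 + e)            # q_i(c_i0) is identical for both here
    return x1 * q_total, (1.0 - x1) * q_total
```

For this equal-capacity special case IAST reproduces the extended Langmuir result q_i = qm*k_i*c_i/(1 + k1*c1 + k2*c2), a useful sanity check; the pore-network connectivity and percolation corrections discussed in the abstract would enter through the single-component isotherms before the IAST step.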
Abstract:
The Baía de Vitória is a 20 km long, morphologically narrow estuary with a micro-tidal regime and, like other modern estuaries, was formed during the last post-glacial transgression. The bottom morphology of the estuary is characterised by a main natural channel bounded by tidal flats with well-developed mangroves. Original radiocarbon dates were obtained for the area. Five radiocarbon ages ranging from 1,010 to 7,240 years BP were obtained from two sediment cores, representing a 5 m thick stratigraphic sequence. The results indicate that until approximately 4,000 cal. years BP the environmental conditions of the Baía de Vitória were still those of an open bay, with a free and open connection to marine waters. During the last 4,000 years the bay has experienced an important regression phase, becoming more restricted in terms of seawater circulation and probably experiencing increased tidal energy. Three main stratigraphic surfaces were recognised, bounding transgressive, transgressive/highstand and regressive facies. The present channel morphology represents a tidal diastem, showing truncated and eroded regressive facies. Foraminiferal biofacies, passing from a marine environment to brackish and tidal-flat mangrove environments, confirm the seismic-stratigraphic interpretation. The absence of mangrove biofacies in one of the two cores is also an indication of present-day tidal ravinement.
Abstract:
The branch of engineering responsible for structural design is constantly searching for the solution that best satisfies several simultaneous parameters, such as aesthetics, cost, quality and weight, among others. In practice, one cannot claim that the best design was actually built, because designs are produced mainly on the basis of the designer's experience, without exhausting all possible alternatives. It is in this sense that optimisation processes become necessary in structural design. From a given objective, such as cost, it is possible to obtain the design that best meets that parameter. Some studies exist in this area, but further research is still needed. One area that has been advancing in structural optimisation is the design of columns in accordance with ABNT NBR 6118:2014 covering a wider range of possible geometries. The most suitable optimisation method for this type of problem, among the many available today, must also be studied. Thus, the present work provides the conceptual background on column design and optimisation methods in the literature review, indicating the references and methods used in the optimised column design software, programmed with the aid of the MathLab software and its packages, using deterministic optimisation methods. This research was carried out to obtain the degree of Master in Civil Engineering at the Universidade Federal do Espírito Santo.
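As a toy illustration of deterministic optimisation applied to column sizing (not the NBR 6118:2014 design checks of the dissertation, which involve slenderness, second-order effects and bending), a brute-force search over a discrete design space might look like the sketch below. The simplified pure-axial capacity formula, the dimension grid, the reinforcement-ratio range and the cost coefficients are all assumptions made for illustration:

```python
def cheapest_column(n_sd, fcd, fyd, cost_concrete, cost_steel):
    """Brute-force deterministic search for the cheapest rectangular column
    section (cost per unit length) that passes a simplified pure-axial check:
        N_Rd = 0.85*fcd*Ac + fyd*As >= N_Sd
    Forces in kN, stresses in kN/cm^2, dimensions in cm. Toy model only:
    no slenderness, no moments, no code detailing rules."""
    best = None
    for b in range(20, 105, 5):                    # width candidates, cm
        for h in range(20, 105, 5):                # depth candidates, cm
            ac = b * h                             # gross concrete area, cm^2
            for i in range(19):                    # steel ratio 0.4% .. 4.0%
                a_s = (0.004 + 0.002 * i) * ac
                if 0.85 * fcd * ac + fyd * a_s < n_sd:
                    continue                       # capacity check failed
                cost = cost_concrete * ac + cost_steel * a_s
                if best is None or cost < best[0]:
                    best = (cost, b, h, a_s)
    return best

# Illustrative inputs: N_Sd = 2000 kN, C25-like fcd, CA-50-like fyd,
# steel assumed 80x the concrete cost per cm^2 of section.
best = cheapest_column(2000.0, 1.8, 43.5, 1.0, 80.0)
```

Enumeration like this is the crudest deterministic method; the gradient-based and other deterministic algorithms discussed in the dissertation explore the same kind of constrained design space far more efficiently.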