Abstract:
Predictions of flow patterns in a 600-mm scale model SAG mill made using four classes of discrete element method (DEM) models are compared to experimental photographs. The accuracy of the various models is assessed using quantitative data on shoulder, toe and vortex center positions taken from ensembles of both experimental and simulation results. These detailed comparisons reveal the strengths and weaknesses of the various models for simulating mills and allow the effect of different modelling assumptions to be quantitatively evaluated. In particular, very close agreement is demonstrated between the full 3D model (including the end wall effects) and the experiments. It is also demonstrated that the traditional two-dimensional circular particle DEM model under-predicts the shoulder, toe and vortex center positions by around 10 degrees, as well as the power draw. The effect of particle shape and the dimensionality of the model are also assessed, with particle shape predominantly affecting the shoulder position while the dimensionality of the model affects mainly the toe position. Crown Copyright (C) 2003 Published by Elsevier Science B.V. All rights reserved.
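The core mechanism in such DEM mill simulations is a per-contact force law integrated in time. A minimal sketch follows, assuming a linear spring-dashpot normal contact model (the standard DEM building block; all parameter values here are illustrative, not taken from the paper): a single circular particle dropped onto a flat wall should settle where the contact spring balances gravity.

```python
# Minimal 2D DEM sketch: one circular particle dropped onto a wall at y = 0,
# using a linear spring-dashpot normal contact force and semi-implicit Euler
# time stepping. All parameter values are hypothetical.

def simulate_drop(radius=0.01, mass=0.001, k=1e4, c=1.0, g=9.81,
                  y0=0.05, dt=1e-5, steps=200_000):
    """Return final (height, velocity) of the particle centre."""
    y, v = y0, 0.0
    for _ in range(steps):
        overlap = radius - y            # positive when touching the wall
        f = -mass * g                   # gravity
        if overlap > 0:
            f += k * overlap - c * v    # spring + dashpot contact force
        v += (f / mass) * dt            # update velocity first (symplectic)
        y += v * dt
    return y, v

y_final, v_final = simulate_drop()
# At rest the spring balances gravity: overlap = m*g/k, so y = radius - m*g/k.
```

The same contact law, applied pairwise between thousands of particles plus the rotating mill shell, is what produces the charge shoulder and toe positions the abstract compares against photographs.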
Abstract:
Blast fragmentation can have a significant impact on the profitability of a mine. An optimum run of mine (ROM) size distribution is required to maximise the performance of downstream processes. If this fragmentation size distribution can be modelled and controlled, the operation will have made a significant advancement towards improving its performance. Blast fragmentation modelling is an important step in Mine to Mill™ optimisation. It allows the estimation of blast fragmentation distributions for a number of different rock mass, blast geometry, and explosive parameters. These distributions can then be modelled in downstream mining and milling processes to determine the optimum blast design. When a blast hole is detonated, rock breakage occurs in two different stress regions - compressive and tensile. In the first region, compressive stress waves form a 'crushed zone' directly adjacent to the blast hole. The second region, termed the 'cracked zone', occurs outside the crushed zone. The widely used Kuz-Ram model does not recognise these two blast regions. In the Kuz-Ram model the mean fragment size from the blast is approximated and is then used to estimate the remaining size distribution. Experience has shown that this model predicts the coarse end reasonably accurately, but it can significantly underestimate the amount of fines generated. As part of the Australian Mineral Industries Research Association (AMIRA) P483A Mine to Mill™ project, the Two-Component Model (TCM) and Crush Zone Model (CZM), developed by the Julius Kruttschnitt Mineral Research Centre (JKMRC), were compared and evaluated against measured ROM fragmentation distributions. An important criterion for this comparison was the variation of model results from measured ROM in the fine to intermediate section (1-100 mm) of the fragmentation curve. This region of the distribution is important for Mine to Mill™ optimisation.
The comparison of modelled and Split ROM fragmentation distributions has been conducted in harder ores (UCS greater than 80 MPa). Further work involves modelling softer ores. The comparisons will be continued with future site surveys to increase confidence in the comparison of the CZM and TCM to Split results. Stochastic fragmentation modelling will then be conducted to take into account variation of input parameters. A window of possible fragmentation distributions can be compared to those obtained by Split. Following this work, an improved fragmentation model will be developed in response to these findings.
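The Kuz-Ram approach described above anchors a Rosin-Rammler curve on an estimated mean fragment size. A minimal sketch of that second step follows, assuming the common convention that the mean size is the 50%-passing size; in the real model the mean size and the uniformity index n are themselves derived from rock mass, blast geometry and explosive parameters, which is omitted here.

```python
import math

def rosin_rammler_passing(x, x_mean, n):
    """Cumulative fraction of fragments passing size x (mm), given the
    estimated mean fragment size x_mean and uniformity index n.
    The characteristic size xc is where 63.2% of material passes."""
    xc = x_mean / (math.log(2.0) ** (1.0 / n))
    return 1.0 - math.exp(-((x / xc) ** n))

# Example (hypothetical values): mean size 200 mm, uniformity index 1.2.
# Half the material passes the mean size by construction.
p50 = rosin_rammler_passing(200.0, 200.0, 1.2)
```

The fines underestimation noted in the abstract shows up precisely here: a single two-parameter curve fitted to the coarse end has no extra degree of freedom for the crushed-zone fines, which is what the CZM and TCM add.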
Abstract:
A new algebraic Bethe ansatz scheme is proposed to diagonalize classes of integrable models relevant to the description of Bose-Einstein condensation in dilute alkali gases. This is achieved by introducing the notion of Z-graded representations of the Yang-Baxter algebra. (C) 2003 American Institute of Physics.
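For context, the Yang-Baxter algebra mentioned above is conventionally defined by the RTT relation (standard notation, not taken from the paper itself):

```latex
R_{12}(u-v)\, T_1(u)\, T_2(v) \;=\; T_2(v)\, T_1(u)\, R_{12}(u-v)
```

where $R_{12}$ acts on two auxiliary spaces and $T(u)$ is the monodromy matrix. Taking traces shows that the transfer matrices $t(u)=\operatorname{tr} T(u)$ commute for different spectral parameters, which is what makes such models integrable and diagonalizable by a Bethe ansatz.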
Abstract:
Despite the strong influence of plant architecture on crop yield, most crop models either ignore it or deal with it in a very rudimentary way. This paper demonstrates the feasibility of linking a model that simulates the morphogenesis and resultant architecture of individual cotton plants with a crop model that simulates the effects of environmental factors on critical physiological processes and resulting yield in cotton. First the varietal parameters of the models were made concordant. Then routines were developed to allocate the flower buds produced each day by the crop model amongst the potential positions generated by the architectural model. This allocation is done according to a set of heuristic rules. The final weight of individual bolls and the shedding of buds and fruit caused by water, N, and C stresses are processed in a similar manner. Observations of the positions of harvestable fruits, both within and between plants, made under a variety of agronomic conditions that had resulted in a broad range of plant architectures were compared to those predicted by the model with the same environmental inputs. As illustrated by comparisons of plant maps, the linked models performed reasonably well, though performance of the fruiting point allocation and shedding algorithms could probably be improved by further analysis of the spatial relationships of retained fruit. (C) 2002 Elsevier Science Ltd. All rights reserved.
Abstract:
We present two integrable spin ladder models which possess a general free parameter besides the rung coupling J. The models are exactly solvable by means of the Bethe ansatz method and we present the Bethe ansatz equations. We analyze the elementary excitations of the models which reveal the existence of a gap for both models that depends on the free parameter. (C) 2003 American Institute of Physics.
Abstract:
For dynamic simulations to be credible, verification of the computer code must be an integral part of the modelling process. This two-part paper describes a novel approach to verification through program testing and debugging. In Part 1, a methodology is presented for detecting and isolating coding errors using back-to-back testing. Residuals are generated by comparing the output of two independent implementations, in response to identical inputs. The key feature of the methodology is that a specially modified observer is created using one of the implementations, so as to impose an error-dependent structure on these residuals. Each error can be associated with a fixed and known subspace, permitting errors to be isolated to specific equations in the code. It is shown that the geometric properties extend to multiple errors in either one of the two implementations. Copyright (C) 2003 John Wiley & Sons, Ltd.
Abstract:
In Part 1 of this paper a methodology for back-to-back testing of simulation software was described. Residuals with error-dependent geometric properties were generated. A set of potential coding errors was enumerated, along with a corresponding set of feature matrices, which describe the geometric properties imposed on the residuals by each of the errors. In this part of the paper, an algorithm is developed to isolate the coding errors present by analysing the residuals. A set of errors is isolated when the subspace spanned by their combined feature matrices corresponds to that of the residuals. Individual feature matrices are compared to the residuals and classified as 'definite', 'possible' or 'impossible'. The status of 'possible' errors is resolved using a dynamic subset testing algorithm. To demonstrate and validate the testing methodology presented in Part 1 and the isolation algorithm presented in Part 2, a case study is presented using a model for biological wastewater treatment. Both single and simultaneous errors that are deliberately introduced into the simulation code are correctly detected and isolated. Copyright (C) 2003 John Wiley & Sons, Ltd.
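The basic idea of back-to-back testing can be sketched in a few lines, under simplifying assumptions: a toy model (exponential decay) stands in for the simulation, and, for brevity, one routine plays both "independent implementations"; the paper's observer construction and feature-matrix analysis are not reproduced here. A deliberately injected coding error produces a nonzero residual.

```python
# Hedged sketch of back-to-back testing on a toy model dx/dt = -a*x.
# In practice the second implementation is written independently; here the
# same routine stands in for both, with an optional injected sign error.

def simulate(a, x0, dt, steps, buggy=False):
    """Explicit-Euler integration of dx/dt = -a*x (dx/dt = +a*x if buggy)."""
    x, out = x0, []
    for _ in range(steps):
        rate = a * x if buggy else -a * x   # deliberately injected sign error
        x = x + dt * rate
        out.append(x)
    return out

reference = simulate(0.5, 1.0, 0.01, 100)
duplicate = simulate(0.5, 1.0, 0.01, 100)           # second implementation
faulty = simulate(0.5, 1.0, 0.01, 100, buggy=True)  # one coding error present

residual_clean = max(abs(u - v) for u, v in zip(reference, duplicate))
residual_faulty = max(abs(u - v) for u, v in zip(reference, faulty))
# residual_clean is identically zero; residual_faulty grows with the error,
# and its structure over time is what the isolation algorithm analyses.
```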
Abstract:
This paper aims to describe the processes of teaching illustration and animation together in the context of a master's degree program. In Portugal, until very recently, higher education courses in illustration and animation were very scarce and were provided only by a few private universities, which offered separate programs in either illustration or animation. The MA in Illustration and Animation (MIA), based at the Instituto Politécnico do Cávado e Ave in Portugal, dared to join these two creative areas in a common learning model; it is already starting its third edition with encouraging results and will be supported by the first international conference on illustration and animation (CONFIA). This master's program integrates several approaches and techniques (in illustration and animation) and encourages both creative writing and critical writing. This paper describes the iterative process of construction and implementation of the program, as well as the results obtained in its initial years, in terms of pedagogic and learning conclusions. In summary, we aim to compare pedagogic models of teaching animation or illustration in higher education with a more contemporary and multidisciplinary approach that integrates the two at an earlier stage and allows them to be developed separately in the second part of the program. This is based on the differences and specificities of animation (from classic techniques to 3D) and illustration (drawing the illustration), and on the intersection of these two subjects within a program structure focused on the students' learning and the competencies acquired for use in professional or authorial projects.
Abstract:
This paper examines the performance of Portuguese equity funds investing in the domestic and in the European Union market, using several unconditional and conditional multi-factor models. In terms of overall performance, we find that National funds are neutral performers, while European Union funds under-perform the market significantly. These results do not seem to be a consequence of management fees. Overall, our findings are supportive of the robustness of conditional multi-factor models. In fact, Portuguese equity funds seem to be relatively more exposed to small caps and more value-oriented. Also, they present strong evidence of time-varying betas and, in the case of the European Union funds, of time-varying alphas too. Finally, in terms of market timing, our tests suggest that mutual fund managers in our sample do not exhibit any market timing abilities. Nevertheless, we find some evidence of time-varying conditional market timing abilities, but only at the individual fund level.
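The unconditional multi-factor evaluation referred to above boils down to regressing a fund's excess returns on a set of factor returns and reading off the intercept (alpha) and slopes (betas). A minimal sketch with synthetic data follows; the factor names, counts and parameter values are illustrative, and the paper's conditional specifications (time-varying alphas and betas) are not reproduced.

```python
import numpy as np

# Synthetic monthly data: three factors (e.g. market, size, value excess
# returns) and a fund with known loadings. All values are made up.
rng = np.random.default_rng(0)
T = 240
factors = rng.normal(0.0, 0.04, (T, 3))
true_betas = np.array([0.9, 0.3, -0.2])
excess = factors @ true_betas + rng.normal(0.0, 0.01, T)  # true alpha = 0

# OLS: excess = alpha + factors @ betas + eps
X = np.column_stack([np.ones(T), factors])
coef, *_ = np.linalg.lstsq(X, excess, rcond=None)
alpha, betas = coef[0], coef[1:]
# A "neutral performer" corresponds to alpha statistically near zero.
```

Conditional versions let alpha and the betas depend on lagged public information variables, which is what the time-varying evidence in the abstract refers to.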
Abstract:
Interest in the design and development of graphical user interfaces (GUIs) has grown in the last few years. However, the correctness of GUI code is essential to the correct execution of the overall software. Models can help in the evaluation of interactive applications by allowing designers to concentrate on their more important aspects. This paper describes our approach to reverse engineering abstract GUI models directly from Java/Swing code.
Abstract:
In recent years, it has become increasingly clear that neurodegenerative diseases involve protein aggregation, a process often used as a disease-progression readout and to develop therapeutic strategies. This work presents an image processing tool to automatically segment, classify and quantify these aggregates and the whole 3D body of the nematode Caenorhabditis elegans. A total of 150 data set images, containing different slices, were captured with a confocal microscope from animals of distinct genetic conditions. Because of the animals' transparency, most of the slice pixels appeared dark, hampering direct reconstruction of the body volume. Therefore, for each data set, all slices were stacked into a single 2D image in order to determine a volume approximation. The gradient of this image was input to an anisotropic diffusion algorithm that uses Tukey's biweight as the edge-stopping function. The median of this outcome's histogram was used to dynamically determine a thresholding level, which allows the determination of a smoothed exterior contour of the worm and, by thinning its skeleton, the medial axis of the worm body. Based on this exterior contour diameter and the medial animal axis, random 3D points were then calculated to produce a volume mesh approximation. The protein aggregations were subsequently segmented based on an iso-value and blended with the resulting volume mesh. The results obtained were consistent with qualitative observations in the literature, allowing non-biased, reliable and high-throughput quantification of protein aggregates. This may lead to a significant improvement in treatment planning and interventions for neurodegenerative diseases.
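The anisotropic diffusion step with Tukey's biweight edge-stopping function can be sketched directly; the version below is a minimal 2D Perona-Malik-style scheme under assumed parameter values (sigma, lambda, iteration count), not the paper's exact implementation.

```python
import numpy as np

def tukey_g(grad, sigma):
    """Tukey biweight edge-stopping function: diffusion is switched off
    entirely across gradients larger than sigma, preserving strong edges."""
    g = np.zeros_like(grad)
    inside = np.abs(grad) <= sigma
    g[inside] = (1.0 - (grad[inside] / sigma) ** 2) ** 2
    return g

def anisotropic_diffusion(img, sigma=20.0, lam=0.2, iterations=20):
    """Iteratively smooth img while halting diffusion across strong edges."""
    u = img.astype(float).copy()
    for _ in range(iterations):
        # one-sided differences toward the four neighbours
        n = np.roll(u, -1, 0) - u
        s = np.roll(u, 1, 0) - u
        e = np.roll(u, -1, 1) - u
        w = np.roll(u, 1, 1) - u
        u += lam * (tukey_g(n, sigma) * n + tukey_g(s, sigma) * s
                    + tukey_g(e, sigma) * e + tukey_g(w, sigma) * w)
    return u
```

Run on a noisy image of the worm outline, flat regions are smoothed (noise variance drops) while the body contour, whose gradient exceeds sigma, stays sharp, which is what makes the subsequent median-based thresholding stable.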
Abstract:
Pectus Carinatum (PC) is a chest deformity consisting of the anterior protrusion of the sternum and adjacent costal cartilages. Non-operative corrections, such as the orthotic compression brace, require prior information about the patient's chest surface to improve the overall brace fit. This paper focuses on the validation of the Kinect scanner for the modelling of an orthotic compression brace for the correction of Pectus Carinatum. To this end, a phantom chest wall surface was acquired using two scanner systems, Kinect and Polhemus FastSCAN, and compared through CT. The results show an RMS error of 3.25 mm between the CT data and the surface mesh from the Kinect sensor, and of 1.5 mm for the FastSCAN sensor.
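The RMS figures above come from comparing a scanned surface against the CT reference. A minimal sketch of such a metric follows, assuming the common point-cloud formulation (RMS of nearest-neighbour distances, computed brute-force); the paper's actual registration and meshing pipeline is not reproduced.

```python
import numpy as np

def rms_distance(points, reference):
    """RMS of nearest-neighbour distances from each scanned point (N x 3)
    to the reference point cloud (M x 3). Brute force: fine for small clouds."""
    d2 = ((points[:, None, :] - reference[None, :, :]) ** 2).sum(-1)
    nearest = np.sqrt(d2.min(axis=1))
    return np.sqrt((nearest ** 2).mean())

# Example (synthetic): a flat grid of reference points, and a "scan" offset
# by a constant 3.25 mm in z, mimicking the Kinect error reported above.
xs = np.arange(0.0, 50.0, 10.0)
reference = np.array([[x, y, 0.0] for x in xs for y in xs])
scanned = reference + np.array([0.0, 0.0, 3.25])
```

In practice the two clouds must first be rigidly registered (e.g. by ICP) before the residual RMS is meaningful; that step is omitted here.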
Abstract:
Color model representation allows characterizing, in a quantitative manner, any defined color spectrum of visible light, i.e. light with a wavelength between 400 nm and 700 nm. To accomplish that, each model, or color space, is associated with a function that maps the spectral power distribution of the visible electromagnetic radiation into a space defined by a set of discrete values that quantify the color components composing the model. Some color spaces are sensitive to changes in lighting conditions; others assure the preservation of certain chromatic features, remaining immune to these changes. Therefore, it becomes necessary to identify the strengths and weaknesses of each model in order to justify the adoption of particular color spaces in image processing and analysis techniques. This chapter addresses the topic of digital imaging and its main standards and formats. Next, we set out a mathematical model of the image acquisition sensor's response, which enables assessment of the various color spaces with the aim of determining their invariance to illumination changes.
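A simple concrete instance of the invariance discussed above is the normalized rgb chromaticity space: dividing each channel by the intensity sum cancels any global illumination scale factor, so (R, G, B) and (sR, sG, sB) map to the same chromaticity. The sketch below illustrates only this one transform; the chapter's full sensor-response model covers more color spaces.

```python
def normalized_rgb(r, g, b):
    """Chromaticity coordinates: invariant to a uniform scaling of the
    illumination, since (s*r + s*g + s*b) cancels the factor s."""
    total = r + g + b
    if total == 0:
        return (0.0, 0.0, 0.0)
    return (r / total, g / total, b / total)

# Doubling the illumination leaves the chromaticity unchanged:
# normalized_rgb(100, 50, 25) == normalized_rgb(200, 100, 50)
```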