932 results for "Conventional approach"
Abstract:
The conventional approach to setting a milling unit is essentially based on the desire to achieve a particular bagasse moisture content or fibre fill in each nip of the mill. This approach relies on selecting the speed at which the mill will operate for the selected fibre rate. There is rarely any check that the selected speed or the selected fibre fill is actually achieved, and the same set of assumptions is generally carried over for use again the following year. The conventional approach largely ignores the fact that the selection of mill settings actually determines the speed at which the mill will operate. Making an adjustment with the intent of changing the performance of the mill therefore often changes the speed of the mill as an unintended consequence. This paper presents an alternative approach to mill setting. The approach discussed makes use of mill feeding theory to define the relationship between fibre rate, mill speed and mill settings, and uses that theory to provide an alternative means of determining the settings in some nips of the mill. Mill feeding theory shows that, as the feed work opening reduces, roll speed increases. The theory also shows that there is an optimal underfeed opening and Donnelly chute exit opening that will minimise roll speed, and that the current South African guidelines appear to be well away from those optimal values.
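As a rough illustration of the coupling this abstract describes, the sketch below uses a simplified volumetric reading of mill feeding theory: at a fixed fibre rate and fibre fill, the required roll surface speed scales inversely with the feed work opening, so tightening the setting speeds the mill up. The volumetric form, function name and all numbers are illustrative assumptions, not the paper's actual theory.

```python
# Simplified sketch: roll speed implied by fibre rate, fibre fill and the
# feed work opening. Assumes throughput = fill density * opening * roll
# length * roll surface speed, a deliberate simplification of mill feeding
# theory; all values are hypothetical.
import math

def required_roll_speed_rpm(fibre_rate_kg_s: float,
                            fibre_fill_kg_m3: float,
                            work_opening_m: float,
                            roll_length_m: float,
                            roll_diameter_m: float) -> float:
    # surface speed = fibre rate / (fill * opening * length)
    surface_speed = fibre_rate_kg_s / (
        fibre_fill_kg_m3 * work_opening_m * roll_length_m)
    return 60.0 * surface_speed / (math.pi * roll_diameter_m)

# Halving the feed work opening doubles the required roll speed.
for opening in (0.30, 0.15):
    print(opening, required_roll_speed_rpm(50.0, 90.0, opening, 2.0, 1.2))
```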
Abstract:
An urgent need exists for indicators of soil health and patch functionality in extensive rangelands that can be measured efficiently and at low cost. Soil mites are candidate indicators, but their identification and handling are so specialised and time-consuming that their inclusion in routine monitoring is unlikely. The aim of this study was to measure the relationship between patch type and mite assemblages using a conventional approach. An additional aim was to determine whether a molecular approach traditionally used for soil microbes could be adapted for soil mites to overcome some of the bottlenecks associated with soil fauna diversity assessment. Soil mite species abundance and diversity were measured using conventional ecological methods in soil from patches with perennial grass and litter cover (PGL), and compared to soil from bare patches with annual grasses and/or litter cover (BAL). Soil mite assemblages were also assessed using a molecular method called terminal-restriction fragment length polymorphism (T-RFLP) analysis. The conventional data showed a relationship between patch type and mite assemblage. The Prostigmata and Oribatida were well represented in the PGL sites, particularly the Aphelacaridae (Oribatida). For T-RFLP analysis, the mite community was represented by a series of DNA fragment lengths that reflected mite sequence diversity. The T-RFLP data showed a distinct difference in the mite assemblage between the patch types. Where possible, T-RFLP peaks were matched to mite families using a reference 18S rDNA database; the Aphelacaridae prevalent in the conventional samples at PGL sites were identified, as were prostigmatids and oribatids. We identified limits to the T-RFLP approach, including an inability to distinguish some species whose DNA sequences were similar. Despite these limitations, the data still showed a clear difference between sites, and the molecular taxonomic inferences compared well with the conventional ecological data. The results from this study indicate that the T-RFLP approach was effective in measuring mite assemblages in this system. The power of the technique lies in the fact that species diversity and abundance data can be obtained quickly: the time taken to process hundreds of samples, from soil DNA extraction to data output on the gene analyser, can be as little as four days.
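To make the peak-matching step concrete, here is a minimal sketch of assigning observed T-RFLP fragment lengths to mite families by nearest match in a reference 18S rDNA table, within a tolerance. The reference lengths, family names and tolerance are hypothetical placeholders for the study's own database; the "unassigned" case reflects the limitation noted above, that taxa with similar fragment lengths cannot be separated.

```python
# Hypothetical reference table: predicted terminal fragment length (bp)
# mapped to a mite family. Values are illustrative only.
REFERENCE_BP = {
    172: "Aphelacaridae (Oribatida)",
    215: "hypothetical prostigmatid family",
    298: "hypothetical oribatid family",
}

def match_peaks(peaks_bp, tolerance_bp=2):
    """Assign each observed peak to the nearest reference fragment,
    or 'unassigned' if no reference lies within the tolerance."""
    assignments = {}
    for peak in peaks_bp:
        best = min(REFERENCE_BP, key=lambda ref: abs(ref - peak))
        assignments[peak] = (REFERENCE_BP[best]
                             if abs(best - peak) <= tolerance_bp
                             else "unassigned")
    return assignments

print(match_peaks([171, 216, 250]))
# {171: 'Aphelacaridae (Oribatida)',
#  216: 'hypothetical prostigmatid family', 250: 'unassigned'}
```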
Conventional and Reciprocal Approaches to the Forward and Inverse Problems of Electroencephalography
Abstract:
The inverse problem in electroencephalography (EEG) is the localization of current sources in the brain using the scalp surface potentials generated by those sources. An inverse solution typically involves repeated calculation of scalp surface potentials, that is, the EEG forward problem. Solving the forward problem requires models both for the underlying source configuration (the source model) and for the surrounding tissues (the head model). This thesis treats two quite distinct approaches to solving the EEG forward and inverse problems using the boundary element method (BEM): the conventional approach and the reciprocal approach. The conventional approach to the forward problem computes the surface potentials starting from dipolar current sources. The reciprocal approach, by contrast, first determines the electric field at the dipole source sites when the surface electrodes are used to inject and withdraw a unit current; the scalar product of this electric field with the dipole sources then yields the surface potentials. The reciprocal approach promises a number of advantages over the conventional approach, including potentially more accurate surface potentials and reduced computational requirements for inverse solutions. In this thesis, the BEM equations for the conventional and reciprocal approaches are developed using a common formulation, the weighted residual method. The numerical implementation of both approaches to the forward problem is described for a single-dipole source model, using a three-concentric-spheres head model for which analytical solutions are available. Surface potentials are computed at the centroids or at the vertices of the BEM discretization elements. The forward-problem performance of the conventional and reciprocal approaches is evaluated for radial and tangential dipoles of varying eccentricity and for two widely differing skull conductivity values. It is then determined whether the potential advantages of the reciprocal approach suggested by the forward-problem simulations can be exploited to yield more accurate inverse solutions. Single-dipole inverse solutions are obtained using simplex minimization for both the conventional and reciprocal approaches, each in centroid and vertex versions. Once again, numerical simulations are performed on a three-concentric-spheres model for radial and tangential dipoles of varying eccentricity. The inverse-solution accuracy of the two approaches is compared for the two different skull conductivities, and their relative sensitivities to skull conductivity errors and to noise are evaluated. While the conventional approach at the vertices gives the most accurate forward solutions for the supposedly more realistic skull conductivity, both the conventional and reciprocal approaches produce large errors in scalp potentials for highly eccentric dipoles. The reciprocal approaches show the least variation in forward-solution accuracy across skull conductivity values.
In terms of single-dipole inverse solutions, the conventional and reciprocal approaches are of similar accuracy. Localization errors are small, even for highly eccentric dipoles that produce large errors in scalp potentials, owing to the nonlinear nature of single-dipole inverse solutions. Both approaches also proved equally robust to skull conductivity errors in the presence of noise. Finally, a more realistic head model is obtained from magnetic resonance images (MRI), from which the scalp, skull and brain/cerebrospinal fluid (CSF) surfaces are extracted. Both approaches are validated on this type of model using real somatosensory evoked potentials recorded following median nerve stimulation in healthy subjects. The accuracy of the inverse solutions for the conventional and reciprocal approaches and their variants, judged against known anatomical sites on MRI, is again evaluated for the two different skull conductivities, and their advantages and disadvantages, including computational requirements, are assessed. Once again, the conventional and reciprocal approaches produce small dipole position errors: position errors of single-dipole inverse solutions are inherently robust to inaccuracies in the forward solutions, though they depend on the superimposed activity of other neural sources. Contrary to expectations, the reciprocal approaches do not improve dipole position accuracy relative to the conventional approaches. However, reduced computational requirements in time and storage are the principal advantages of the reciprocal approaches. This type of localization is potentially useful in planning neurosurgical interventions, for example in patients with refractory focal epilepsy who have often already undergone EEG and MRI.
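For intuition about the single-dipole simplex inversion described above, here is a minimal sketch. It deliberately replaces the thesis's BEM head models with a toy forward model (a current dipole in an unbounded homogeneous conductor), so it illustrates only the optimization structure; positions, moments and the conductivity are hypothetical.

```python
# Toy single-dipole localization by Nelder-Mead (simplex) minimization.
# Forward model: V = p.(r - r0) / (4*pi*sigma*|r - r0|^3) in an infinite
# homogeneous medium (NOT the BEM three-sphere or MRI-based models).
import numpy as np
from scipy.optimize import minimize

SIGMA = 0.33  # assumed conductivity, S/m

def forward(r_dip, p, electrodes):
    d = electrodes - r_dip
    return d @ p / (4 * np.pi * SIGMA * np.linalg.norm(d, axis=1) ** 3)

rng = np.random.default_rng(0)
electrodes = rng.normal(size=(32, 3))
electrodes /= np.linalg.norm(electrodes, axis=1, keepdims=True)  # unit "scalp"

r_true = np.array([0.2, 0.1, 0.4])    # hypothetical source position
p_true = np.array([0.0, 0.0, 1e-8])   # hypothetical moment, A*m
v_meas = forward(r_true, p_true, electrodes)

def cost(r):
    # The moment enters linearly, so fit it by least squares at each trial
    # position and minimize only over the three position coordinates.
    lead = np.stack([forward(r, e, electrodes) for e in np.eye(3)], axis=1)
    p_fit, *_ = np.linalg.lstsq(lead, v_meas, rcond=None)
    return np.sum((lead @ p_fit - v_meas) ** 2)

res = minimize(cost, x0=np.zeros(3), method="Nelder-Mead")
print("recovered position:", res.x)  # approaches r_true
```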
Abstract:
Selective neck dissection (SND) in clinical N0 (cN0) cases of oral squamous cell carcinoma (SCC) has been performed by surgeons using a retroauricular or modified facelift approach with robotic or endoscopic assistance. However, these procedures achieve cosmetic satisfaction at the cost of potentially greater invasiveness. In this prospective study, we introduced and evaluated the feasibility, surgical invasiveness and cosmetic outcome of endoscopically-assisted SND via a small submandibular approach. Forty-four patients with cT1-2N0 oral SCC (OSCC) were randomly divided into two groups: endoscopically-assisted SND and conventional SND. Perioperative and postoperative outcomes were evaluated, including the length of the incision, operating time for neck dissection, estimated blood loss during the operation, amount and duration of drainage, total hospitalization period, total number of lymph nodes retrieved, satisfaction scores based on the cosmetic results, perioperative local complications, shoulder syndrome, and follow-up information. The mean operation time in the endoscopically-assisted group (126.04 ± 12.67 min) was longer than that in the conventional group (75.67 ± 16.67 min). However, the mean length of the incision was 4.33 ± 0.76 cm in the endoscopically-assisted SND group, and the amount and duration of drainage, total hospital stay, postoperative shoulder pain score, and cosmetic outcomes were superior in the endoscopically-assisted SND group. Additionally, the numbers of retrieved lymph nodes and complications were comparable. Endoscopically-assisted SND via a small submandibular approach had a longer operation time than the conventional approach. However, it was feasible and reliable while providing minimal invasiveness and a satisfactory appearance.
Abstract:
The novel approach to carbon capture and storage (CCS) described in this dissertation is a significant departure from the conventional approach to CCS. The novel approach uses a sodium carbonate solution to first capture CO2 from post-combustion flue gas streams. The captured CO2 is then reacted with an alkaline industrial waste material, at ambient conditions, to regenerate the carbonate solution and permanently store the CO2 in the form of a value-added carbonate mineral. Conventional CCS makes use of a hazardous amine solution for CO2 capture, a costly thermal regeneration stage, and the underground storage of supercritical CO2. The objective of the present dissertation was to examine each individual stage (capture and storage) of the proposed approach to CCS. Study of the capture stage found that a 2% w/w sodium carbonate solution was optimal for CO2 absorption in the present system, yielding the best trade-off between the CO2 absorption rate and the CO2 absorption capacity of the solutions tested. Examination of CO2 absorption in the presence of flue gas impurities (NOx and SOx) found that carbonate solutions possess a significant advantage over amine solutions: they can be used for multi-pollutant capture. All of the NOx and SOx fed to the carbonate solution was captured. Optimization studies found that it was possible to increase the absorption rate of CO2 into the carbonate solution by adding a surfactant to chemically alter the gas bubble size; the absorption rate of CO2 was increased by as much as 14%. Three coal combustion fly ash materials were chosen as the alkaline industrial waste materials with which to study the storage of CO2 and the regeneration of the absorbent. X-ray diffraction analysis of reacted fly ash samples confirmed that the captured CO2 reacts with the fly ash materials to form a carbonate mineral, specifically calcite. Studies found that after a five-day reaction time, 75% utilization of the waste material for CO2 storage could be achieved while regenerating the absorbent. The regenerated absorbent exhibited a nearly identical CO2 absorption capacity and absorption rate to a fresh Na2CO3 solution.
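The abstract does not spell out the chemistry, but a plausible schematic, assuming the bicarbonate capture route and a calcium-bearing (CaO-rich) fly ash, is:

```latex
% Assumed capture/regeneration scheme; the dissertation's exact mechanism
% may differ.
\begin{align}
  \text{Capture:} \quad
    &\mathrm{Na_2CO_3 + CO_2 + H_2O \longrightarrow 2\,NaHCO_3} \\
  \text{Storage/regeneration:} \quad
    &\mathrm{2\,NaHCO_3 + CaO \longrightarrow CaCO_3 + Na_2CO_3 + H_2O}
\end{align}
```

On this reading, the calcite identified by X-ray diffraction is the CaCO3 product, and the Na2CO3 returned to solution is what restores the absorption capacity of the regenerated absorbent.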
Abstract:
For decades, marketing and marketing research have been based on a concept of consumer behaviour that is deeply embedded in a linear notion of marketing activities. With increasing regularity, key organising frameworks for marketing and marketing activities are being challenged by academics and practitioners alike. In turn, this has led to the search for new approaches and tools that will help marketers understand the interaction among attitudes, emotions and product/brand choice. More recently, the approach developed by Harvard professor Gerald Zaltman, referred to as the Zaltman Metaphor Elicitation Technique (ZMET), has gained considerable interest. This paper seeks to demonstrate the effectiveness of this alternative qualitative method, using a non-conventional approach, thus providing a useful contribution to the qualitative research area.
Abstract:
Given changing world conditions, there is an urgent need to move beyond the dualist paradigm that has traditionally informed design research, education and practice. Rather than attempting to reduce uncertainty, novelty and complexity, as the conventional approach does, this article presents an argument that seeks to exploit these qualities through a reconceptualisation of design in creative as well as systematic, rigorous and ethical terms. Arts-based research, which 'brings together the systematic and rigorous qualities of inquiry with the creative and imaginative qualities of the arts', is presented as central to this reconceptualisation. This is exemplified in the application of art-informed inquiry in a research unit for graduating tertiary-level interior design students. The application is described in this article and is shown to rely substantially on the image and its capacity to open up and reveal new possibilities and meaning.
Abstract:
The use of stable isotope ratios δ18O and δ2H is well established in the assessment of groundwater systems and their hydrology. The conventional approach is based on x/y plots and their relation to various meteoric water lines (MWLs), and on plots of either ratio against parameters such as Cl or EC. An extension of this interpretation is the use of 2D maps and contour plots, and 2D hydrogeological vertical sections. An enhancement of presentation and interpretation is the production of "isoscapes", usually as 2.5D surface projections. We have applied groundwater isotopic data to a 3D visualisation, using the alluvial aquifer system of the Lockyer Valley. The 3D framework is produced in GVS (Groundwater Visualisation System). This format enables enhanced presentation by displaying the spatial relationships and allowing interpolation between "data points", i.e. borehole screened zones where groundwater enters. The relative variations in the δ18O and δ2H values are similar in these ambient-temperature systems; however, δ2H better reflects hydrological processes, whereas δ18O also reflects aquifer/groundwater exchange reactions. The 3D model has the advantage that it displays borehole relations to spatial features, enabling isotopic ratios and their values to be associated with, for example, bedrock groundwater mixing, interaction between aquifers, relation to stream recharge, and near-surface and return-irrigation water evaporation. Some specific features are also shown, such as zones of leakage of deeper groundwater (in this case with a GAB signature). Variations in the source of recharging water at a catchment scale can be displayed. Interpolation between bores is not always possible, depending on their number and spacing and on the elongate configuration of the alluvium. In these cases, the visualisation uses discs around the screens that can be manually expanded to test extent or intersections. Separate displays are used for each of δ18O and δ2H, with colour coding for isotope values.
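For reference, the conventional x/y interpretation mentioned above typically means plotting δ2H against δ18O and comparing with a meteoric water line; a minimal sketch using the global meteoric water line (GMWL, δ2H = 8 δ18O + 10; Craig, 1961) follows. The sample values are hypothetical, not Lockyer Valley data.

```python
# Conventional dual-isotope plot against the GMWL; data are hypothetical.
import numpy as np
import matplotlib.pyplot as plt

d18O = np.array([-5.2, -4.8, -4.1, -3.6, -3.0])  # per mil VSMOW
d2H = np.array([-30.5, -28.0, -25.2, -21.9, -17.8])

x = np.linspace(d18O.min() - 1, d18O.max() + 1, 50)
plt.plot(x, 8 * x + 10, "k--",
         label=r"GMWL: $\delta^{2}H = 8\,\delta^{18}O + 10$")
plt.scatter(d18O, d2H, label="groundwater samples")
plt.xlabel(r"$\delta^{18}O$ (‰ VSMOW)")
plt.ylabel(r"$\delta^{2}H$ (‰ VSMOW)")
plt.legend()
plt.show()
```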
Abstract:
This thesis presents the outcomes of a comprehensive research study undertaken to investigate the influence of rainfall and catchment characteristics on urban stormwater quality. The knowledge created is expected to contribute to a greater understanding of urban stormwater quality and thereby enhance the design of stormwater quality treatment systems. The research study was undertaken based on selected urban catchments in Gold Coast, Australia. The research methodology included field investigations, laboratory testing, computer modelling and data analysis. Both univariate and multivariate data analysis techniques were used to investigate the influence of rainfall and catchment characteristics on urban stormwater quality. The rainfall characteristics investigated were average rainfall intensity and rainfall duration, whilst the catchment characteristics were land use, impervious area percentage, urban form and pervious area location. The catchment-scale data for the analysis were obtained from four residential catchments, including rainfall-runoff records, drainage network data, stormwater quality data and land use and land cover data. Pollutant build-up samples were collected from twelve road surfaces in residential, commercial and industrial land use areas. The relationships between rainfall characteristics, catchment characteristics and urban stormwater quality were investigated based on residential catchments and then extended to other land uses. Based on the influence rainfall characteristics exert on urban stormwater quality, rainfall events can be classified into three types: high average intensity-short duration (Type 1), high average intensity-long duration (Type 2) and low average intensity-long duration (Type 3). This provides an innovative alternative to conventional modelling, which does not commonly relate stormwater quality to rainfall characteristics. Additionally, it was found that the threshold intensity for pollutant wash-off from urban catchments is much lower than for rural catchments. High average intensity-short duration rainfall events are cumulatively responsible for generating a major fraction of the annual pollutant load compared to the other rainfall event types. Additionally, rainfall events of less than 1-year ARI, such as 6-month ARI, should be considered in treatment design as they generate a significant fraction of the annual runoff volume and, by implication, a significant fraction of the pollutant load. This implies that stormwater treatment designs based on larger rainfall events would not be feasible in terms of cost-effectiveness, treatment performance and possible savings in the land area needed. It also suggests that the simulation of long-term continuous rainfall events for stormwater treatment design may not be needed and that event-based simulations would be adequate. The investigations into the relationship between catchment characteristics and urban stormwater quality found that, in addition to conventional catchment characteristics such as land use and impervious area percentage, other characteristics such as urban form and pervious area location also play important roles in influencing urban stormwater quality. These outcomes point to the fact that the conventional modelling approach in the design of stormwater quality treatment systems, which is commonly based on land use and impervious area percentage alone, would be inadequate.
It was also noted that small uniformly urbanised areas within a larger mixed catchment produce relatively lower variations in stormwater quality and, as expected, lower runoff volume, with the opposite being the case for large mixed-use urbanised catchments. Therefore, a decentralised approach to water quality treatment would be more effective than an "end-of-pipe" approach. The investigation of pollutant build-up on different land uses showed that pollutant build-up characteristics vary even within the same land use. Therefore, the conventional approach in stormwater quality modelling, which is based solely on land use, may prove to be inappropriate. Industrial land use showed relatively higher variability in maximum pollutant build-up, build-up rate and particle size distribution than the other two land uses, whereas commercial and residential land uses showed relatively higher variations in nutrient and organic carbon build-up. Additionally, particle size distribution had relatively higher variability for all three land uses compared to the other build-up parameters. This high variability illustrates the dissimilarities associated with the fine and coarse particle size fractions even within the same land use, and hence the variations in stormwater quality in relation to pollutants adsorbing to different particle sizes.
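A minimal sketch of the three-way event typology described above follows. The intensity and duration cut-offs here are assumed placeholders purely for illustration; the abstract does not report fixed thresholds.

```python
# Classify rainfall events into the three types named above.
# Thresholds are assumed placeholders, not values from the thesis.
from dataclasses import dataclass

@dataclass
class RainfallEvent:
    avg_intensity_mm_h: float
    duration_h: float

def classify(e: RainfallEvent,
             intensity_cut: float = 20.0,     # assumed, mm/h
             duration_cut: float = 2.0) -> str:  # assumed, h
    high_i = e.avg_intensity_mm_h >= intensity_cut
    long_d = e.duration_h >= duration_cut
    if high_i and not long_d:
        return "Type 1: high average intensity, short duration"
    if high_i and long_d:
        return "Type 2: high average intensity, long duration"
    if not high_i and long_d:
        return "Type 3: low average intensity, long duration"
    return "unclassified: low intensity, short duration"

print(classify(RainfallEvent(avg_intensity_mm_h=35.0, duration_h=0.5)))
```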
Abstract:
Many applications can benefit from the accurate surface temperature estimates that a passive thermal-infrared camera can provide. However, the process of radiometric calibration that enables this can be both expensive and time-consuming. An ad hoc approach to radiometric calibration is proposed that does not require specialized equipment and can be completed in a fraction of the time of the conventional method. The proposed approach utilizes the mechanical properties of the camera to estimate scene temperatures automatically, and uses these target temperatures to model the effect of sensor temperature on the digital output. A comparison with a conventional approach using a blackbody radiation source shows that the accuracy of the method is sufficient for many tasks requiring temperature estimation. Furthermore, a novel visualization method is proposed for displaying the radiometrically calibrated images to human operators. The representation employs an intuitive coloring scheme and allows the viewer to perceive a wide range of temperatures accurately.
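A minimal sketch in the spirit of the calibration described above: regress raw digital counts on known scene temperature and sensor temperature, then invert the fit to estimate scene temperature from new readings. The linear model form and all data below are assumptions for illustration, not the paper's model.

```python
# Fit counts ~ a*T_scene + b*T_sensor + c and invert for T_scene.
import numpy as np

counts = np.array([7100.0, 7420.0, 7800.0, 8150.0, 8610.0])  # hypothetical
t_sensor = np.array([22.0, 24.5, 23.1, 26.0, 25.2])          # degC
t_scene = np.array([10.0, 20.0, 30.0, 40.0, 50.0])           # degC, known

A = np.column_stack([t_scene, t_sensor, np.ones_like(t_scene)])
(a, b, c), *_ = np.linalg.lstsq(A, counts, rcond=None)

def estimate_scene_temp(count: float, sensor_temp: float) -> float:
    """Invert the fitted linear model for the scene temperature."""
    return (count - b * sensor_temp - c) / a

print(estimate_scene_temp(7900.0, 24.0))
```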
Abstract:
Objective: To understand how the formal curriculum experience of an Australian undergraduate pharmacy program supports students' professional identity formation. Methods: A qualitative ethnographic study was conducted over four weeks using participant observation, examining the 'typical' student experience from the perspective of a pharmacist. A one-week period of observation was undertaken with each of the four year groups (years one to four) comprising the undergraduate curriculum. Data were collected through observation of the formal curriculum experience using field notes, a reflective journal and informal interviews with 38 pharmacy students. Data were analyzed thematically using an a priori analytical framework. Results: Our findings showed that the observed curriculum was a conventional curricular experience focused on the provision of technical knowledge, with some opportunities for practical engagement. There were some opportunities for students to imagine themselves as pharmacists, for example when lecture content related to practice or teaching staff described their approach to practice problems. However, there were limited opportunities for students to observe pharmacist role models, experiment with being a pharmacist or evaluate their professional identities. While curricular learning activities were available for students to develop as pharmacists, e.g. patient counseling, there was no contact with patients, and pharmacist academic staff tended to act as role models of educators, with little evidence of their pharmacist selves. Conclusion: These findings suggest that the current conventional approach to curriculum design may not fully enable learning experiences that support students in successfully negotiating their professional identities. Instead, it appeared to reinforce their identities as students with a naïve understanding of professional practice, making their future transition to professional practice challenging.
Abstract:
Ever-growing populations in cities are associated with a major increase in road vehicles and air pollution. Overall high levels of urban air pollution have been shown to pose a significant risk to city dwellers. However, the impacts of very high but temporally and spatially restricted pollution, and thus exposure, are still poorly understood. Conventional approaches to air quality monitoring are based on networks of static and sparse measurement stations. These are, however, prohibitively expensive for capturing spatio-temporal heterogeneity and identifying pollution hotspots, as is required for the development of robust real-time strategies for exposure control. Current progress in developing low-cost micro-scale sensing technology is radically changing the conventional approach by allowing real-time information in a capillary form. The question remains, however, whether there is value in the less accurate data such sensors generate. This article illustrates the drivers behind the current rise in the use of low-cost sensors for air pollution management in cities, while addressing the major challenges to their effective implementation.
Abstract:
This paper presents an efficient non-iterative method for distribution state estimation using the conditional multivariate complex Gaussian distribution (CMCGD). In the proposed method, the mean and standard deviation (SD) of the state variables are obtained in one step, considering load uncertainties, measurement errors, and load correlations. First, the bus voltages, branch currents, and injection currents are represented by the MCGD using a direct load flow and a linear transformation. Then, the mean and SD of the bus voltages, or of other states, are calculated using the CMCGD and the estimation-of-variance method. The mean and SD of pseudo measurements, as well as spatial correlations between pseudo measurements, are modeled from historical data for different levels of the load duration curve. The proposed method can handle load uncertainties without resorting to time-consuming approaches such as Monte Carlo simulation. Simulation results for two case studies, a six-bus and a realistic 747-bus distribution network, show the effectiveness of the proposed method in terms of speed, accuracy, and quality compared with the conventional approach.
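The closed-form conditioning step at the heart of such a method can be illustrated with the standard Gaussian conditional identity. The sketch below uses the real-valued analogue (the paper works with complex Gaussians), and the partitioning and numbers are illustrative only, not the paper's network model.

```python
# Condition a joint Gaussian on an observed block in one step (no sampling):
#   mu_1|2 = mu1 + S12 S22^-1 (x2 - mu2),  S_1|2 = S11 - S12 S22^-1 S21.
import numpy as np

def condition_gaussian(mu, cov, obs_idx, obs_val):
    idx = np.arange(len(mu))
    rest = np.setdiff1d(idx, obs_idx)
    s11 = cov[np.ix_(rest, rest)]
    s12 = cov[np.ix_(rest, obs_idx)]
    s22 = cov[np.ix_(obs_idx, obs_idx)]
    gain = s12 @ np.linalg.inv(s22)
    mu_c = mu[rest] + gain @ (obs_val - mu[obs_idx])
    cov_c = s11 - gain @ s12.T
    return mu_c, cov_c

mu = np.array([1.0, 0.5, 0.2])  # hypothetical state means
cov = np.array([[0.10, 0.02, 0.01],
                [0.02, 0.08, 0.03],
                [0.01, 0.03, 0.09]])
m, c = condition_gaussian(mu, cov, np.array([2]), np.array([0.25]))
print(m, np.sqrt(np.diag(c)))  # conditional means and SDs in one step
```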
Abstract:
In routine industrial design, fatigue life estimation is largely based on S-N curves and ad hoc cycle counting algorithms used with Miner's rule for predicting life under complex loading. However, there are well-known deficiencies in this conventional approach. Of the many cumulative damage rules that have been proposed, Manson's Double Linear Damage Rule (DLDR) has been the most successful. Here we follow up, through comparisons with experimental data from many sources, on a new approach to empirical fatigue life estimation ('A Constructive Empirical Theory for Metal Fatigue Under Block Cyclic Loading', Proceedings of the Royal Society A, in press). The basic modeling approach is first described: it depends on enforcing mathematical consistency between the predictions of simple empirical models that include indeterminate functional forms and published fatigue data from handbooks. This consistency is enforced by setting up and (with luck) solving a functional equation with three independent variables and six unknown functions. The model, after various parameters are eliminated or identified, retains three fitted parameters; for the experimental data available, one of these may be set to zero. On comparison against data from several different sources, with two fitted parameters, we find that our model works about as well as the DLDR and much better than Miner's rule. We finally discuss some ways in which the model might be used, beyond the scope of the DLDR.
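For contrast with the paper's model, the conventional baseline it critiques is easy to state: Miner's rule sums fractional damage n_i/N_i over load blocks, with each N_i read off an S-N curve. A minimal sketch, assuming a Basquin-form S-N curve with made-up constants:

```python
# Miner's rule with a Basquin S-N curve S = A * N^-b (constants assumed).
A_COEF = 900.0  # hypothetical Basquin coefficient, MPa
B_EXP = 0.09    # hypothetical Basquin exponent

def cycles_to_failure(stress_amplitude_mpa: float) -> float:
    """Invert S = A * N^-b for N."""
    return (stress_amplitude_mpa / A_COEF) ** (-1.0 / B_EXP)

def miner_damage(blocks):
    """Sum n_i / N_i over (stress amplitude MPa, cycles) blocks;
    Miner's rule predicts failure when the sum reaches 1."""
    return sum(n / cycles_to_failure(s) for s, n in blocks)

blocks = [(300.0, 1.0e5), (200.0, 5.0e6)]  # hypothetical block loading
print(f"Miner damage sum: {miner_damage(blocks):.2f} (failure at 1.0)")
```

This sum is independent of the order in which the blocks are applied, one of the well-known deficiencies that sequence-sensitive rules such as the DLDR were developed to address.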