905 results for Advanced Model Approach (AMA)
Abstract:
HYPOTHESIS A multielectrode probe in combination with an optimized stimulation protocol could provide sufficient sensitivity and specificity to act as an effective safety mechanism for preservation of the facial nerve in case of an unsafe drill distance during image-guided cochlear implantation. BACKGROUND Minimally invasive cochlear implantation is enabled by image-guided and robotic-assisted drilling of an access tunnel to the middle ear cavity. The approach requires the drill to pass at distances below 1 mm from the facial nerve, and thus safety mechanisms for protecting this critical structure are required. Neuromonitoring is currently used to determine facial nerve proximity in mastoidectomy but lacks the sensitivity and specificity necessary to effectively distinguish the close distance ranges experienced in the minimally invasive approach, possibly because of current shunting of uninsulated stimulating drilling tools in the drill tunnel and because of nonoptimized stimulation parameters. To this end, we propose an advanced neuromonitoring approach using varying levels of stimulation parameters together with an integrated bipolar and monopolar stimulating probe. MATERIALS AND METHODS An in vivo study (sheep model) was conducted in which measurements at specifically planned and navigated lateral distances from the facial nerve were performed to determine whether specific sets of stimulation parameters in combination with the proposed neuromonitoring system could reliably detect an imminent collision with the facial nerve. For the accurate positioning of the neuromonitoring probe, a dedicated robotic system for image-guided cochlear implantation was used, and drilling accuracy was corrected on postoperative microcomputed tomographic images. RESULTS From 29 trajectories analyzed in five different subjects, a correlation between stimulus threshold and drill-to-facial nerve distance was found in trajectories colliding with the facial nerve (distance <0.1 mm). The shortest pulse duration that provided the highest linear correlation between stimulation intensity and drill-to-facial nerve distance was 250 μs. Only at low stimulus intensity values (≤0.3 mA) and with the bipolar configurations of the probe did the neuromonitoring system enable sufficient lateral specificity (>95%) at distances to the facial nerve below 0.5 mm. However, reducing the stimulus threshold to 0.3 mA or lower reduced the facial nerve distance detection range to below 0.1 mm (>95% sensitivity). Subsequent histopathology follow-up of three representative cases where the neuromonitoring system could reliably detect a collision with the facial nerve (distance <0.1 mm) revealed either mild or no damage to the nerve fascicles. CONCLUSION Our findings suggest that although no general correlation between facial nerve distance and stimulation threshold existed, possibly because of variations in patient-specific anatomy, correlations at very close distances to the facial nerve and high levels of specificity would enable a binary response warning system to be developed using the proposed probe at low stimulation currents.
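As a rough illustration only (not part of the study), the short Python sketch below shows how sensitivity and specificity of such a binary proximity warning could be computed, assuming the warning fires when the measured stimulation threshold falls at or below a chosen current; the distance/threshold pairs are invented.

```python
# Hypothetical sketch: evaluating a binary facial-nerve proximity warning.
# The (distance_mm, threshold_mA) pairs are invented for illustration only;
# the study's actual measurements are not reproduced here.

def warning_performance(samples, current_limit_mA=0.3, danger_distance_mm=0.1):
    """A warning fires when the measured stimulation threshold is <= current_limit_mA.
    A sample counts as a true positive if the drill is within danger_distance_mm of the nerve."""
    tp = fp = tn = fn = 0
    for distance_mm, threshold_mA in samples:
        warned = threshold_mA <= current_limit_mA
        danger = distance_mm < danger_distance_mm
        if warned and danger:
            tp += 1
        elif warned and not danger:
            fp += 1
        elif not warned and danger:
            fn += 1
        else:
            tn += 1
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    return sensitivity, specificity

# Example with fabricated data points (distance in mm, threshold in mA):
samples = [(0.05, 0.2), (0.08, 0.3), (0.4, 0.6), (0.8, 1.2), (0.05, 0.25), (1.5, 2.0)]
print(warning_performance(samples))
```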
Abstract:
This thesis presents a theoretical investigation of applications of the Raman effect in optical fibre communication, as well as the design and optimisation of various Raman-based devices and transmission schemes. The techniques used are mainly based on numerical modelling. The results presented in this thesis are divided into three main parts. First, novel designs of Raman fibre lasers (RFLs) based on Phosphosilicate core fibre are analysed and optimised for efficiency using a discrete power balance model. The designs include a two-stage RFL based on Phosphosilicate core fibre for telecommunication applications, a composite RFL for the 1.6 μm spectral window, and a multiple-output-wavelength RFL intended for use as a compact pump source for flat-gain Raman amplifiers. The use of Phosphosilicate core fibre is proven to effectively reduce design complexity and hence leads to better efficiency, stability and potentially lower cost. Second, the generalised Raman amplified gain model approach based on power balance analysis and direct numerical simulation is developed. The approach can be used to effectively simulate optical transmission systems with distributed Raman amplification. Last, the potential employment of a hybrid amplification scheme, which is a combination of a distributed Raman amplifier and an Erbium-doped amplifier, is investigated using the generalised Raman amplified gain model. The analysis focuses on the use of the scheme to upgrade a standard fibre network to a 40 Gb/s system.
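As a rough, hedged illustration of the kind of power balance calculation referred to above (not the thesis's actual model), the following Python sketch integrates two coupled equations for a single forward pump and signal; all fibre parameters are placeholders.

```python
# Minimal sketch: forward-pumped Raman amplification via two coupled power
# balance equations, integrated with a simple Euler step. Parameter values
# are illustrative placeholders, not those used in the thesis.

def raman_power_balance(Pp0_W=0.5, Ps0_W=1e-4, length_km=20.0, steps=20000,
                        gR_over_Aeff=0.7,      # Raman gain efficiency, 1/(W*km)
                        alpha_s=0.046,         # signal loss ~0.2 dB/km, in 1/km
                        alpha_p=0.058,         # pump loss ~0.25 dB/km, in 1/km
                        lambda_p_nm=1455.0, lambda_s_nm=1550.0):
    dz = length_km / steps
    Pp, Ps = Pp0_W, Ps0_W
    photon_ratio = lambda_s_nm / lambda_p_nm   # = omega_p / omega_s, pump depletion factor
    for _ in range(steps):
        dPs = ( gR_over_Aeff * Pp * Ps - alpha_s * Ps) * dz
        dPp = (-photon_ratio * gR_over_Aeff * Pp * Ps - alpha_p * Pp) * dz
        Ps += dPs
        Pp += dPp
    return Pp, Ps

Pp_end, Ps_end = raman_power_balance()
print(f"signal out: {Ps_end*1e3:.3f} mW, residual pump: {Pp_end*1e3:.1f} mW")
```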
Abstract:
Managed lane strategies are innovative road operation schemes for addressing congestion problems. These strategies operate a lane (or lanes) adjacent to a freeway that provides congestion-free trips to eligible users, such as transit or toll-payers. To ensure the successful implementation of managed lanes, the demand on these lanes needs to be accurately estimated. Among different approaches for predicting this demand, the four-step demand forecasting process is most common. Managed lane demand is usually estimated at the assignment step. Therefore, the key to reliably estimating the demand is the utilization of effective assignment modeling processes.

Managed lanes are particularly effective when the road is functioning at near-capacity. Therefore, capturing variations in demand and in network attributes and performance is crucial for their modeling, monitoring and operation. As a result, traditional modeling approaches, such as those used in the static traffic assignment of demand forecasting models, fail to correctly predict the managed lane demand and the associated system performance. The present study demonstrates the power of the more advanced modeling approach of dynamic traffic assignment (DTA), as well as the shortcomings of conventional approaches, when used to model managed lanes in congested environments. In addition, the study develops processes to support an effective utilization of DTA to model managed lane operations.

Static and dynamic traffic assignments consist of demand, network, and route choice model components that need to be calibrated. These components interact with each other, and an iterative method for calibrating them is needed. In this study, an effective standalone framework that combines static demand estimation and dynamic traffic assignment has been developed to replicate real-world traffic conditions.

With advances in traffic surveillance technologies, collecting, archiving, and analyzing traffic data is becoming more accessible and affordable. The present study shows how data from multiple sources can be integrated, validated, and best used in different stages of modeling and calibration of managed lanes. Extensive and careful processing of demand, traffic, and toll data, as well as proper definition of performance measures, results in a calibrated and stable model which closely replicates real-world congestion patterns and can reasonably respond to perturbations in network and demand properties.
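As a toy illustration of the assignment step at which managed lane demand is estimated (not the study's actual DTA framework), the Python sketch below splits a fixed demand between a general-purpose lane and a tolled managed lane using a logit choice on generalised cost with BPR volume-delay functions, updated by the method of successive averages; all parameter values are invented.

```python
# Toy sketch of the assignment step at which managed-lane demand is estimated:
# travellers choose between a general-purpose (GP) lane and a tolled managed
# lane (ML) via a logit model on generalised cost, and flows are updated with
# the method of successive averages (MSA). All numbers are illustrative.

import math

def bpr_time(t0, volume, capacity, alpha=0.15, beta=4.0):
    """Standard BPR volume-delay function (minutes)."""
    return t0 * (1.0 + alpha * (volume / capacity) ** beta)

def assign(demand_vph=4000.0, toll_min_equiv=6.0, theta=0.5, iters=200):
    # (free-flow time in minutes, capacity in veh/h) for each alternative
    links = {"GP": (10.0, 3600.0), "ML": (10.0, 1700.0)}
    flows = {"GP": demand_vph, "ML": 0.0}          # all-or-nothing start
    for k in range(1, iters + 1):
        cost = {a: bpr_time(t0, flows[a], cap) for a, (t0, cap) in links.items()}
        cost["ML"] += toll_min_equiv               # toll expressed in equivalent minutes
        # logit split of total demand given current costs
        denom = sum(math.exp(-theta * c) for c in cost.values())
        target = {a: demand_vph * math.exp(-theta * cost[a]) / denom for a in links}
        # MSA step: move a 1/k fraction of the flow toward the target split
        flows = {a: flows[a] + (target[a] - flows[a]) / k for a in links}
    return flows, cost

flows, costs = assign()
print({a: round(v) for a, v in flows.items()}, {a: round(c, 1) for a, c in costs.items()})
```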
Abstract:
As the world’s population is growing, so is the demand for agricultural products. However, natural nitrogen (N) fixation and phosphorus (P) availability cannot sustain the rising agricultural production; thus, the application of N and P fertilisers as additional nutrient sources is common. It is those anthropogenic activities that can contribute high amounts of organic and inorganic nutrients to both surface and groundwaters, resulting in degradation of water quality and a possible reduction of aquatic life. In addition, runoff and sewage from urban and residential areas can contain high amounts of inorganic and organic nutrients, which may also affect water quality. For example, blooms of the cyanobacterium Lyngbya majuscula along the coastline of southeast Queensland are an indicator of at least short-term decreases in water quality. Although Australian catchments, including those with intensive forms of land use, show in general a low export of nutrients compared to North American and European catchments, certain land use practices may still have a detrimental effect on the coastal environment. Numerous studies on nutrient cycling and associated processes at a catchment scale have been reported in the Northern Hemisphere. Comparable studies in Australia, in particular in subtropical regions, are, however, limited, and there is a paucity of data, in particular for inorganic and organic forms of nitrogen and phosphorus; these nutrients are important limiting factors for algal blooms in surface waters. Therefore, monitoring N and P and understanding the sources and pathways of these nutrients within a catchment are important in coastal zone management. Although Australia is the driest continent, in subtropical regions such as southeast Queensland rainfall patterns have a significant effect on runoff and thus on the nutrient cycle at a catchment scale. These rainfall patterns are becoming increasingly variable. The monitoring of these climatic conditions and of the hydrological response of agricultural catchments is therefore also important to reduce the anthropogenic effects on surface and groundwater quality. This study consists of an integrated hydrological–hydrochemical approach that assesses N and P in an environment with multiple land uses. The main aim is to determine the nutrient cycle within a representative coastal catchment in southeast Queensland, the Elimbah Creek catchment. In particular, the investigation confirms the influence associated with forestry and agriculture on N and P forms, sources, distribution and fate in the surface and groundwaters of this subtropical setting. In addition, the study determines whether N and P are subject to transport into the adjacent estuary and thus into the marine environment; also considered is the effect of local topography, soils and geology on N and P sources and distribution. The thesis is structured around four components, each reported individually. The first paper determines the controls of catchment settings and processes on stream water, riverbank sediment, and shallow groundwater N and P concentrations, in particular during the extended dry conditions that were encountered during the study. Temporal and spatial factors such as seasonal changes, soil character, land use and catchment morphology are considered, as well as their effect on controls over distributions of N and P in surface waters and associated groundwater.
A total of 30 surface water and 13 shallow groundwater sampling sites were established throughout the catchment to represent the dominant soil types and the land use upstream of each sampling location. Sampling comprised five rounds and was conducted over one year between October 2008 and November 2009. Surface water and groundwater samples were analysed for all major dissolved inorganic forms of N and for total N. Phosphorus was determined in the form of dissolved reactive P (predominantly orthophosphate) and total P. In addition, extracts of stream bank sediments and soil grab samples were analysed for these N and P species. Findings show that major storm events, in particular after long periods of drought conditions, are the driving force of N cycling. This is expressed by higher inorganic N concentrations in the agricultural subcatchment compared to the forested subcatchment. Nitrate N is the dominant inorganic form of N in both the surface and groundwaters, and values are significantly higher in the groundwaters. Concentrations in the surface water range from 0.03 to 0.34 mg N L⁻¹; organic N concentrations are considerably higher (average range: 0.33 to 0.85 mg N L⁻¹), in particular in the forested subcatchment. Average NO3-N in the groundwater ranges from 0.39 to 2.08 mg N L⁻¹, and organic N averages between 0.07 and 0.3 mg N L⁻¹. The stream bank sediments are dominated by organic N (range: 0.53 to 0.65 mg N L⁻¹), and the dominant inorganic form of N is NH4-N, with values ranging between 0.38 and 0.41 mg N L⁻¹. Topography and soils, however, were not found to have a significant effect on N and P concentrations in waters. Detectable phosphorus in the surface and groundwaters of the catchment is limited to several locations, typically in the proximity of areas with intensive animal use; in soil and sediments, P is negligible. In the second paper, the stable isotopes of N (14N/15N) and H2O (16O/18O and 2H/1H) in surface and groundwaters are used to identify sources of dissolved inorganic and organic N in these waters, and to determine their pathways within the catchment; specific emphasis is placed on the influence of forestry and agriculture. Forestry is predominantly concentrated in the northern subcatchment (Beerburrum Creek), while agriculture is mainly found in the southern subcatchment (Six Mile Creek). Results show that agriculture (horticulture, crops, grazing) is the main source of inorganic N in the surface waters of the agricultural subcatchment, and its isotopic signature shows a close link to evaporation processes that may occur during water storage in farm dams used for irrigation. Groundwaters are subject to denitrification processes that may result in reduced dissolved inorganic N concentrations. Soil organic matter delivers most of the inorganic N to the surface water in the forested subcatchment. Here, precipitation, and subsequently runoff, is the main source of the surface waters. Groundwater in this area is affected by agricultural processes. The findings also show that the catchment can attenuate the effects of anthropogenic land use on surface water quality. Riparian strips of natural remnant vegetation, commonly 50 to 100 m in width, act as buffer zones along the drainage lines in the catchment and remove inorganic N from the soil water before it enters the creek.
These riparian buffer zones are common in most agricultural catchments of southeast Queensland and appear to reduce the impact of agriculture on stream water quality and subsequently on the estuary and marine environments. This reduction is expressed by a significant decrease in DIN concentrations from 1.6 mg N L⁻¹ to 0.09 mg N L⁻¹, and by a decrease in the δ15N signatures from upstream surface water locations downstream to the outlet of the agricultural subcatchment. Further testing is, however, necessary to confirm these processes. Most importantly, the amount of N that is transported to the adjacent estuary is shown to be negligible. The third and fourth components of the thesis use a hydrological catchment model approach to determine the water balance of the Elimbah Creek catchment. The model is then used to simulate the effects of land use on the water balance and nutrient loads of the study area. The tool used is the internationally widely applied Soil and Water Assessment Tool (SWAT). Knowledge of the water cycle of a catchment is imperative in nutrient studies, as processes such as rainfall, surface runoff, soil infiltration and routing of water through the drainage system are the driving forces of the catchment nutrient cycle. Long-term information about discharge volumes of the creeks and rivers does not, however, exist for a number of agricultural catchments in southeast Queensland, and such information is necessary to calibrate and validate numerical models. Therefore, a two-step modelling approach was used in which parameter values calibrated and validated for a nearby gauged reference catchment served as starting values for the ungauged Elimbah Creek catchment. Transposing the monthly calibrated and validated parameter values from the reference catchment to the ungauged catchment significantly improved model performance, showing that the hydrological model of the catchment of interest is a strong predictor of the water balance. The model efficiency coefficient EF shows that 94% of the simulated discharge matches the observed flow, whereas only 54% of the observed streamflow was simulated by the SWAT model prior to using the validated values from the reference catchment. In addition, the hydrological model confirmed that total surface runoff contributes the majority of flow to the surface water in the catchment (65%). Only a small proportion of the water in the creek is contributed by total baseflow (35%). This finding supports the results of the stable isotopes 16O/18O and 2H/1H, which show that the main source of water in the creeks is either local precipitation or irrigation water delivered by surface runoff; a contribution from the groundwater (baseflow) to the creeks could not be identified using 16O/18O and 2H/1H. In addition, the SWAT model calculated that around 68% of the rainfall occurring in the catchment is lost through evapotranspiration, reflecting the prevailing long-term drought conditions observed prior to and during the study. Stream discharge from the forested subcatchment was an order of magnitude lower than discharge from the agricultural Six Mile Creek subcatchment. A change in land use from forestry to agriculture did not significantly change the catchment water balance; however, nutrient loads increased considerably. Conversely, a simulated change from agriculture to forestry resulted in a significant decrease of nitrogen loads.
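The model efficiency coefficient (EF) mentioned above is, in SWAT applications, commonly the Nash–Sutcliffe efficiency; the brief Python sketch below assumes that definition and uses invented monthly discharge values purely for illustration.

```python
# Minimal sketch of the model efficiency coefficient (EF), assuming it is the
# Nash-Sutcliffe efficiency commonly reported for SWAT models:
# EF = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)

def nash_sutcliffe(observed, simulated):
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

# Illustrative monthly discharge values (m3/s), not data from the study:
obs = [0.4, 0.1, 0.05, 0.02, 0.3, 1.8, 0.9, 0.5, 0.2, 0.1, 0.6, 2.4]
sim = [0.35, 0.12, 0.06, 0.03, 0.25, 1.6, 1.0, 0.45, 0.22, 0.12, 0.55, 2.2]
print(f"EF = {nash_sutcliffe(obs, sim):.2f}")
```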
The findings of the thesis and the approach used are shown to be of value to catchment water quality monitoring on a wider scale, in particular regarding the implications of mixed land use for nutrient forms, distributions and concentrations. The study confirms that in the tropics and subtropics the water balance is affected by extended dry periods and seasonal rainfall with intensive storm events. In particular, the comprehensive data set of inorganic and organic N and P forms in the surface and groundwaters of this subtropical setting, acquired during the one-year sampling program, may be used in similar catchment hydrological studies where such detailed information is missing. Also, the study concludes that riparian buffer zones along the catchment drainage system attenuate the transport of nitrogen from agricultural sources in the surface water. Concentrations of N decreased from upstream to downstream locations and were negligible at the outlet of the catchment.
Abstract:
This thesis reports on an investigation to develop an advanced and comprehensive milling process model of the raw sugar factory. Although the new model can be applied to both four-roller and six-roller milling units, it is primarily developed for the six-roller mills which are widely used in the Australian sugar industry. The approach taken was to gain an understanding of the previous milling process simulation model "MILSIM" developed at the University of Queensland nearly four decades ago. Although the MILSIM model was widely adopted in the Australian sugar industry for simulating the milling process, it did have some incorrect assumptions. The study aimed to eliminate all the incorrect assumptions of the previous model and develop an advanced model that represents the milling process correctly and tracks the flow of other cane components in the milling process which have not been considered in the previous models. The development of the milling process model was done in three stages. Firstly, an enhanced milling unit extraction model (MILEX) was developed to assess the mill performance parameters and predict the extraction performance of the milling process. New definitions for the milling performance parameters were developed, and a complete milling train along with the juice screen was modelled. The MILEX model was validated with factory data, and the variation in the mill performance parameters was observed and studied. Some case studies were undertaken to study the effect of fibre in juice streams, juice in cush return and imbibition % fibre on the extraction performance of the milling process. It was concluded from the study that the empirical relations developed for the mill performance parameters in the MILSIM model were not applicable to the new model. New empirical relations have to be developed before the model can be applied with confidence. Secondly, a soluble and insoluble solids model was developed using modelling theory and experimental data to track the flow of sucrose (pol), reducing sugars (glucose and fructose), soluble ash, true fibre and mud solids entering the milling train through the cane supply and their distribution in juice and bagasse streams. The soluble impurities and mud solids in cane affect the performance of the milling train and the further processing of juice and bagasse. New mill performance parameters were developed in the model to track the flow of cane components. The developed model is the first of its kind and provides additional insight regarding the flow of soluble and insoluble cane components and the factors affecting their distribution in juice and bagasse. The model proved to be a good extension to the MILEX model for studying the overall performance of the milling train. Thirdly, the developed models were incorporated into the proprietary software package "SysCAD" for advanced operational efficiency and for availability in the 'whole of factory' model. The MILEX model was developed in SysCAD to represent a single milling unit. Eventually, the entire milling train and the juice screen were developed in SysCAD using a series of different controllers and features of the software. The models developed in SysCAD can be run from a macro-enabled Excel file, and reports can be generated in Excel sheets. The flexibility of the software, its ease of use and other advantages are described broadly in the relevant chapter. The MILEX model is developed in both static and dynamic modes. The application of the dynamic mode of the model is still in progress.
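Purely to illustrate the kind of stream bookkeeping such a milling model performs (this is not the MILEX or MILSIM formulation), the Python sketch below splits one incoming cane stream into expressed juice and bagasse using invented split parameters.

```python
# Schematic mass balance for a single milling unit, purely to illustrate the
# kind of bookkeeping such a model performs. The split parameters are invented
# and do not correspond to the MILEX definitions or values.

def mill_unit(cane_fibre_t, cane_juice_t, cane_pol_t, extraction=0.70, fibre_to_juice=0.005):
    """Split the incoming stream into expressed juice and bagasse.
    extraction     : fraction of incoming juice (and the pol dissolved in it)
                     expressed at this unit.
    fibre_to_juice : fraction of incoming fibre carried into the juice stream."""
    juice_out = cane_juice_t * extraction
    pol_out = cane_pol_t * extraction
    fibre_in_juice = cane_fibre_t * fibre_to_juice
    bagasse = {
        "fibre": cane_fibre_t - fibre_in_juice,
        "juice": cane_juice_t - juice_out,
        "pol": cane_pol_t - pol_out,
    }
    juice = {"fibre": fibre_in_juice, "juice": juice_out, "pol": pol_out}
    return juice, bagasse

juice, bagasse = mill_unit(cane_fibre_t=14.0, cane_juice_t=86.0, cane_pol_t=12.0)
print(juice, bagasse)
```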
Abstract:
Bangkok Metropolitan Region (BMR) is the centre of major activities in Thailand, including politics, industry, agriculture, and commerce. Consequently, the BMR is the most populous and most densely populated area in Thailand. Thus, the demand for houses in the BMR is also the largest, especially in subdivision developments. For these reasons, subdivision development in the BMR has increased substantially in the past 20 years, generating large numbers of subdivision developments (AREA, 2009; Kridakorn Na Ayutthaya & Tochaiwat, 2010). However, this dramatic growth of subdivision development has caused several problems, including unsustainable development, especially for subdivision neighbourhoods in the BMR. Rating tools exist that encourage the sustainability of neighbourhood design in subdivision development, but they still have practical problems. Such rating tools do not cover the scale of the development entirely, and they concentrate more on the social and environmental conservation aspects, which have not been fully accepted by developers (Boonprakub, 2011; Tongcumpou & Harvey, 1994). These factors strongly confirm the need for an appropriate rating tool for sustainable subdivision neighbourhood design in the BMR. To improve the level of acceptance from all stakeholders in the subdivision development industry, the new rating tool should be developed based on an approach that unites the social, environmental, and economic dimensions, such as the eco-efficiency principle. Eco-efficiency is a sustainability indicator introduced by the World Business Council for Sustainable Development (WBCSD) in 1992. Eco-efficiency is defined as the ratio of product or service value to its environmental impact (Lehni & Pepper, 2000; Sorvari et al., 2009). The eco-efficiency indicator is concerned with the business while, simultaneously, being concerned with social and environmental impacts. This study aims to develop a new rating tool named "Rating for sustainable subdivision neighbourhood design (RSSND)". The RSSND methodology is developed through a combination of literature reviews, field surveys, eco-efficiency model development, a trial-and-error technique, and a tool validation process. All required data were collected by field surveys from July to November 2010. The eco-efficiency model is a combination of three different mathematical models attributable to the neighbourhood subdivision design: the neighbourhood property price (NPP) model, the neighbourhood development cost (NDC) model, and the neighbourhood occupancy cost (NOC) model. The NPP model is formulated using a hedonic price model approach, while the NDC and NOC models are formulated using multiple regression analysis. The trial-and-error technique is adopted to simplify the complex mathematical eco-efficiency model into a user-friendly rating tool format. The credibility of the RSSND has been validated using eight subdivisions, both rated and non-rated. The tool is expected to meet the requirements of all stakeholders by supporting the social activities of residents, maintaining the environmental condition of the development and surrounding areas, and meeting the economic requirements of developers.
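As a hedged illustration of the value-to-impact ratio underlying eco-efficiency (the thesis's actual combination of the NPP, NDC and NOC models is not reproduced here), the Python sketch below treats predicted property price as the value term and development plus occupancy cost as the impact proxy, an assumption made for illustration only.

```python
# Minimal sketch of an eco-efficiency style ratio combining the three
# sub-models mentioned above. Treating value as the predicted property price
# and impact as development plus occupancy cost is an assumption made purely
# for illustration, not the thesis's actual formulation.

def eco_efficiency(npp_value, ndc_cost, noc_cost):
    """Eco-efficiency = product/service value divided by an impact proxy."""
    return npp_value / (ndc_cost + noc_cost)

# Hypothetical subdivision designs (monetary units are arbitrary):
designs = {
    "design_A": dict(npp_value=3.2e6, ndc_cost=1.8e6, noc_cost=0.4e6),
    "design_B": dict(npp_value=3.0e6, ndc_cost=1.5e6, noc_cost=0.5e6),
}
for name, d in designs.items():
    print(name, round(eco_efficiency(**d), 2))
```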
A tag-based personalized item recommendation system using tensor modeling and topic model approaches
Abstract:
This research falls in the area of enhancing the quality of tag-based item recommendation systems. It aims to achieve this by employing a multi-dimensional user profile approach and by analyzing the semantic aspects of tags. Tag-based recommender systems have two characteristics that need to be carefully studied in order to build a reliable system. Firstly, the multi-dimensional correlation, known as tag assignment
Abstract:
Wheel bearings play a crucial role in the mobility of a vehicle by minimizing motive power loss and providing stability in cornering maneuvers. Detailed engineering analysis of a wheel bearing subsystem under dynamic conditions poses enormous challenges due to the nonlinearity of the problem caused by multiple frictional contacts between rotating and stationary parts and difficulties in predicting the dynamic loads that wheels are subjected to. Commonly used design methodologies are based on equivalent static analysis of ball or roller bearings, in which the rolling elements may even be represented by springs. In the present study, an advanced hybrid approach is suggested for realistic dynamic analysis of wheel bearings by combining lumped parameter and finite element modeling techniques. A validated lumped parameter representation serves as an efficient tool for predicting the radial wheel load due to ground reaction, which is then used in a detailed finite element analysis that automatically accounts for contact forces in an explicit formulation.
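A minimal sketch, assuming a simple quarter-vehicle idealisation rather than the study's actual lumped parameter model, of how a radial wheel load history due to ground reaction could be generated for a subsequent finite element run; masses, stiffnesses and the road input below are placeholders.

```python
# Illustrative lumped parameter sketch: a quarter-vehicle model driven over a
# bump, integrated with a fixed-step semi-implicit Euler scheme, producing a
# radial wheel (ground reaction) load history of the kind that could feed a
# detailed finite element bearing analysis. All parameters are placeholders.

import math

def radial_load_history(t_end=2.0, dt=1e-4):
    ms, mu = 400.0, 40.0          # sprung / unsprung mass per corner (kg)
    ks, cs = 2.0e4, 1.5e3         # suspension stiffness (N/m), damping (N*s/m)
    kt = 2.0e5                    # tyre radial stiffness (N/m)
    g = 9.81
    zs = vs = zu = vu = 0.0       # displacements/velocities about static equilibrium
    loads = []
    for i in range(int(t_end / dt)):
        t = i * dt
        # road input: half-sine bump, 20 mm high, traversed between 0.5 and 0.6 s
        zr = 0.02 * math.sin(math.pi * (t - 0.5) / 0.1) if 0.5 <= t <= 0.6 else 0.0
        f_susp = ks * (zu - zs) + cs * (vu - vs)      # suspension force on sprung mass
        f_tyre = kt * (zr - zu)                        # dynamic tyre (ground reaction) force
        a_s = f_susp / ms
        a_u = (-f_susp + f_tyre) / mu
        vs += a_s * dt; zs += vs * dt
        vu += a_u * dt; zu += vu * dt
        loads.append((ms + mu) * g + f_tyre)           # static + dynamic radial load
    return loads

loads = radial_load_history()
print(f"peak radial load ~ {max(loads):.0f} N")
```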
Abstract:
Within a chiral constituent quark model approach, η-meson production on the proton via electromagnetic and hadronic probes is studied. With few parameters, the differential cross section and polarized beam asymmetry for γp → ηp and the differential cross section for π⁻p → ηn are calculated and successfully compared with the data in the center-of-mass energy range from threshold up to 2 GeV. The five known resonances S11(1535), S11(1650), P13(1720), D13(1520), and F15(1680) are found to be dominant in the reaction mechanisms in both channels. Possible roles played by new resonances are also investigated; in the photoproduction channel, significant contributions from S11 and D15 resonances, with masses around 1715 and 2090 MeV, respectively, are deduced. For the so-called missing resonances, no evidence is found within the investigated reactions. The helicity amplitudes and decay widths of N* → πN, ηN are also presented and found to be consistent with the Particle Data Group values.
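As a schematic reminder only (not the chiral constituent quark model amplitudes actually used in the paper), an s-channel resonance typically enters such cross-section calculations through a Breit–Wigner-type propagator in which the resonance mass and width appear explicitly:

```latex
% Schematic s-channel Breit-Wigner contribution; illustrative only,
% with generic couplings g, resonance mass M_{N^*} and width \Gamma_{N^*}.
\mathcal{M}_{N^*}(s) \;\propto\;
  \frac{g_{\gamma N N^*}\, g_{\eta N N^*}}{\,s - M_{N^*}^2 + i\, M_{N^*}\, \Gamma_{N^*}\,}
```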
Abstract:
In this report we investigate η-meson production on the proton via electromagnetic and hadronic probes in a chiral quark model approach. The observables, such as the differential cross section and beam asymmetry for the two production processes, are calculated and compared with experiment. The five known resonances S11(1535), S11(1650), P13(1720), D13(1520), and F15(1680) are found to be dominant in the reaction mechanisms in both channels. A significant contribution from a new S11 resonance is deduced. For the so-called "missing resonances", no evidence is found within the investigated reactions. The partial wave amplitudes for π⁻p → ηn are also presented.
Abstract:
A chiral constituent quark model approach, embodying s- and u-channel exchanges and complemented with a Reggeized treatment of the t channel, is presented. A model is obtained that allows the data for π⁻p → ηn and γp → ηp to be described satisfactorily. For the latter reaction, recently released data by the CLAS and CBELSA/TAPS Collaborations in the total energy range 1.6 ≲ W ≲ 2.8 GeV are well reproduced by the inclusion of Reggeized trajectories instead of simple ρ and ω poles. The contribution from "missing" resonances, with masses below 2 GeV, is found to be negligible in the considered processes.
Abstract:
This study describes an innovative monolith structure designed for applications in automotive catalysis using an advanced manufacturing approach developed at Imperial College London. The production process combines extrusion with phase inversion of a ceramic-polymer-solvent mixture in order to design highly ordered substrate micro-structures that offer improvements in performance, including reduced PGM loading, reduced catalyst ageing and reduced backpressure.
This study compares the performance of the novel substrate for CO oxidation against commercially available 400 cpsi and 900 cpsi catalysts using gas concentrations and a flow rate equivalent to those experienced by a full catalyst brick when attached to a vehicle. Due to the novel micro-structure, no washcoat was required for the initial testing, and 13 g/ft³ of Pd was deposited directly throughout the substrate structure.
Initial results for CO oxidation indicate that the advanced micro-structure leads to enhanced conversion efficiency. Despite a 79% reduction in metal loading and the absence of a washcoat, the novel substrate sample performs well, with a light-off temperature (LOT) only 15 °C higher than that of the commercial 400 cpsi sample.
To test the effects of catalyst ageing on light-off temperature, each sample was aged statically at a temperature of 1000 °C, based on the Bench Ageing Time (BAT) equation. The novel substrate performed impressively when compared to the commercial samples, with a variation in light-off temperature of only 3% after 80 equivalent hours of ageing, compared to 12% and 25% for the 400 cpsi and 900 cpsi monoliths, respectively.
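Light-off temperature is conventionally the temperature at which conversion reaches 50%; the short Python sketch below extracts it from a conversion-temperature sweep by linear interpolation, using invented data points rather than the measurements reported here.

```python
# Minimal sketch: estimating light-off temperature (T50, the temperature at
# which CO conversion first reaches 50%) from a conversion-temperature sweep
# by linear interpolation. The data points below are invented for illustration.

def light_off_temperature(temps_C, conversions, target=0.50):
    pairs = list(zip(temps_C, conversions))
    for (t0, x0), (t1, x1) in zip(pairs, pairs[1:]):
        if x0 < target <= x1:
            return t0 + (target - x0) * (t1 - t0) / (x1 - x0)
    return None   # target conversion never reached in the sweep

temps = [150, 175, 200, 225, 250, 275, 300]          # degrees C
conv  = [0.01, 0.03, 0.10, 0.35, 0.72, 0.95, 0.99]   # CO conversion fraction
print(f"LOT ~ {light_off_temperature(temps, conv):.0f} degC")
```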
Abstract:
The objective of this study is to provide an alternative model approach, i.e., an artificial neural network (ANN) model, to predict the compositional viscosity of binary mixtures of room-temperature ionic liquids (ILs) [Cn-mim][NTf2] with n = 4, 6, 8, 10 in methanol and ethanol over the entire range of molar fraction and a broad range of temperatures from T = 293.0 to 328.0 K. The results show that the proposed ANN model provides an alternative way to predict compositional viscosity with substantially improved accuracy, and also show its potential to be extensively utilized to predict compositional viscosity over a wide range of temperatures and for more complex compositions, i.e., more complex intermolecular interactions between components, for which it would be hard or impossible to establish an analytical model. © 2010 IEEE.
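As a minimal sketch of the kind of ANN regression described (the architecture and data are placeholders, not those of the paper), the example below fits scikit-learn's MLPRegressor to synthetic (temperature, IL mole fraction) → viscosity data.

```python
# Minimal sketch of an ANN viscosity model: a small feedforward network mapping
# (temperature, IL mole fraction) to mixture viscosity. The training data below
# are synthetic placeholders, not measurements from the paper.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
T = rng.uniform(293.0, 328.0, 300)          # temperature, K
x_il = rng.uniform(0.0, 1.0, 300)           # IL mole fraction
# fake "viscosity" with an Arrhenius-like T dependence and a mixing nonlinearity
eta = np.exp(800.0 / T) * (0.5 + 3.0 * x_il**1.5) + rng.normal(0, 0.05, 300)

X = np.column_stack([T, x_il])
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000,
                                   random_state=0))
model.fit(X, np.log(eta))                   # fit log-viscosity for numerical stability
pred = np.exp(model.predict([[310.0, 0.3]]))
print(f"predicted viscosity at 310 K, x_IL = 0.3: {pred[0]:.2f} (arbitrary units)")
```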
Abstract:
In this work, a fault-tolerant control scheme is applied to an air handling unit of a heating, ventilation and air-conditioning system. Using the multiple-model approach, it is possible to identify faults and to control the system effectively under both faulty and normal conditions. Using well-known techniques to model and control the process, this work focuses on the importance of the cost function in fault detection and its influence on the reconfigurable controller. Experimental results show how the control of the terminal unit is affected in the presence of a fault, and how recuperation and reconfiguration of the control action can deal with the effects of faults.
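A minimal sketch of multiple-model fault detection, assuming first-order candidate models and a cumulative squared-residual cost function that are illustrative placeholders rather than the scheme actually used: a bank of models (normal and faulty) is run in parallel, and the model that best matches the measurements flags the current operating condition.

```python
# Minimal sketch of multiple-model fault detection for an air handling unit:
# each candidate model predicts supply-air temperature rise; the model with the
# lowest cumulative squared residual (the cost function) flags the condition.
# The first-order models and their parameters are illustrative placeholders.

def simulate(gain, y0, u, dt=1.0, tau=60.0):
    """First-order response of supply-air temperature rise to valve command u (0..1)."""
    y, out = y0, []
    for uk in u:
        y += dt / tau * (gain * uk - y)
        out.append(y)
    return out

def detect_fault(measured, u, models):
    costs = {}
    for name, gain in models.items():
        predicted = simulate(gain, measured[0], u)
        costs[name] = sum((m - p) ** 2 for m, p in zip(measured, predicted))
    best = min(costs, key=costs.get)       # model with lowest cost identifies the condition
    return best, costs

models = {"normal": 10.0, "stuck_valve": 0.0, "fouled_coil": 5.0}  # heating gains, degC
u = [0.6] * 300                                   # constant valve command
measured = simulate(4.8, 0.0, u)                  # plant behaves most like a fouled coil
condition, costs = detect_fault(measured, u, models)
print(condition, {k: round(v, 1) for k, v in costs.items()})
```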