Abstract:
Reliability has emerged as a critical design constraint, especially in memories. Designers go to great lengths to guarantee fault-free operation of the underlying silicon by adopting redundancy-based techniques, which essentially try to detect and correct every single error. However, such techniques come at the cost of large area, power, and performance overheads, which has led many researchers to doubt their efficiency, especially for error-resilient systems where 100% accuracy is not always required. In this paper, we present an alternative method focusing on confining the output error induced by reliability issues. Focusing on memory faults, rather than correcting every single error, the proposed method exploits the statistical characteristics of the target application and replaces any erroneous data with the best available estimate of that data. To realize the proposed method, a RISC processor is augmented with custom instructions and special-purpose functional units. We apply the method on the proposed enhanced processor by studying the statistical characteristics of the various algorithms involved in a popular multimedia application. Our experimental results show that, in contrast to state-of-the-art fault-tolerance approaches, we are able to reduce runtime and area overhead by 71.3% and 83.3%, respectively.
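The replace-rather-than-correct idea above can be illustrated with a minimal sketch. The estimator here (the mean of the nearest non-faulty neighbours) is a hypothetical stand-in for the application-specific statistics the paper studies; fault flags are assumed to come from a detector.

```python
# Sketch of statistically guided error confinement for memory faults.
# Hypothetical illustration: the estimator (mean of neighbouring samples)
# stands in for the application-specific statistics used in the paper.

def confine_errors(data, fault_flags):
    """Replace each value flagged as faulty with the best available
    estimate: here, the mean of its nearest non-faulty neighbours."""
    repaired = list(data)
    for i, faulty in enumerate(fault_flags):
        if not faulty:
            continue
        neighbours = [data[j] for j in (i - 1, i + 1)
                      if 0 <= j < len(data) and not fault_flags[j]]
        if neighbours:
            repaired[i] = sum(neighbours) / len(neighbours)
    return repaired

samples = [10, 12, 250, 14, 16]        # 250 is a corrupted read
flags = [False, False, True, False, False]
print(confine_errors(samples, flags))  # the faulty word becomes 13.0
```

For error-resilient workloads such as multimedia, an estimate like this is often perceptually indistinguishable from the exact value, which is what lets the method drop the heavy redundancy machinery.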
Abstract:
This paper presents a study on the implementation of Real-Time Pricing (RTP) based Demand Side Management (DSM) of water pumping at a clean water pumping station in Northern Ireland, with the intention of minimising electricity costs and maximising the usage of electricity from wind generation. A Genetic Algorithm (GA) was used to create pumping schedules based on system constraints and electricity tariff scenarios. Implementation of this method would allow the water network operator to make significant savings on electricity costs while also helping to mitigate the variability of wind generation.
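The GA-based scheduling described above can be sketched minimally as follows. The tariff values, demand constraint, and GA parameters are invented for illustration, not the paper's actual data or configuration.

```python
import random

# Minimal GA sketch for tariff-aware pump scheduling (illustrative only;
# tariff values, demand constraint, and GA parameters are assumptions,
# not the study's actual data).
random.seed(1)
HOURS = 24
tariff = [0.05] * 7 + [0.15] * 10 + [0.25] * 7   # hypothetical RTP tariff (GBP/kWh)
REQUIRED_ON = 10                                  # pump must run >= 10 hours/day

def cost(schedule):
    run_cost = sum(t for t, on in zip(tariff, schedule) if on)
    shortfall = max(0, REQUIRED_ON - sum(schedule))
    return run_cost + 10.0 * shortfall            # penalise unmet demand

def evolve(pop_size=40, generations=60):
    pop = [[random.randint(0, 1) for _ in range(HOURS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        parents = pop[: pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, HOURS)      # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(HOURS)           # point mutation
            child[i] = 1 - child[i]
            children.append(child)
        pop = parents + children
    return min(pop, key=cost)

best = evolve()
print(sum(best), cost(best))
```

In the real system, the fitness function would also encode reservoir levels, pump switching limits, and a wind-generation signal; the penalty term above stands in for those constraints.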
Abstract:
Variations are inherent in all manufacturing processes and can significantly affect the quality of a final assembly, particularly in multistage assembly systems. Existing research in variation management has primarily focused on incorporating GD&T factors into variation propagation models in order to predict product quality and allocate tolerances. However, process-induced variation, which has a key influence on process planning, has not been fully studied. Furthermore, the link between variation and cost has not been well established, in particular the effect that assembly process selection has on the final quality and cost of a product. To overcome these barriers, this paper proposes a novel method that utilizes process capabilities to establish the relationship between variation and cost. The methodology is discussed using a real industrial case study. The benefits include determining the optimum configuration of an assembly system and facilitating the rapid introduction of novel assembly techniques to achieve a competitive edge.
Abstract:
A novel surrogate model is proposed in lieu of computational fluid dynamics (CFD) code for fast nonlinear aerodynamic modeling. First, a nonlinear function is identified on selected interpolation points defined by the discrete empirical interpolation method (DEIM). The flow field is then reconstructed by a least-squares approximation of flow modes extracted by proper orthogonal decomposition (POD). The proposed model is applied to the prediction of limit cycle oscillations for a plunge/pitch airfoil and a delta wing with a linear structural model, and the results are validated against a time-accurate CFD-FEM code. The results show the model is able to replicate the aerodynamic forces and flow fields with sufficient accuracy while requiring a fraction of the CFD cost.
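The reconstruction step can be sketched in miniature: given a small set of modes and field values sampled only at a few interpolation points, the full field is recovered by least squares. The two hand-written sine modes and hard-coded sample indices below are illustrative stand-ins for a real POD basis and DEIM point selection.

```python
import math

# Sketch of least-squares field reconstruction from sparse samples,
# in the spirit of gappy POD / DEIM (modes and sample points here are
# hand-made illustrations, not output of a real CFD solver).

n = 16
modes = [
    [math.sin(math.pi * i / (n - 1)) for i in range(n)],        # mode 1
    [math.sin(2 * math.pi * i / (n - 1)) for i in range(n)],    # mode 2
]
true_coeffs = [2.0, -0.5]
field = [sum(c * m[i] for c, m in zip(true_coeffs, modes)) for i in range(n)]

sample_points = [2, 5, 9, 13]   # stand-in for DEIM-selected indices

# Solve the 2x2 normal equations  (P^T P) a = P^T y  for the coefficients.
P = [[m[i] for m in modes] for i in sample_points]
y = [field[i] for i in sample_points]
A = [[sum(P[k][r] * P[k][c] for k in range(len(P))) for c in range(2)]
     for r in range(2)]
b = [sum(P[k][r] * y[k] for k in range(len(P))) for r in range(2)]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
a = [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
     (A[0][0] * b[1] - A[1][0] * b[0]) / det]

reconstructed = [sum(c * m[i] for c, m in zip(a, modes)) for i in range(n)]
print(a)  # recovers approximately [2.0, -0.5]
```

Because the sampled field lies exactly in the span of the modes here, the least-squares fit recovers the coefficients exactly; with real CFD snapshots the fit is approximate, which is the source of the surrogate's speed/accuracy trade-off.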
Abstract:
Surface flow types (SFTs) are advocated as ecologically relevant hydraulic units, often mapped visually from the bankside to rapidly characterise the physical habitat of rivers. SFT mapping is simple, non-invasive and cost-efficient. However, it is also qualitative, subjective and plagued by difficulties in accurately recording the spatial extent of SFT units. Quantitative validation of the underlying physical habitat parameters is often lacking and does not consistently differentiate between SFTs. Here, we explicitly investigate the accuracy, reliability and statistical separability of traditionally mapped SFTs as indicators of physical habitat, using independent hydraulic and topographic data collected during three surveys of a c. 50 m reach of the River Arrow, Warwickshire, England. We also explore the potential of a novel remote sensing approach, combining a small unmanned aerial system (sUAS) with Structure-from-Motion photogrammetry (SfM), as an alternative method of physical habitat characterisation. Our key findings indicate that SFT mapping accuracy is highly variable, with overall mapping accuracy not exceeding 74%. Analysis of similarity (ANOSIM) tests found that strong differences did not exist between all SFT pairs. This leads us to question the suitability of SFTs for characterising physical habitat in river science and management applications. In contrast, the sUAS-SfM approach provided high-resolution, spatially continuous and spatially explicit quantitative measurements of water depth and point cloud roughness at the microscale (spatial scales ≤1 m). Such data are acquired rapidly and inexpensively, and provide new opportunities for examining the heterogeneity of physical habitat over a range of spatial and temporal scales.
Whilst continued refinement of the sUAS-SfM approach is required, we propose that this method offers an opportunity to move away from broad mesoscale classifications of physical habitat (spatial scales 10-100 m) and towards continuous, quantitative measurements of the continuum of hydraulic and geomorphic conditions that actually exists at the microscale.
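The separability test used above can be sketched with Clarke's ANOSIM R statistic on one-dimensional measurements. The depth values and group labels below are invented illustrations, not the River Arrow survey data, and tied ranks are handled naively.

```python
from itertools import combinations

# Minimal ANOSIM R sketch (Clarke's statistic) on 1-D habitat measurements;
# data values and groups are made-up illustrations, not the survey data.

def anosim_r(values, groups):
    """R close to 1: groups well separated; R near 0: no separation."""
    pairs = list(combinations(range(len(values)), 2))
    dists = [abs(values[i] - values[j]) for i, j in pairs]
    order = sorted(range(len(dists)), key=lambda k: dists[k])
    ranks = [0.0] * len(dists)
    for rank, k in enumerate(order, start=1):
        ranks[k] = float(rank)                   # ties broken arbitrarily
    between = [ranks[k] for k, (i, j) in enumerate(pairs) if groups[i] != groups[j]]
    within = [ranks[k] for k, (i, j) in enumerate(pairs) if groups[i] == groups[j]]
    m = len(pairs)
    return (sum(between) / len(between) - sum(within) / len(within)) / (m / 2)

depths = [0.10, 0.12, 0.11, 0.55, 0.60, 0.58]    # e.g. riffle vs. pool depths (m)
labels = ["riffle"] * 3 + ["pool"] * 3
print(anosim_r(depths, labels))                  # -> 1.0: perfectly separated
```

In practice the test is run on multivariate dissimilarities (depth, velocity, roughness) with a permutation test for significance; an R near zero for an SFT pair is what undermines the claim that those SFTs mark distinct habitat.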
Abstract:
Highway bridges are of great value to a country because, in the case of a natural disaster, they may serve as lifelines. Because highway bridges are vulnerable to significant seismic loads, different methods can be considered to design resistant bridges and to rehabilitate existing ones. In this study, base isolation is considered an efficient method in this regard, which in some cases significantly reduces the seismic load effects on the structure. By reducing the ductility demand on the structure without a notable increase in strength, the structure is designed to remain elastic under seismic loads. A problem associated with isolated bridges, especially those with elastomeric bearings, can be their excessive displacements under service and seismic loads. This can defeat the purpose of using elastomeric bearings for small- to medium-span typical bridges, where expansion joints and clearances may result in a significant increase in initial and maintenance costs. Thus, supplementing the structure with dampers that provide some stiffness can serve as a solution, which in turn, however, may increase the structure's base shear. The main objective of this thesis is to provide a simplified method for evaluating the optimal parameters of dampers in isolated bridges. First, through a parametric study, some directions are given for the use of simple isolation devices such as elastomeric bearings to rehabilitate existing bridges of high importance. Parameters such as the geometry of the bridge, code provisions, and the type of soil on which the structure is constructed were introduced to a typical two-span bridge. It is concluded that the stiffness of the substructure, the soil type, and special provisions in the code can determine whether base isolation can be employed for retrofitting bridges.
Second, based on the elastic response coefficient of isolated bridges, a simplified design method for dampers in seismically isolated regular highway bridges is presented. By setting objectives for the reduction of displacement and the variation of base shear, the required stiffness and damping of a hysteretic damper can be determined. Numerical analyses of a model of a typical two-span bridge verify the effectiveness of the method. The method has been used to identify equivalent linear parameters and, subsequently, nonlinear parameters of the hysteretic damper for various designated scenarios of displacement and base shear requirements. Comparison of the results of the nonlinear numerical model with and without the damper shows that the method is sufficiently accurate. Finally, an innovative and simple hysteretic steel damper was designed. Five specimens were fabricated from two steel grades and tested alongside a real-scale elastomeric isolator in the structural laboratory of the Université de Sherbrooke. The test procedure was first to characterize the specimens by cyclic displacement-controlled tests and subsequently to test them by the real-time dynamic substructuring (RTDS) method. The test results were then used to establish a numerical model of the system, which went through nonlinear time-history analyses under several earthquakes. The outcomes of the experimental and numerical studies showed acceptable conformity with the simplified method.
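The equivalent-linearization step mentioned above can be sketched for a standard bilinear hysteretic damper: at a target displacement, compute the secant (effective) stiffness and the equivalent viscous damping ratio from the energy dissipated in one loop. The numerical values are illustrative, not the thesis data.

```python
import math

# Sketch of equivalent linearization for a bilinear hysteretic damper:
# effective stiffness and equivalent viscous damping at a target
# displacement. Numbers below are illustrative, not the thesis data.

def equivalent_linear(k0, fy, alpha, d):
    """k0: initial stiffness, fy: yield force, alpha: post-yield stiffness
    ratio, d: target displacement. Returns (k_eff, xi_eq)."""
    dy = fy / k0                       # yield displacement
    mu = d / dy                        # ductility demand
    if mu <= 1.0:                      # still elastic: no hysteretic damping
        return k0, 0.0
    k_eff = fy * (1 + alpha * (mu - 1)) / d
    # Dissipated energy of one bilinear loop vs. stored elastic energy:
    # xi = E_D / (4 * pi * E_S0)  with  E_S0 = k_eff * d**2 / 2
    e_d = 4 * fy * dy * (1 - alpha) * (mu - 1)
    xi_eq = e_d / (2 * math.pi * k_eff * d ** 2)
    return k_eff, xi_eq

k_eff, xi = equivalent_linear(k0=20_000.0, fy=100.0, alpha=0.0, d=0.02)
print(k_eff, xi)   # k_eff = 5000 N/m, xi ~ 0.477 for mu = 4, alpha = 0
```

These two linear parameters are what a response-spectrum check of a candidate damper design would consume before any nonlinear time-history verification.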
Abstract:
The purpose of this study is to establish whether coaches from a multi-sport context develop most effectively through coach education programmes and whether formal learning fosters coach effectiveness. A sample of eight qualified male multi-sport coaches participated, with an age range of 24 to 52 years (M = 32.6, SD = 8.9) and 9 to 18 years of coaching experience (M = 12.6, SD = 3.8). Qualitative semi-structured interviews were employed, lasting approximately 30 to 60 minutes. The data then underwent thematic analysis, which reduced the data into six overarching themes: values of the coach; the coach's role in athlete development; forms of learning; barriers regarding coach education; the role of governing bodies; and the coaches' career pathway. The findings of the study indicated that coaches access a wide range of sources to enhance their practice, but informal learning (interacting with other coaches and learning by doing) was preferred. This resulted from numerous barriers surrounding the delivery, cost and accessibility of coach education programmes, which prevent coaches from progressing through the pathway. However, the coaches in the study feel that coach education should be a mandatory process for every coach. The findings have implications for policymakers and sport organisations in developing their coach education structures.
Abstract:
Introduction: Compounds exhibiting antioxidant activity have received much interest in the food industry because of their potential health benefits. Carotenoids such as lycopene, which in the human diet mainly derives from tomatoes (Solanum lycopersicum), have attracted much attention in this regard, and the study of their extraction, processing and storage procedures is of importance. Optical techniques potentially offer advantageous non-invasive and specific methods to monitor them. Objectives: To obtain both fluorescence and Raman information to ascertain whether ultrasound-assisted extraction from tomato pulp has a detrimental effect on lycopene. Method: Time-resolved fluorescence spectroscopy was used to monitor carotenoids in a hexane extract obtained from tomato pulp with application of ultrasound treatment (583 kHz). The resultant spectra were a combination of scattering and fluorescence. Because of their different timescales, decay-associated spectra could be used to separate the fluorescence and Raman information. This simultaneous acquisition of two complementary techniques was coupled with a very high time-resolution fluorescence lifetime measurement of the lycopene. Results: Spectroscopic data showed the presence of phytofluene and chlorophyll in addition to lycopene in the tomato extract. The time-resolved spectral measurement containing both fluorescence and Raman data, coupled with high-resolution time-resolved measurements, where a lifetime of ~5 ps was attributed to lycopene, indicated that lycopene appeared unaltered by ultrasound treatment. Detrimental changes were, however, observed in both the chlorophyll and phytofluene contributions. Conclusion: Extracted lycopene appeared unaffected by ultrasound treatment, while other constituents (chlorophyll and phytofluene) were degraded.
Abstract:
Travel demand models are important tools used in the analysis of transportation plans, projects, and policies. The modeling results are useful for transportation planners making transportation decisions and for policy makers developing transportation policies. Defining the level of detail (i.e., the number of roads) of the transport network consistently with the travel demand model's zone system is crucial to the accuracy of modeling results. However, travel demand modelers have not had tools to determine how much detail is needed in a transport network for a travel demand model. This dissertation seeks to fill this knowledge gap by (1) providing a methodology to define an appropriate level of detail for the transport network in a given travel demand model; (2) implementing this methodology in a travel demand model for the Baltimore area; and (3) identifying how this methodology improves modeling accuracy. All analyses identify that the spatial resolution of the transport network has a great impact on the modeling results. For example, when compared to observed traffic data, a very detailed network underestimates traffic congestion in the Baltimore area, while a network developed by this dissertation provides a more accurate modeling result of the traffic conditions. Through the evaluation of the impacts a new transportation project has on both networks, the differences in the analysis results point out the importance of having an appropriate level of network detail for making improved planning decisions. The results corroborate a suggested guideline concerning the development of a transport network consistent with the travel demand model's zone system. To conclude the dissertation, limitations are identified in the data sources and methodology, based on which a plan of future studies is laid out.
Abstract:
Recent developments in automation, robotics and artificial intelligence have pushed these technologies into wider use, and driverless transport systems are nowadays already state-of-the-art on certain legs of transportation. This has prompted the maritime industry to join the advancement. The case organisation, the AAWA initiative, is a joint industry-academia research consortium with the objective of developing readiness for the first commercial autonomous solutions, exploiting state-of-the-art autonomous and remote technology. The initiative develops both autonomous and remote operation technology for navigation, machinery, and all on-board operating systems. The aim of this study is to develop a model with which to estimate and forecast operational costs, and thus enable comparisons between manned and autonomous cargo vessels. The building process of the model is also described and discussed. Furthermore, the model aims to track and identify the critical success factors of the chosen ship design, and to enable monitoring and tracking of the incurred operational costs as the life cycle of the vessel progresses. The study adopts the constructive research approach, as the aim is to develop a construct to meet the needs of a case organisation. Data have been collected through discussions and meetings with consortium members and researchers, as well as through written and internal communications material. The model itself is built using activity-based life cycle costing, which enables both realistic cost estimation and forecasting and the identification of critical success factors, thanks to the process orientation adopted from activity-based costing and the statistical nature of Monte Carlo simulation techniques.
As the model was able to meet the multiple aims set for it, and the case organisation was satisfied with it, it can be argued that activity-based life cycle costing is a suitable method for cost estimation and forecasting in the case of autonomous cargo vessels. The model was able to perform the cost analysis and forecasting, as well as to trace the critical success factors. Later on, it also enabled, albeit hypothetically, monitoring and tracking of the incurred costs. By collecting costs in this way, it was argued that the activity-based LCC model is able to facilitate learning from, and continuous improvement of, the autonomous vessel. As for the building process of the model, an individual approach was chosen, while still using the implementation and model-building steps presented in the existing literature. This was due to two factors: the nature of the model and, perhaps even more importantly, the nature of the case organisation. Furthermore, the loosely organised network structure means that knowing the case organisation and its aims is of great importance when conducting constructive research.
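The combination of activity-based costing and Monte Carlo simulation described above can be sketched as follows. The activities, cost ranges, uniform sampling, and 20-year horizon are invented for illustration; the actual model's cost drivers come from consortium data.

```python
import random
import statistics

# Sketch of activity-based life cycle costing with Monte Carlo sampling.
# Activities, cost ranges, and the 20-year horizon are invented for
# illustration; the real model's cost drivers are consortium data.

random.seed(7)
activities = {                       # annual cost per activity, EUR (lo, hi)
    "remote operation centre": (150_000, 250_000),
    "maintenance":             (80_000, 160_000),
    "insurance":               (40_000, 90_000),
    "connectivity":            (20_000, 50_000),
}
YEARS = 20

def one_run():
    """One sampled life cycle: draw each activity's cost every year."""
    total = 0.0
    for _ in range(YEARS):
        for lo, hi in activities.values():
            total += random.uniform(lo, hi)   # uniform range as a stand-in
    return total

runs = [one_run() for _ in range(2000)]
mean = statistics.mean(runs)
p90 = sorted(runs)[int(0.9 * len(runs))]
print(f"mean {mean:,.0f} EUR, 90th percentile {p90:,.0f} EUR")
```

Breaking the total down by activity rather than by account is what lets the model point at critical success factors: the activity whose range dominates the spread of `runs` is the one worth managing.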
Abstract:
3D microfluidic device fabrication methods are normally quite expensive and tedious. In this paper, we present an easy and cheap alternative wherein thin cyclic olefin polymer (COP) sheets and pressure-sensitive adhesive (PSA) were used to fabricate hybrid 3D microfluidic structures by the Origami technique, which enables the fabrication of microfluidic devices without the need for any alignment tool. The COP and PSA layers were both cut simultaneously using a portable, low-cost plotter, allowing for rapid prototyping of a large variety of designs in a single production step. The devices were then manually assembled using the Origami technique by simply combining COP and PSA layers under mild pressure. This fast fabrication method was applied, as proof of concept, to the generation of a micromixer with a 3D-stepped serpentine design made of ten layers in less than 8 min. Moreover, the micromixer was characterized as a function of its pressure failure, withstanding pressures of up to 1000 mbar. This fabrication method is readily accessible to a large range of potential end users, such as educational agencies (schools, universities), low-income/developing-world research and industry, or any laboratory without access to clean room facilities, enabling the fabrication of robust, reproducible microfluidic devices.
Abstract:
People go through life making all kinds of decisions, and some of these decisions affect their demand for transportation: for example, their choices of where to live and where to work, how and when to travel, and which route to take. Transport-related choices are typically time dependent and characterized by a large number of alternatives that can be spatially correlated. This thesis deals with models that can be used to analyze and predict discrete choices in large-scale networks. The proposed models and methods are highly relevant for, but not limited to, transport applications. We model decisions as sequences of choices within the dynamic discrete choice framework, also known as parametric Markov decision processes. Such models are known to be difficult to estimate and to apply to make predictions because dynamic programming problems need to be solved in order to compute choice probabilities. In this thesis we show that it is possible to exploit the network structure and the flexibility of dynamic programming so that the dynamic discrete choice modeling approach is not only useful for modeling time-dependent choices, but also makes it easier to model large-scale static choices. The thesis consists of seven articles containing a number of models and methods for estimating, applying and testing large-scale discrete choice models. In the following we group the contributions under three themes: route choice modeling, large-scale multivariate extreme value (MEV) model estimation, and nonlinear optimization algorithms. Five articles are related to route choice modeling. We propose different dynamic discrete choice models that allow paths to be correlated, based on the MEV and mixed logit models. The resulting route choice models become expensive to estimate, and we deal with this challenge by proposing innovative methods that reduce the estimation cost.
For example, we propose a decomposition method that not only opens up the possibility of mixing, but also speeds up the estimation of simple logit models, which also has implications for traffic simulation. Moreover, we compare the utility maximization and regret minimization decision rules, and we propose a misspecification test for logit-based route choice models. The second theme is related to the estimation of static discrete choice models with large choice sets. We establish that a class of MEV models can be reformulated as dynamic discrete choice models on the networks of correlation structures. These dynamic models can then be estimated quickly using dynamic programming techniques and an efficient nonlinear optimization algorithm. Finally, the third theme focuses on structured quasi-Newton techniques for estimating discrete choice models by maximum likelihood. We examine and adapt switching methods that can be easily integrated into usual optimization algorithms (line search and trust region) to accelerate the estimation process. The proposed dynamic discrete choice models and estimation methods can be used in various discrete choice applications. In the area of big data analytics, models that can deal with large choice sets and sequential choices are important. Our research can therefore be of interest in various demand analysis applications (predictive analytics) or can be integrated with optimization models (prescriptive analytics). Furthermore, our studies indicate the potential of dynamic programming techniques in this context, even for static models, which opens up a variety of future research directions.
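The basic building block behind all of the route choice models above is the multinomial logit probability over a path choice set, which can be sketched directly. The paths and utilities below are hypothetical (utility taken as negative generalized travel cost).

```python
import math

# Sketch of multinomial logit route choice probabilities over a small,
# hypothetical path choice set (utilities are invented illustrations).

def logit_probabilities(utilities):
    """P(i) = exp(V_i) / sum_j exp(V_j), with a max-shift for numerical
    stability so large negative utilities do not underflow."""
    m = max(utilities)
    exp_v = [math.exp(v - m) for v in utilities]
    denom = sum(exp_v)
    return [e / denom for e in exp_v]

# Three hypothetical paths: utility = -(generalized travel cost) / 10
paths = {"motorway": -2.0, "arterial": -2.5, "back streets": -3.1}
probs = logit_probabilities(list(paths.values()))
for name, p in zip(paths, probs):
    print(f"{name}: {p:.3f}")
```

Plain logit assigns these probabilities as if the paths were independent, ignoring physical overlap between routes; correcting exactly that deficiency is why the thesis moves to MEV and mixed logit formulations.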
Abstract:
Background: The nitration of tyrosine residues in proteins is associated with nitrosative stress, resulting in the formation of 3-nitrotyrosine (3-NT). 3-NT levels in biological samples have been associated with numerous physiological and pathological conditions. For this reason, several attempts have been made to develop methods that accurately quantify 3-NT in biological samples. Chromatographic methods appear to be very accurate, showing very good sensitivity and specificity. However, accurate quantification of this molecule, which is present at very low concentrations in both physiological and pathological states, is always a complex task and a target of intense research. Objectives: We aimed to develop a simple, rapid, low-cost and sensitive 3-NT quantification method for use in medical laboratories as an additional tool for the diagnosis and/or treatment monitoring of a wide range of pathologies. We also aimed to evaluate the performance of the HPLC-based method developed here in a wide range of biological matrices. Material and methods: All experiments were performed on a Hitachi LaChrom Elite® HPLC system, and separation was carried out using a Lichrocart® 250-4 Lichrospher 100 RP-18 (5 μm) column. The method was validated according to ICH guidelines. The biological matrices tested were serum, whole blood, urine, the B16 F-10 melanoma cell line, growth medium conditioned with the same cell line, and bacterial and yeast suspensions. Results: Of all the protocols tested, the best results were obtained using 0.5% CH3COOH:MeOH:H2O (15:15:70) as the mobile phase, with detection at wavelengths of 215, 276 and 356 nm, at 25 °C, and using a flow rate of 1 mL/min. Using this protocol, it was possible to obtain a linear calibration curve (correlation coefficient = 1), limits of detection and quantification in the order of ng/mL, and a short analysis time (<15 minutes per sample).
Additionally, the developed protocol allowed the successful detection and quantification of 3-NT in all biological matrices tested, with detection at 356 nm. Conclusion: The method described in this study, which was successfully developed and validated for 3-NT quantification, is simple, cheap and fast, rendering it suitable for analysis in a wide range of biological matrices.
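The validation arithmetic behind the reported limits can be sketched in the ICH style: fit a linear calibration curve and estimate LOD and LOQ from the residual standard deviation and the slope (LOD = 3.3·s/slope, LOQ = 10·s/slope). The calibration points below are invented, not the study's data.

```python
import statistics

# Sketch of an ICH-style calibration step: linear fit plus LOD/LOQ from
# the residual standard deviation and slope. Data points are invented.

conc = [50, 100, 200, 400, 800]                  # ng/mL standards
signal = [10.2, 20.1, 40.5, 80.3, 160.9]         # peak area (arbitrary units)

n = len(conc)
mean_x, mean_y = statistics.mean(conc), statistics.mean(signal)
sxx = sum((x - mean_x) ** 2 for x in conc)
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(conc, signal)) / sxx
intercept = mean_y - slope * mean_x
residuals = [y - (intercept + slope * x) for x, y in zip(conc, signal)]
s = (sum(r * r for r in residuals) / (n - 2)) ** 0.5   # residual std dev

lod = 3.3 * s / slope                            # limit of detection
loq = 10 * s / slope                             # limit of quantification
print(f"slope {slope:.4f}, LOD {lod:.1f} ng/mL, LOQ {loq:.1f} ng/mL")
```

A lab adopting the method would rerun this fit on its own standards for each matrix, since the residual scatter, and hence the achievable LOD/LOQ, varies between serum, urine, and cell-culture samples.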