966 results for model reduction
Abstract:
Crash reduction factors (CRFs) are used to estimate the potential number of traffic crashes expected to be prevented by investment in safety improvement projects. The method used to develop CRFs in Florida has been based on the commonly used before-and-after approach, which suffers from a widely recognized problem known as regression-to-the-mean (RTM). The Empirical Bayes (EB) method has been introduced as a means of addressing the RTM problem. This method requires information from both the treatment and reference sites in order to predict the expected number of crashes had the safety improvement projects at the treatment sites not been implemented. The information from the reference sites is estimated from a safety performance function (SPF), a mathematical relationship that links crashes to traffic exposure. The objective of this dissertation was to develop SPFs for different functional classes of the Florida State Highway System. Crash data from 2001 through 2003, along with traffic and geometric data, were used in the SPF model development. SPFs were developed for both rural and urban roadway categories. The modeling data were based on one-mile segments with homogeneous traffic and geometric conditions within each segment; segments involving intersections were excluded. Scatter plots of the data show that the relationships between crashes and traffic exposure are nonlinear, with crashes increasing with traffic exposure at an increasing rate. Four regression models, namely Poisson (PRM), Negative Binomial (NBRM), zero-inflated Poisson (ZIP), and zero-inflated Negative Binomial (ZINB), were fitted to the one-mile segment records for individual roadway categories. The best model was selected for each category based on a combination of the likelihood ratio test, the Vuong statistical test, and Akaike's Information Criterion (AIC). The NBRM was found to be appropriate for only one category, while the ZINB model was found to be more appropriate for six other categories. The overall results show that the Negative Binomial distribution generally provides a better fit to the data than the Poisson distribution, and that the ZINB model gives the best fit when the count data exhibit excess zeros and over-dispersion, as was the case for most of the roadway categories. While model validation shows that most data points fall within the 95% prediction intervals of the models developed, the Pearson goodness-of-fit measure does not show statistical significance. This is expected, as traffic volume is only one of many factors contributing to the overall crash experience, and the SPFs are to be applied in conjunction with Accident Modification Factors (AMFs) to further account for the safety impacts of major geometric features before arriving at the final crash prediction. However, with improved traffic and crash data quality, the crash prediction power of SPF models may be further improved.
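As a rough illustration of the model-selection step described above (not the dissertation's code or data), the sketch below fits the four count models to hypothetical segment-level crash data and ranks them by AIC; the column names and synthetic data are assumptions for illustration only.

```python
# A minimal sketch, assuming hypothetical column names and synthetic data,
# of fitting the four count models (PRM, NBRM, ZIP, ZINB) to per-segment
# crash counts and ranking them by AIC.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.discrete.discrete_model import Poisson, NegativeBinomial
from statsmodels.discrete.count_model import (ZeroInflatedPoisson,
                                              ZeroInflatedNegativeBinomialP)

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({"log_aadt": rng.normal(9.0, 0.5, n)})      # log traffic exposure
df["crashes"] = rng.poisson(np.exp(-8.0 + 0.9 * df["log_aadt"]))
df["crashes"] = df["crashes"] * (rng.random(n) > 0.3)          # inject excess zeros

y = df["crashes"]
X = sm.add_constant(df[["log_aadt"]])

fits = {
    "PRM":  Poisson(y, X).fit(disp=0),
    "NBRM": NegativeBinomial(y, X).fit(disp=0),
    "ZIP":  ZeroInflatedPoisson(y, X, exog_infl=X).fit(disp=0, maxiter=500),
    "ZINB": ZeroInflatedNegativeBinomialP(y, X, exog_infl=X).fit(disp=0, maxiter=500),
}

# Lower AIC indicates a better fit/complexity trade-off; in the dissertation
# this ranking is combined with the likelihood ratio and Vuong tests.
for name, res in sorted(fits.items(), key=lambda kv: kv[1].aic):
    print(f"{name:5s} AIC = {res.aic:9.1f}")
```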
Abstract:
An integrated flow and transport model using MIKE SHE/MIKE 11 software was developed to predict the flow and transport of mercury, Hg(II), under varying environmental conditions. The model analyzed the impact of remediation scenarios within the East Fork Poplar Creek watershed of the Oak Ridge Reservation with respect to downstream concentration of mercury. The numerical simulations included the entire hydrological cycle: flow in rivers, overland flow, groundwater flow in the saturated and unsaturated zones, and evapotranspiration and precipitation time series. Stochastic parameters and hydrologic conditions over a five-year period of historical hydrological data were used to analyze the hydrological cycle and to determine the prevailing mercury transport mechanism within the watershed. Simulations of remediation scenarios revealed that reduction of the highly contaminated point sources, rather than general remediation of the contaminant plume, has a more direct impact on downstream mercury concentrations.
Abstract:
With the advantages and popularity of Permanent Magnet (PM) motors due to their high power density, there is an increasing incentive to use them in a variety of applications, including electric actuation. These applications have strict noise emission standards. The generation of audible noise and associated vibration modes is characteristic of all electric motors, and it is especially problematic in low-speed sensorless rotary actuation applications using the high-frequency voltage injection technique. This dissertation is aimed at solving the problem of optimizing the sensorless control algorithm for low noise and vibration while achieving at least 12-bit absolute accuracy for speed and position control. The low-speed sensorless algorithm is simulated using an improved Phase Variable Model, developed and implemented in a hardware-in-the-loop prototyping environment. Two experimental testbeds were developed and built to test and verify the algorithm in real time. A neural network based modeling approach was used to predict the audible noise due to the high-frequency injected carrier signal. This model was created from noise measurements in a specially built chamber. The developed noise model is then integrated into the high-frequency injection based sensorless control scheme so that appropriate tradeoffs and mitigation techniques can be devised. This will improve the position estimation and control performance while keeping the noise below a certain level. Genetic algorithms were used to include the noise optimization parameters in the developed control algorithm. A novel wavelet based filtering approach was proposed in this dissertation for the sensorless control algorithm at low speed. This novel filter is capable of extracting the position information at low values of injection voltage where conventional filters fail. This filtering approach can be used in practice to reduce the injected voltage in the sensorless control algorithm, resulting in a significant reduction of noise and vibration. Online optimization of the sensorless position estimation algorithm was performed to reduce vibration and to improve the position estimation performance. The results obtained are important and represent original contributions that can be helpful in choosing optimal parameters for sensorless control algorithms in many practical applications.
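Purely as an illustrative sketch (not the dissertation's filter), the snippet below shows the general idea of wavelet-based extraction of position information from a weak high-frequency injection signal: demodulate the carrier, then apply wavelet shrinkage instead of a conventional low-pass filter. All signals, the wavelet choice, and the threshold rule are assumptions.

```python
# Minimal illustrative sketch: wavelet shrinkage on a demodulated HF-injection
# current to recover the position-dependent envelope at low injection voltage.
import numpy as np
import pywt

fs, f_c = 20_000, 1_000                 # sampling and injection frequencies (Hz)
t = np.arange(0, 0.2, 1 / fs)
theta = 2 * np.pi * 5 * t               # slowly varying rotor angle (low speed)

# Measured current: small carrier amplitude-modulated by sin(2*theta), plus noise
# (a low injection voltage implies a small modulation amplitude).
rng = np.random.default_rng(1)
i_meas = 0.05 * np.sin(2 * theta) * np.cos(2 * np.pi * f_c * t)
i_meas += 0.03 * rng.normal(size=t.size)

# Synchronous demodulation shifts the position information to baseband.
baseband = 2 * i_meas * np.cos(2 * np.pi * f_c * t)

# Wavelet shrinkage: estimate the noise level, soft-threshold the detail
# coefficients, and keep the approximation that carries sin(2*theta).
coeffs = pywt.wavedec(baseband, "db8", level=6)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745
thr = sigma * np.sqrt(2 * np.log(baseband.size))        # universal threshold
coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
envelope = pywt.waverec(coeffs, "db8")[: t.size]

# 'envelope' approximates sin(2*theta); a phase-locked loop or observer would
# track the position estimate from it.
```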
Abstract:
An emergency is a deviation from a planned course of events that endangers people, property, or the environment. It can be described as an unexpected event that causes economic damage, destruction, and human suffering. When a disaster happens, Emergency Managers are expected to have a response plan for the most likely disaster scenarios. Unlike earthquakes and terrorist attacks, a hurricane response plan can be activated ahead of time, since a hurricane is predicted at least five days before it makes landfall. This research looked into the logistics aspects of the problem in an attempt to develop a hurricane relief distribution network model. We addressed the problem of how to efficiently and effectively deliver basic relief goods to victims of a hurricane disaster: specifically, where to preposition State Staging Areas (SSAs), which Points of Distribution (PODs) to activate, and how to allocate commodities to each POD. Previous research has addressed several of these issues, but not with the incorporation of the random behavior of the hurricane's intensity and path. This research presents a stochastic meta-model that deals with the location of SSAs and the allocation of commodities. The novelty of the model is that it treats the strength and path of the hurricane as stochastic processes and models them as Discrete Markov Chains. The demand is also treated as a stochastic parameter because it depends on the stochastic behavior of the hurricane. However, for the meta-model, the demand is an input determined using Hazards United States (HAZUS), software developed by the Federal Emergency Management Agency (FEMA) that estimates losses due to hurricanes and floods. A solution heuristic has been developed based on simulated annealing. Since the meta-model is a multi-objective problem, the heuristic is a multi-objective simulated annealing (MOSA), in which the initial solution and the cooling rate were determined via a Design of Experiments. The experiment showed that the initial temperature (T0) is irrelevant, but the temperature reduction (δ) must be very gradual. Assessment of the meta-model indicates that the Markov Chains performed as well as or better than forecasts made by the National Hurricane Center (NHC). Tests of the MOSA showed that it provides solutions in an efficient manner, and an illustrative example shows that the meta-model is practical.
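A schematic sketch of a multi-objective simulated annealing loop of the kind described above is shown below. The objectives, neighborhood move, and weights are placeholders; only the acceptance rule and the very gradual cooling (temperature reduction factor δ close to 1) reflect the text.

```python
# Schematic MOSA loop with placeholder objectives; illustrative only.
import math
import random

def objectives(x):
    # Hypothetical stand-ins for the meta-model's objectives,
    # e.g. expected unmet demand and logistics cost of an SSA/POD plan.
    return (sum(v * v for v in x), sum(abs(v - 2.0) for v in x))

def neighbor(x):
    y = list(x)
    y[random.randrange(len(y))] += random.uniform(-0.5, 0.5)
    return y

def scalarize(f, w=(0.5, 0.5)):
    return sum(wi * fi for wi, fi in zip(w, f))

def mosa(x0, t0=100.0, delta=0.999, n_iter=20_000):
    x, fx = x0, objectives(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(n_iter):
        y = neighbor(x)
        fy = objectives(y)
        d = scalarize(fy) - scalarize(fx)
        if d <= 0 or random.random() < math.exp(-d / t):
            x, fx = y, fy
            if scalarize(fx) < scalarize(fbest):
                best, fbest = x, fx
        t *= delta          # very gradual temperature reduction, per the experiment
    return best, fbest

print(mosa([5.0, -3.0, 4.0]))
```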
Abstract:
Ensemble stream modeling and data cleaning are sensor information processing systems with different training and testing methods by which their goals are cross-validated. This research examines a mechanism that seeks to extract novel patterns by generating ensembles from data. The main goal of label-less stream processing is to process the sensed events to eliminate the noise that is uncorrelated and to choose the most likely model without overfitting, thus obtaining higher model confidence. Higher quality streams can be realized by combining many short streams into an ensemble that has the desired quality. The framework for the investigation is an existing data mining tool. First, to accommodate feature extraction for events such as bush or natural forest fires, we take the burnt area (BA*), the sensed ground truth obtained from logs, as our target variable. Even though this is an obvious model choice, the results are disappointing, for two reasons: first, the histogram of fire activity is highly skewed; second, the measured sensor parameters are highly correlated. Since using non-descriptive features does not yield good results, we resort to temporal features. By doing so we carefully eliminate the averaging effects; the resulting histogram is more satisfactory and conceptual knowledge is learned from the sensor streams. Second is the process of feature induction by cross-validating attributes with single or multi-target variables to minimize training error. We use the F-measure, which combines precision and recall, to determine the false alarm rate of fire events. The multi-target data-cleaning trees use the information purity of the target leaf nodes to learn higher order features. A sensitive variance measure such as the F-test is performed at each node's split to select the best attribute. The ensemble stream modeling approach proved to improve when complicated features are used with a simpler tree classifier. The ensemble framework for data cleaning and the enhancements to quantify quality of fitness (30% spatial, 10% temporal, and 90% mobility reduction) of sensor data led to the formation of streams for sensor-enabled applications, which further motivates the novelty of stream quality labeling and its importance in handling the vast amounts of real-time mobile streams generated today.
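For reference, the F-measure combines precision and recall as their harmonic mean; a minimal sketch of computing it from hypothetical confusion counts (illustrative values only) follows.

```python
# F-measure from hypothetical confusion counts for fire-event detection.
def f_measure(tp, fp, fn, beta=1.0):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

print(f_measure(tp=42, fp=8, fn=15))   # approximately 0.79
```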
Abstract:
Purpose: The purpose of this work was to investigate the breast dose saving potential of a breast positioning technique (BP) for thoracic CT examinations with organ-based tube current modulation (OTCM).
Methods: The study included 13 female patient models (XCAT, age range: 27-65 years, weight range: 52 to 105.8 kg). Each model was modified to simulate three breast sizes in standard supine geometry. The modeled breasts were further deformed, emulating a BP that would constrain the breasts within the 120° anterior tube current (mA) reduction zone. The tube current of the CT examination was modeled using an attenuation-based program, which reduces the radiation dose to 20% in the anterior region with a corresponding increase in the posterior region. A validated Monte Carlo program was used to estimate organ doses for a typical clinical system (SOMATOM Definition Flash, Siemens Healthcare). The simulated organ doses and organ doses normalized by CTDIvol were compared between attenuation-based tube current modulation (ATCM), OTCM, and OTCM with BP (OTCMBP).
Results: On average, compared to ATCM, OTCM reduced the breast dose by 19.3 ± 4.5%, whereas OTCMBP reduced breast dose by 36.6 ± 6.9% (an additional 21.3 ± 7.3%). The dose saving of OTCMBP was more significant for larger breasts (on average 32, 38, and 44% reduction for 0.5, 1.5, and 2.5 kg breasts, respectively). Compared to ATCM, OTCMBP also reduced thymus and heart dose by 12.1 ± 6.3% and 13.1 ± 5.4%, respectively.
Conclusions: In thoracic CT examinations, OTCM with a breast positioning technique can markedly reduce unnecessary exposure to the radiosensitive organs in the anterior chest wall, specifically breast tissue. The breast dose reduction is more notable for women with larger breasts.
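The percentage reductions quoted above follow from a straightforward comparison against the ATCM baseline; the snippet below reproduces that arithmetic with hypothetical organ dose values (not the study's data).

```python
# Illustrative arithmetic only: percent dose reduction relative to the ATCM
# baseline, with hypothetical breast doses in mGy.
def percent_reduction(dose_baseline, dose_new):
    return 100.0 * (dose_baseline - dose_new) / dose_baseline

breast_atcm, breast_otcm, breast_otcm_bp = 12.0, 9.7, 7.6   # hypothetical values
print(percent_reduction(breast_atcm, breast_otcm))      # ~19% for OTCM vs ATCM
print(percent_reduction(breast_atcm, breast_otcm_bp))   # ~37% for OTCM with BP vs ATCM
```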
Abstract:
Autism spectrum disorder (ASD) is a complex heterogeneous neurodevelopmental disorder characterized by alterations in social functioning, communicative abilities, and engagement in repetitive or restrictive behaviors. The process of aging in individuals with autism and related neurodevelopmental disorders is not well understood, despite the fact that the number of individuals with ASD aged 65 and older is projected to increase by over half a million in the next 20 years. To elucidate the effects of aging in the context of a modified central nervous system, we investigated the effects of age on the BTBR T + tf/j mouse, a well-characterized and widely used mouse model that displays an ASD-like phenotype. We found that a reduction in social behavior persists into old age in male BTBR T + tf/j mice. We employed quantitative proteomics to discover potential alterations in signaling systems that could regulate aging in the BTBR mice. Unbiased proteomic analysis of hippocampal and cortical tissue of BTBR mice compared to age-matched wild-type controls revealed a significant decrease in brain-derived neurotrophic factor and significant increases in multiple synaptic markers (spinophilin, Synapsin I, PSD 95, NeuN), as well as distinct changes in functional pathways related to these proteins, including "Neural synaptic plasticity regulation" and "Neurotransmitter secretion regulation." Taken together, these results contribute to our understanding of the effects of aging on an ASD-like mouse model with regard to both behavior and protein alterations, though additional studies are needed to fully understand the complex interplay underlying aging in mouse models displaying an ASD-like phenotype.
Abstract:
Prior work by our research group, which quantified the alarming levels of radiation dose to patients with Crohn’s disease from medical imaging and the notable shift towards CT imaging making these patients an at-risk group, provided the context for this work. CT delivers some of the highest doses of ionising radiation in diagnostic radiology. Once a medical imaging examination is deemed justified, there is an onus on the imaging team to endeavour to produce diagnostic quality CT images at the lowest possible radiation dose to that patient. The fundamental limitation of conventional CT raw data reconstruction was the inherent coupling of administered radiation dose with observed image noise: the lower the radiation dose, the noisier the image. The renaissance, rediscovery and refinement of iterative reconstruction remove this limitation, allowing either an improvement in image quality without increasing radiation dose or maintenance of image quality at a lower radiation dose compared with traditional image reconstruction. This thesis is fundamentally an exercise in optimisation of clinical CT practice, with the objectives of assessing iterative reconstruction as a method for improving image quality in CT, exploring the associated potential for radiation dose reduction, and developing a new split dose CT protocol with the aim of achieving and validating diagnostic quality submillisievert CT imaging in patients with Crohn’s disease. In this study, we investigated the interplay of user-selected parameters on radiation dose and image quality in phantoms and cadavers, comparing traditional filtered back projection (FBP) with iterative reconstruction algorithms. This resulted in the development of an optimised, refined and appropriate split dose protocol for CT of the abdomen and pelvis in clinical patients with Crohn’s disease, allowing contemporaneous acquisition of both modified and conventional dose CT studies. This novel algorithm was then applied to 50 patients with a suspected acute complication of known Crohn’s disease, and the raw data were reconstructed with FBP, adaptive statistical iterative reconstruction (ASiR) and model based iterative reconstruction (MBIR). Conventional dose CT images with FBP reconstruction were used as the reference standard against which the modified dose CT images were compared in terms of radiation dose, diagnostic findings and image quality indices. As there are multiple possible user-selected strengths of ASiR available, these were compared in terms of image quality to determine the optimal strength for this modified dose CT protocol. Modified dose CT images with MBIR were also compared with the contemporaneous abdominal radiograph, where performed, in terms of diagnostic yield and radiation dose. Finally, attenuation measurements in organs and tissues with each reconstruction algorithm were compared to assess for preservation of tissue characterisation capabilities. In the phantom and cadaveric models, both forms of iterative reconstruction examined (ASiR and MBIR) were superior to FBP across a wide variety of imaging protocols, with MBIR superior to ASiR in all areas other than reconstruction speed. We established that ASiR appears to work to a target percentage noise reduction, whilst MBIR works to a target residual level of absolute noise in the image.
Modified dose CT images reconstructed with both ASiR and MBIR were non-inferior to conventional dose CT with FBP in terms of diagnostic findings, despite reduced subjective and objective indices of image quality. Mean dose reductions of 72.9-73.5% were achieved with the modified dose protocol, with a mean effective dose of 1.26 mSv. MBIR was again demonstrated to be superior to ASiR in terms of image quality. The overall optimal ASiR strength for the modified dose protocol used in this work is ASiR 80%, as this provides the most favourable balance of peak subjective image quality indices with less objective image noise than the corresponding conventional dose CT images reconstructed with FBP. Despite guidelines to the contrary, abdominal radiographs are still often used in the initial imaging of patients with a suspected complication of Crohn’s disease. We confirmed the superiority of modified dose CT with MBIR over abdominal radiographs at comparable doses in the detection of Crohn’s disease and non-Crohn’s disease related findings. Finally, we demonstrated (in phantoms, cadavers and in vivo) that attenuation values do not change significantly across reconstruction algorithms, meaning tissue characterisation capabilities are preserved with iterative reconstruction. Both adaptive statistical and model based iterative reconstruction algorithms represent feasible methods of facilitating the acquisition of diagnostic quality CT images of the abdomen and pelvis in patients with Crohn’s disease at markedly reduced radiation doses. Our modified dose CT protocol allows dose savings of up to 73.5% compared with conventional dose CT, meaning submillisievert imaging is possible in many of these patients.
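As a small numerical sketch of the noise behaviour inferred above (with invented target values, not measurements), ASiR can be thought of as removing a fraction of the FBP noise tied to its blend strength, whereas MBIR converges towards a roughly dose-independent absolute noise floor.

```python
# Illustrative noise-target models; the 0.5 fraction and the 10 HU floor are
# assumptions for this sketch, not measured values.
def asir_noise(fbp_noise_hu, strength_pct, max_fraction=0.5):
    # target *percentage* noise reduction scaling with the selected strength
    return fbp_noise_hu * (1 - max_fraction * strength_pct / 100.0)

def mbir_noise(fbp_noise_hu, target_hu=10.0):
    # target *absolute* residual noise, largely independent of the FBP noise
    return min(fbp_noise_hu, target_hu)

for fbp in (15.0, 25.0, 40.0):          # FBP noise (HU) at decreasing dose
    print(f"FBP {fbp:4.1f} HU -> ASiR80 {asir_noise(fbp, 80):4.1f} HU, "
          f"MBIR {mbir_noise(fbp):4.1f} HU")
```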
Abstract:
The early Pliocene warm phase was characterized by high sea surface temperatures and a deep thermocline in the eastern equatorial Pacific. A new hypothesis suggests that the progressive closure of the Panamanian seaway contributed substantially to the termination of this zonally symmetric state in the equatorial Pacific. According to this hypothesis, intensification of the Atlantic meridional overturning circulation (AMOC) - induced by the closure of the gateway - was the principal cause of equatorial Pacific thermocline shoaling during the Pliocene. In this study, twelve Panama seaway sensitivity experiments from eight ocean/climate models of different complexity are analyzed to examine the effect of an open gateway on AMOC strength and thermocline depth. All models show an eastward Panamanian net throughflow, leading to a reduction in AMOC strength compared to the corresponding closed-Panama case. In those models that do not include a dynamic atmosphere, deepening of the equatorial Pacific thermocline appears to scale almost linearly with the throughflow-induced reduction in AMOC strength. Models with a dynamic atmosphere do not follow this simple relation. There are indications that in four out of five such models, equatorial wind-stress anomalies amplify the tropical Pacific thermocline deepening. In summary, the models provide strong support for the hypothesized relationship between Panama closure and equatorial Pacific thermocline shoaling.
Abstract:
The development of a permanent, stable ice sheet in East Antarctica happened during the middle Miocene, about 14 million years (Myr) ago. The middle Miocene therefore represents one of the distinct phases of rapid change in the transition from the "greenhouse" of the early Eocene to the "icehouse" of the present day. Carbonate carbon isotope records of the period immediately following the main stage of ice sheet development reveal a major perturbation in the carbon system, represented by the positive δ13C excursion known as carbon maximum 6 (CM6), which has traditionally been interpreted as reflecting increased burial of organic matter and atmospheric pCO2 drawdown. More recently, it has been suggested that the δ13C excursion records a negative feedback resulting from the reduction of silicate weathering and an increase in atmospheric pCO2. Here we present high-resolution multi-proxy (alkenone carbon and foraminiferal boron isotope) records of atmospheric carbon dioxide and sea surface temperature across CM6. Similar to previously published records spanning this interval, our records document a world of generally low (~300 ppm) atmospheric pCO2 at a time generally accepted to be much warmer than today. Crucially, they also reveal a pCO2 decrease with associated cooling, which demonstrates that the carbon burial hypothesis for CM6 is feasible and could have acted as a positive feedback on global cooling.
Abstract:
Bitumen extraction from surface-mined oil sands results in the production of large volumes of Fluid Fine Tailings (FFT). Through Directive 085, the Province of Alberta has signaled that oil sands operators must improve and accelerate the methods by which they deal with FFT production, storage and treatment. This thesis aims to develop an enhanced method to forecast FFT production based on specific ore characteristics. A mass relationship and mathematical model have been developed to modify the Forecasting Tailings Model (FTM) using fines and clay boundaries as the two main indicators of FFT accumulation. The modified FTM has been applied to representative block model data from an operating oil sands mining venture. An attempt has been made to identify order-of-magnitude associated tailings treatment costs, and to improve financial performance by not processing materials whose ultimate ore processing and tailings storage and treatment costs exceed the value of the bitumen they produce. The results of the real case study show a 53% reduction in total tailings accumulation over the mine life when only lower tailings-generating materials are selectively processed, achieved by eliminating 15% of the total mined ore materials with a higher potential for fluid fines inventory. This significant result will help assess the impact of Directive 082 on the economic and environmental performance of mining projects and support their sustainable development.
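A heavily simplified sketch, under assumed capture coefficients and an assumed cut-off, of the idea of forecasting FFT volume per block from fines and clay content and using the forecast to screen out high-FFT ore; none of the coefficients or block data below come from the thesis.

```python
# Hypothetical coefficients and block data; illustrative of the approach only.
def fft_volume_m3(ore_tonnes, fines_fraction, clay_fraction,
                  fines_capture=0.55, clay_capture=0.85, fft_solids_density=1.25):
    """Tonnes of fines and clay reporting to FFT, converted to a volume."""
    fines_to_fft = ore_tonnes * fines_fraction * fines_capture
    clay_to_fft = ore_tonnes * clay_fraction * clay_capture
    return (fines_to_fft + clay_to_fft) / fft_solids_density

blocks = [                      # (ore tonnes, fines fraction, clay fraction)
    (1.0e6, 0.18, 0.05),
    (1.0e6, 0.42, 0.20),
]
for tonnes, fines, clay in blocks:
    vol = fft_volume_m3(tonnes, fines, clay)
    decision = "process" if vol / tonnes < 0.25 else "reject (high FFT burden)"
    print(f"forecast FFT {vol:>9,.0f} m3 -> {decision}")
```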
Abstract:
This study examines the business model complexity of Irish credit unions using a latent class approach to measure structural performance over the period 2002 to 2013. The latent class approach allows the endogenous identification of a multi-class framework for business models based on credit union specific characteristics. The analysis finds a three-class system to be appropriate, with the multi-class model dependent on three financial viability characteristics. This finding is consistent with the deliberations of the Irish Commission on Credit Unions (2012), which identified complexity and diversity in the business models of Irish credit unions and recommended that such complexity and diversity could not be accommodated within a one-size-fits-all regulatory framework. The analysis also highlights that two of the classes are subject to diseconomies of scale. This may suggest that credit unions would benefit from a reduction in scale, or perhaps that there is an imbalance in the present change process. Finally, relative performance differences are identified for each class in terms of technical efficiency, suggesting that there is an opportunity for credit unions to improve their performance by using within-class best practice or, alternatively, by switching to another class.
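As a generic illustration of the latent class idea (not the study's estimator or data), the sketch below fits finite mixture models with one to five classes to synthetic observations on three stand-in "financial viability" features and selects the class count by BIC.

```python
# A Gaussian mixture stands in for the latent class model; the feature values
# are invented for illustration.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)
X = np.vstack([
    rng.normal([0.02, 0.60, 0.10], 0.02, size=(150, 3)),
    rng.normal([0.05, 0.45, 0.20], 0.02, size=(150, 3)),
    rng.normal([0.08, 0.30, 0.35], 0.02, size=(150, 3)),
])

bic = {k: GaussianMixture(n_components=k, n_init=5, random_state=0).fit(X).bic(X)
       for k in range(1, 6)}
best_k = min(bic, key=bic.get)
print(bic)
print("selected number of classes:", best_k)   # three in this toy example
```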
Abstract:
Mathematical models are increasingly used in environmental science, increasing the importance of uncertainty and sensitivity analyses. In the present study, an iterative parameter estimation and identifiability analysis methodology is applied to an atmospheric model, the Operational Street Pollution Model (OSPM). To assess the predictive validity of the model, the data are split into an estimation and a prediction data set using two data-splitting approaches, and data preparation techniques (clustering and outlier detection) are analysed. The sensitivity analysis, which is part of the identifiability analysis, showed that some model parameters were significantly more sensitive than others. The application of the determined optimal parameter values was shown to successfully equilibrate the model biases among the individual streets and species. It was also shown that the frequentist approach applied for the uncertainty calculations underestimated the parameter uncertainties. The model parameter uncertainty was qualitatively assessed to be significant, and reduction strategies were identified.
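A generic sketch of the local sensitivity measure that typically underpins this kind of identifiability analysis: normalised finite-difference sensitivities of the model output with respect to each parameter. The model function below is a placeholder, not OSPM.

```python
# Finite-difference local sensitivity analysis on a placeholder model.
import numpy as np

def model(params, x):
    a, b, c = params                       # placeholder dispersion-like expression
    return a * np.exp(-b * x) + c

def local_sensitivities(params, x, rel_step=0.01):
    base = model(params, x)
    sens = []
    for i, p in enumerate(params):
        pert = list(params)
        pert[i] = p * (1 + rel_step)       # perturb one parameter at a time
        dy = model(pert, x) - base
        # dimensionless sensitivity averaged over the observation points
        sens.append(float(np.mean(np.abs(dy / base)) / rel_step))
    return sens

x = np.linspace(0.1, 5.0, 50)
print(local_sensitivities([2.0, 0.8, 0.3], x))
# Parameters with near-zero sensitivity are poorly identifiable from the data.
```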