Abstract:
A mixed-class alcohol dehydrogenase has been characterized from avian liver. Its functional properties resemble the classical class I type enzyme of human and animal livers, exhibiting low Km and kcat values with alcohols (Km = 0.7 mM with ethanol) and a low Ki value with 4-methylpyrazole (4 microM). These values differ markedly from the corresponding parameters of the class II and III enzymes. In contrast, the primary structure of this avian liver alcohol dehydrogenase reveals an overall relationship closer to class II and, to some extent, class III (69 and 65% residue identities, respectively) than to class I or the other classes of the human alcohol dehydrogenases (52-61%); the presence of an insertion (four positions in a segment close to position 120), as in class II but in no other class of the human enzymes; and the presence of several active site residues considered typical of the class II enzyme. Hence, the avian enzyme has mixed-class properties, being functionally similar to class I yet structurally similar to class II, with which it also clusters in phylogenetic trees of characterized vertebrate alcohol dehydrogenases. Comparisons reveal that the class II enzyme is approximately 25% more variable than the "variable" class I enzyme, which itself is more variable than the "constant" class III enzyme. This extreme overall variability, together with the unusual chromatographic behavior, may explain why the class II enzyme has not previously been found outside mammals. The properties define a consistent pattern, with apparently repeated generation of novel enzyme activities after separate gene duplications.
Abstract:
Formulas based on the theory of elasticity are widely used to compute foundation settlements, since virtually all geotechnical codes recommend their use. However, these methods do not cover every geotechnically possible situation, as geological conditions are frequently complex. This work analyzes the influence of the presence of an inclined rigid layer on the elastic settlements of a shallow foundation. To this end, 273 three-dimensional nonlinear finite element models were solved, varying the key parameters of the problem: the inclination and depth of the rigid layer and the stiffness of the foundation. Finally, a statistical analysis of the model results was carried out, and a formula was proposed that can be used in elastic settlement calculations to take into account the presence of an inclined rigid layer at depth.
Abstract:
Thesis (Master's)--University of Washington, 2016-06
Abstract:
Two stochastic production frontier models are formulated within the generalized production function framework popularized by Zellner and Revankar (Rev. Econ. Stud. 36 (1969) 241) and Zellner and Ryu (J. Appl. Econometrics 13 (1998) 101). This framework is convenient for parsimonious modeling of a production function with returns to scale specified as a function of output. Two alternatives for introducing the stochastic inefficiency term and the stochastic error are considered. In the first, the errors are added to an equation of the form h(log y, theta) = log f(x, beta), where y denotes output, x is a vector of inputs, and (theta, beta) are parameters. In the second, the equation h(log y, theta) = log f(x, beta) is solved for log y to yield a solution of the form log y = g[theta, log f(x, beta)], and the errors are added to this equation. The latter alternative is novel, but it is needed to preserve the usual definition of firm efficiency. The two alternative stochastic assumptions are considered in conjunction with two returns-to-scale functions, for a total of four models. A Bayesian framework for estimating all four models is described. The techniques are applied to USDA state-level data on agricultural output and four inputs. Posterior distributions for all parameters, for firm efficiencies, and for the efficiency rankings of firms are obtained. The sensitivity of the results to the returns-to-scale specification and to the stochastic specification is examined.
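As a concrete illustration of the two error placements, here is a minimal simulation sketch assuming the Zellner-Revankar form h(log y, theta) = log y + theta*y, a Cobb-Douglas f(x, beta), and half-normal inefficiency; all numeric values are illustrative choices not fixed by the abstract, and solving h for log y uses the Lambert W function:

```python
import numpy as np
from scipy.special import lambertw

rng = np.random.default_rng(0)
n, theta = 200, 0.4
x = rng.uniform(1.0, 5.0, size=(n, 2))
log_f = 0.3 + 0.4 * np.log(x[:, 0]) + 0.5 * np.log(x[:, 1])  # Cobb-Douglas log f(x, beta)
v = rng.normal(0.0, 0.1, n)               # symmetric noise
u = np.abs(rng.normal(0.0, 0.2, n))       # one-sided (half-normal) inefficiency

def g(c, theta):
    """Solve log y + theta*y = c for log y:  y = W(theta * exp(c)) / theta."""
    return np.log(lambertw(theta * np.exp(c)).real / theta)

log_y_model1 = g(log_f + v - u, theta)    # model 1: errors added inside h(.)
log_y_model2 = g(log_f, theta) + v - u    # model 2: errors added to solved log y
print(log_y_model1[:3], log_y_model2[:3])
```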
Abstract:
Visualization has proven to be a powerful and widely applicable tool for the analysis and interpretation of data. Most visualization algorithms aim to find a projection from the data space down to a two-dimensional visualization space. However, for complex data sets living in a high-dimensional space, it is unlikely that a single two-dimensional projection can reveal all of the interesting structure. We therefore introduce a hierarchical visualization algorithm that allows the complete data set to be visualized at the top level, with clusters and sub-clusters of data points visualized at deeper levels. The algorithm is based on a hierarchical mixture of latent variable models, whose parameters are estimated using the expectation-maximization (EM) algorithm. We demonstrate the principle of the approach first on a toy data set, and then apply the algorithm to the visualization of a synthetic data set in 12 dimensions obtained from a simulation of multi-phase flows in oil pipelines, and to data in 36 dimensions derived from satellite images.
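The model itself is a single probabilistic hierarchy fitted by EM; as a rough stand-in, the sketch below approximates the two-level idea with off-the-shelf components (a top-level linear projection, EM-fitted Gaussian clusters, and a local projection per cluster), on synthetic data standing in for the 12-dimensional oil-flow set:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))                 # stand-in for the 12-D oil-flow data

X_top = PCA(n_components=2).fit_transform(X)   # top-level view of the whole set
print("top-level view:", X_top.shape)

gmm = GaussianMixture(n_components=3, random_state=0).fit(X)  # EM-fitted clusters
labels = gmm.predict(X)

for k in range(3):                             # second level: one local view per cluster
    Xk = X[labels == k]
    local = PCA(n_components=2).fit_transform(Xk)
    print(f"cluster {k}: {len(Xk)} points, local view {local.shape}")
```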
Abstract:
The standard highway assignment model in the Florida Standard Urban Transportation Modeling Structure (FSUTMS) is based on the equilibrium traffic assignment method. This method involves running several iterations of all-or-nothing capacity-restraint assignment with an adjustment of travel time to reflect delays encountered in the associated iteration. The iterative link time adjustment process is accomplished through the Bureau of Public Roads (BPR) volume-delay equation. Since FSUTMS' traffic assignment procedure outputs daily volumes while the input capacities are given in hourly volumes, the hourly capacities must be converted to their daily equivalents when computing the volume-to-capacity ratios used in the BPR function. The conversion is accomplished by dividing the hourly capacity by a factor called the peak-to-daily ratio, referred to as CONFAC in FSUTMS. The ratio is computed as the highest hourly volume of a day divided by the corresponding total daily volume.

While several studies have indicated that CONFAC is a decreasing function of the level of congestion, a constant value is used for each facility type in the current version of FSUTMS. This ignores the different congestion level associated with each roadway and is believed to be one of the culprits of traffic assignment errors. Traffic count data from across the state of Florida were used to calibrate CONFACs as a function of a congestion measure using the weighted least squares method. The calibrated functions were then implemented in FSUTMS through a procedure that takes advantage of the iterative nature of FSUTMS' equilibrium assignment method.

The assignment results based on constant and variable CONFACs were then compared against the ground counts for three selected networks. It was found that the accuracy of the two assignments was not significantly different, and that the hypothesized improvement in assignment results from the variable CONFAC model was not empirically evident. It was recognized that many other factors beyond the scope and control of this study could contribute to this finding. It was recommended that further studies focus on the use of the variable CONFAC model with recalibrated parameters for the BPR function and/or with other forms of volume-delay functions.
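A minimal sketch of the mechanics described above, with placeholder coefficients rather than the Florida-calibrated values: the BPR function computes congested travel time, the hourly capacity is converted to a daily equivalent by dividing by CONFAC, and a hypothetical decreasing CONFAC stands in for the calibrated functions:

```python
def bpr_travel_time(free_flow_time, volume, capacity, alpha=0.15, beta=4.0):
    """Bureau of Public Roads (BPR) volume-delay function."""
    return free_flow_time * (1.0 + alpha * (volume / capacity) ** beta)

def daily_capacity(hourly_capacity, confac):
    """Convert an hourly capacity to its daily equivalent via the peak-to-daily ratio."""
    return hourly_capacity / confac

def variable_confac(v_over_c, base=0.10, slope=0.02):
    """Hypothetical decreasing CONFAC: more congestion -> peak spreading -> lower ratio."""
    return max(base - slope * v_over_c, 0.05)

daily_volume = 40_000                        # assigned daily link volume [veh/day]
hourly_cap = 2_200                           # input capacity [veh/h]
cap_daily = daily_capacity(hourly_cap, variable_confac(v_over_c=1.1))
print(f"v/c = {daily_volume / cap_daily:.2f}, "
      f"congested time = {bpr_travel_time(10.0, daily_volume, cap_daily):.1f} min")
```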
Abstract:
BACKGROUND: Efficient effort expenditure to obtain rewards is critical for optimal goal-directed behavior and learning. Clinical observation suggests that individuals with autism spectrum disorders (ASD) may show dysregulated reward-based effort expenditure, but no behavioral study to date has assessed effort-based decision-making in ASD. METHODS: The current study compared a group of adults with ASD to a group of typically developing adults on the Effort Expenditure for Rewards Task (EEfRT), a behavioral measure of effort-based decision-making. In this task, participants were provided with the probability of receiving a monetary reward on a particular trial and asked to choose between either an "easy task" (less motoric effort) for a small, stable reward or a "hard task" (greater motoric effort) for a variable but consistently larger reward. RESULTS: Participants with ASD chose the hard task more frequently than did the control group, yet were less influenced by differences in reward value and probability than the control group. Additionally, effort-based decision-making was related to repetitive behavior symptoms across both groups. CONCLUSIONS: These results suggest that individuals with ASD may be more willing to expend effort to obtain a monetary reward regardless of the reward contingencies. More broadly, results suggest that behavioral choices may be less influenced by information about reward contingencies in individuals with ASD. This atypical pattern of effort-based decision-making may be relevant for understanding the heightened reward motivation for circumscribed interests in ASD.
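To illustrate the task's reward structure, the sketch below computes expected values for the easy versus hard choice; the reward magnitudes and probability levels are the commonly cited EEfRT parameters and are assumptions here, as is the toy decision rule:

```python
easy_reward = 1.00                          # small, stable reward [$]
hard_rewards = (1.24, 2.50, 4.30)           # variable, consistently larger rewards [$]
probabilities = (0.12, 0.50, 0.88)          # win probability shown before each trial

for p in probabilities:
    for hard in hard_rewards:
        ev_easy, ev_hard = p * easy_reward, p * hard
        # Toy rule: choose "hard" only when the extra expected value beats a
        # fixed effort cost; real choices also reflect motivation and fatigue.
        choice = "hard" if (ev_hard - ev_easy) > 0.50 else "easy"
        print(f"p={p:.2f} hard=${hard:.2f}: EV {ev_easy:.2f} vs {ev_hard:.2f} -> {choice}")
```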
Abstract:
Dissolution of non-aqueous phase liquids (NAPLs) or gases into groundwater is a key process, both for contamination problems originating from organic liquid sources and for dissolution trapping in geological storage of CO2. Dissolution in natural systems will typically involve both high and low NAPL saturations and a wide range of pore water flow velocities within the same source zone. To correctly predict dissolution in such complex systems, and as the NAPL saturations change over time, models must be capable of predicting dissolution under a range of saturations and flow conditions. To provide data to test and validate such models, an experiment was conducted in a two-dimensional sand tank, where the dissolution of a spatially variable, 5 × 5 cm² DNAPL tetrachloroethene source was carefully measured using X-ray attenuation techniques at a resolution of 0.2 × 0.2 cm². By continuously measuring the NAPL saturations, the temporal evolution of DNAPL mass loss by dissolution to groundwater could be measured at each pixel. Next, a general dissolution and solute transport code was written, and several published rate-limited (RL) dissolution models and a local equilibrium (LE) approach were tested against the experimental data. It was found that none of the models could adequately predict the observed dissolution pattern, particularly in the zones of higher NAPL saturation. Combining these models with a model for NAPL pool dissolution produced qualitatively better agreement with the experimental data, but the total matching error was not significantly improved. A sensitivity study of commonly used fitting parameters further showed that several combinations of these parameters could produce equally good fits to the experimental observations. The results indicate that common empirical model formulations for RL dissolution may be inadequate in complex, variable-saturation NAPL source zones, and that further model development and testing is desirable.
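A minimal sketch of the contrast between rate-limited (RL) and local-equilibrium (LE) dissolution, for a single well-mixed cell flushed by groundwater; all parameter values are illustrative, and the LE case corresponds to the limit of a very large mass-transfer coefficient:

```python
Cs = 200.0           # effective solubility of PCE [mg/L]
q = 0.5              # flushing rate [pore volumes/day]
k_la = 2.0           # lumped mass-transfer coefficient [1/day]; LE is k_la -> infinity
mass = 500.0         # DNAPL mass per litre of pore water [mg/L]
dt, t_end = 0.01, 30.0

C = 0.0              # aqueous concentration [mg/L]
for _ in range(int(t_end / dt)):
    transfer = k_la * (Cs - C) if mass > 0.0 else 0.0   # RL first-order driving force
    mass = max(mass - transfer * dt, 0.0)               # deplete the NAPL source
    C = min(C + (transfer - q * C) * dt, Cs)            # gain by transfer, loss by flushing
print(f"after {t_end:.0f} d: mass = {mass:.1f} mg/L, C = {C:.1f} mg/L")
```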
Abstract:
The area west of the Antarctic Peninsula is a key region for studying and understanding the history of glaciation in the southern high latitudes during the Neogene, with respect to variations of the western Antarctic continental ice sheet, variable sea-ice cover, induced eustatic sea level change, as well as consequences for the global climatic system (Barker, Camerlenghi, Acton, et al., 1999). Sites 1095, 1096, and 1101 were drilled on sediment drifts forming the continental rise to examine the nature and composition of sediments deposited under the influence of the Antarctic Peninsula ice sheet, which has repeatedly advanced to the shelf edge and subsequently released glacially eroded material on the continental shelf and slope (Barker et al., 1999). Gravity-driven mass movements on the slope are responsible for downslope sediment transport by turbidity currents within a channel system between the drifts. Furthermore, bottom currents redistribute the sediments, leading to the final build-up of the drift bodies (Rebesco et al., 1998). The high-resolution sedimentary sequences on the continental rise can be used to document the variability of continental glaciation and therefore allow us to assess the main factors that control sediment transport and depositional processes during glaciation periods, and their relationship to glacio-eustatic sea level changes. Site 1095 lies in 3840 m of water in a distal position on the northwestern lower flank of Drift 7, whereas Site 1096 lies in 3152 m of water in a more proximal position within Drift 7. Site 1101 is located at 3509 m water depth on the northwestern flank of Drift 4. All three sites have high sedimentation rates. The oldest sediments were recovered at Site 1095 (late Miocene; 9.7 Ma), whereas sediments of Pliocene age were recovered at Site 1096 (4.7 Ma) and at Site 1101 (3.5 Ma). The purpose of this work is to provide a data set of bulk sediment parameters such as CaCO3, total organic carbon (TOC), and coarse-fraction mass percentage (>63 µm) measured on the sediments collected from the continental rise of the western Antarctic Peninsula (Holes 1095A, 1095B, 1096A, 1096B, 1096C, and 1101A). This information can be used to understand the complex depositional processes and their implications for variations in the climatic system of the western Pacific Antarctic margin since 9.7 Ma (late Miocene). Coarse-fraction particles (125-500 µm) from the late Pliocene and Pleistocene (4.0 Ma to recent) sediments recovered from Hole 1095A were microscopically analyzed to gather more detailed information about their variability and composition through time. These data can yield information about changes in the potential source regions of the glacially eroded material transported during repeated periods of ice-sheet movement on the shelf.
Abstract:
In this thesis, novel analog-to-digital and digital-to-analog generalized time-interleaved variable bandpass sigma-delta modulators are designed, analysed, evaluated and implemented that are suitable for high-performance data conversion for a broad spectrum of applications. These generalized time-interleaved variable bandpass sigma-delta modulators can perform noise-shaping for any centre frequency from DC to Nyquist. The proposed topologies are well suited for Butterworth, Chebyshev, inverse-Chebyshev and elliptical filters, where designers have the flexibility of specifying the centre frequency, bandwidth as well as the passband and stopband attenuation parameters. The application of the time-interleaving approach, in combination with these bandpass loop-filters, not only overcomes the limitations that are associated with conventional and mid-band resonator-based bandpass sigma-delta modulators, but also offers an elegant means to increase the conversion bandwidth, thereby relaxing the need to use faster or higher-order sigma-delta modulators. A step-by-step design technique has been developed for the design of time-interleaved variable bandpass sigma-delta modulators. Using this technique, an assortment of lower- and higher-order single- and multi-path generalized A/D variable bandpass sigma-delta modulators were designed, evaluated and compared in terms of their signal-to-noise ratios, hardware complexity, stability, tonality and sensitivity for ideal and non-ideal topologies. Extensive behavioural-level simulations verified that one of the proposed topologies not only used fewer coefficients but also exhibited greater robustness to non-idealities. Furthermore, second-, fourth- and sixth-order single- and multi-path digital variable bandpass sigma-delta modulators were designed using this technique. The mathematical modelling and evaluation of tones caused by the finite wordlengths of these digital multi-path sigma-delta modulators, when excited by sinusoidal input signals, are also derived from first principles and verified using simulation and experimental results. The fourth-order digital variable bandpass sigma-delta modulator topologies were implemented in VHDL and synthesized on a Xilinx Spartan-3 development kit using fixed-point arithmetic. Circuit outputs were taken via the RS232 connection provided on the FPGA board and evaluated using MATLAB routines developed by the author; these routines also included the decimation process. The experiments undertaken by the author further validated the design methodology presented in the work. In addition, a novel tunable and reconfigurable second-order variable bandpass sigma-delta modulator has been designed and evaluated at the behavioural level. This topology offers a flexible set of choices for designers and can operate either in single- or dual-mode, enabling multi-band implementations on a single digital variable bandpass sigma-delta modulator. This work is also supported by a novel user-friendly design and evaluation tool, developed in MATLAB/Simulink, that can speed up the design, evaluation and comparison of analog and digital single-stage and time-interleaved variable bandpass sigma-delta modulators. This tool enables the user to specify the conversion type, topology, loop-filter type, path number and oversampling ratio.
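As a behavioural-level illustration of variable bandpass noise shaping, the sketch below implements a single-path second-order error-feedback modulator whose noise transfer function NTF(z) = 1 - 2 cos(theta) z^-1 + z^-2 places its zeros at a chosen centre frequency (theta = 0 gives lowpass shaping, theta = pi/2 the classic fs/4 bandpass case); this is a deliberate simplification, not one of the thesis's time-interleaved multi-path topologies:

```python
import numpy as np

def bandpass_sdm(u, f0, fs):
    """Single-bit, second-order error-feedback modulator with NTF zeros at f0."""
    theta = 2.0 * np.pi * f0 / fs        # NTF zero angle for centre frequency f0
    c = 2.0 * np.cos(theta)
    e1 = e2 = 0.0                        # delayed quantization errors
    y = np.empty_like(u)
    for n, un in enumerate(u):
        v = un - c * e1 + e2             # error feedback: F(z) = -c*z^-1 + z^-2
        y[n] = 1.0 if v >= 0.0 else -1.0 # single-bit quantizer
        e2, e1 = e1, y[n] - v            # shift the error history
    return y

fs, f0 = 1.0, 0.25                       # normalized rates: fs/4 bandpass case
t = np.arange(4096)
u = 0.5 * np.sin(2.0 * np.pi * f0 * t)   # in-band test tone
out = bandpass_sdm(u, f0, fs)            # its spectrum shows a noise notch at f0
```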
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Traffic demand increases are pushing aging ground transportation infrastructures to their theoretical capacity. The result of this demand is traffic bottlenecks that are a major cause of delay on urban freeways. In addition, the queues associated with those bottlenecks increase the probability of a crash while adversely affecting environmental measures such as emissions and fuel consumption. With limited resources available for network expansion, traffic professionals have developed active traffic management systems (ATMS) in an attempt to mitigate the negative consequences of traffic bottlenecks. Among these ATMS strategies, variable speed limits (VSL) and ramp metering (RM) have been gaining international interest for their potential to improve safety, mobility, and environmental measures at freeway bottlenecks. Though previous studies have shown the tremendous potential of variable speed limit (VSL) control and of VSL paired with ramp metering (VSLRM), little guidance has been developed to assist decision makers in the planning phase of a congestion mitigation project that is considering VSL or VSLRM control. To address this need, this study has developed a comprehensive decision/deployment support tool for the application of VSL and VSLRM control in recurrently congested environments. The decision tool will assist practitioners in deciding the most appropriate control strategy at a candidate site, which candidate sites have the most potential to benefit from the suggested control strategy, and how to most effectively design the field deployment of the suggested control strategy at each implementation site. To do so, the tool comprises three key modules: (1) a Decision Module, (2) a Benefits Module, and (3) a Deployment Guidelines Module. Each module uses commonly known traffic flow and geometric parameters as inputs to statistical models and empirically based procedures to provide guidance on the application of VSL and VSLRM at each candidate site. These models and procedures were developed from the outputs of simulated experiments calibrated with field data. To demonstrate the application of the tool, a list of real-world candidate sites was selected from the Maryland State Highway Administration Mobility Report, and field data from each candidate site were input into the tool to illustrate the step-by-step process required for efficient planning of VSL or VSLRM control. The output of the tool includes the suggested control system at each site, a ranking of the sites based on the expected benefit-to-cost ratio, and guidelines on how to deploy the VSL signs, ramp meters, and detectors at the deployment site(s). This research has the potential to assist traffic engineers in the planning of VSL and VSLRM control, thus enhancing the procedure for allocating limited resources for mobility and safety improvements on highways plagued by recurrent congestion.
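A hypothetical sketch of the tool's three-module flow, with placeholder site data and a placeholder decision rule rather than the study's calibrated statistical models:

```python
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    v_over_c: float          # peak volume-to-capacity ratio
    has_metered_ramp: bool
    est_benefit: float       # annualized benefit estimate [$]
    est_cost: float          # annualized deployment cost [$]

def decision_module(site: Site) -> str:
    # Placeholder rule: pair VSL with ramp metering wherever a ramp exists.
    return "VSLRM" if site.has_metered_ramp else "VSL"

def benefits_module(site: Site) -> float:
    # Expected benefit-to-cost ratio used to rank candidate sites.
    return site.est_benefit / site.est_cost

sites = [Site("I-95 NB", 1.15, True, 4.2e6, 1.4e6),
         Site("US-50 WB", 1.05, False, 1.8e6, 0.9e6)]
for s in sorted(sites, key=benefits_module, reverse=True):
    print(s.name, decision_module(s), f"BCR = {benefits_module(s):.2f}")
```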
Abstract:
The aims of this thesis were to evaluate types of wave channels and wave-generated currents, the effect of selected parameters on them, and to identify and compare the types of wave makers used in laboratory settings. In this study, the design and construction of a two-dimensional channel (flume) and wave maker for experiments on marine buoys, marine structures and energy conversion systems were also investigated. The physical relationship between the pump and the pumping system and the design of current generation in the flume were evaluated. The calculations for the steel frame, the glass-sided channel, and the equations for the wave maker plate motion, the motor power and the wave absorber (coastal slope) were carried out. A servo motor was designed and applied to drive the wave maker plate, and a ball-screw linear actuator was used to convert the rotary motion to linear motion and improve the movement mechanism of the equipment. A Programmable Logic Controller (PLC) was used to control the wave maker system. The study also reviewed the types of ocean energy and energy conversion systems. In another part of this research, wave energy extraction systems, specifically the Oscillating Water Column (OWC), were examined, and a sample model was designed and tested in the hydraulic channel at the Sheikh Bahaii building of Azad University, Science and Research Branch. The dimensions of the designed flume were 16 × 1.98 × 0.57 m, and it is able to produce regular waves, as well as irregular waves with small changes to the control system. The wave-making ability of the designed channel was evaluated, and the results showed that all of the design calculations for the flume were correct. The mean error between the measured results and the theoretical calculations was 7%, which indicates good agreement. By evaluating the designed OWC model and modifying some parts of the system, a larger version of this model could be used for designing an energy conversion system. The results showed that the best chamber geometry at the exit of the system was an angle of zero degrees (0°) for the moving lower part and forty-five degrees (45°) for the front wall, with the forward extension of the front wall kept at two times the wave height.
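For the wave maker design step, a minimal sketch of first-order plane wavemaker theory (Dean and Dalrymple), assuming a piston-type paddle, which relates the paddle stroke S to the generated wave height H via H/S = 2(cosh 2kh - 1)/(sinh 2kh + 2kh); the thesis's servo-driven plate may instead be hinged (flap type), which has a different ratio, so this is illustrative only:

```python
import math

def wavenumber(T, h, g=9.81):
    """Solve the linear dispersion relation omega^2 = g*k*tanh(k*h) by fixed-point iteration."""
    omega = 2.0 * math.pi / T
    k = omega**2 / g                     # deep-water initial guess
    for _ in range(100):
        k = omega**2 / (g * math.tanh(k * h))
    return k

def piston_stroke_for_height(H, T, h):
    """Paddle stroke S needed for wave height H: H/S = 2(cosh 2kh - 1)/(sinh 2kh + 2kh)."""
    kh = wavenumber(T, h) * h
    ratio = 2.0 * (math.cosh(2 * kh) - 1.0) / (math.sinh(2 * kh) + 2 * kh)
    return H / ratio

# Example: a 10 cm, 1.2 s wave in 0.4 m of water in a small flume.
print(f"required stroke: {piston_stroke_for_height(0.10, 1.2, 0.4):.3f} m")
```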
Abstract:
The objective of this thesis is the investigation of the Mode-I fracture mechanics parameters of quasi-brittle materials, to shed light on the influence of the width and size of the specimen on the fracture response of notched beams. To further the knowledge of the fracture process, 3D digital image correlation (DIC) was employed. A new method is proposed to determine experimentally the critical value of the crack opening, which is then used to determine the size of the fracture process zone (FPZ). In addition, the Mode-I fracture mechanics parameters are compared with the Mode-II interfacial properties of composite materials whose matrices are the quasi-brittle materials studied under Mode-I conditions. To investigate the Mode-II fracture parameters, single-lap direct shear tests are performed. Notched concrete beams with six cross-sections have been tested using a three-point bending (TPB) test set-up (Mode-I fracture mechanics); two depths and three widths of the beam are considered. In addition to the concrete beams, alkali-activated mortar beams (AAMs) that differ by the type and size of the aggregates have been tested using the same TPB set-up, with two dimensions of AAMs considered. The load-deflection response obtained from DIC is compared with the load-deflection response obtained from the readings of two linear variable displacement transformers (LVDTs). Load responses, peak loads, strain profiles along the ligament from DIC, fracture energy and failure modes of the TPB tests are discussed. The Mode-II problem is investigated by testing steel reinforced grout (SRG) composites bonded to masonry and concrete elements under single-lap direct shear tests. Two types of anchorage systems are proposed for SRG-reinforced masonry and concrete elements to study their effectiveness. An indirect method is proposed to find the interfacial properties, compare them with the Mode-I fracture properties of the matrix, and model the effect of the anchorage.
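A minimal sketch of the work-of-fracture calculation commonly used with TPB tests on notched beams (RILEM-style): the fracture energy G_F is the total work, including a self-weight contribution, divided by the ligament area; the load-deflection curve and dimensions below are placeholders, not the thesis data:

```python
import numpy as np

def fracture_energy(load_N, deflection_m, beam_mass_kg, width_m, depth_m,
                    notch_depth_m, g=9.81):
    # Area under the load-deflection curve (trapezoidal rule).
    work = float(np.sum(0.5 * (load_N[1:] + load_N[:-1]) * np.diff(deflection_m)))
    work += beam_mass_kg * g * deflection_m[-1]     # self-weight contribution
    ligament_area = width_m * (depth_m - notch_depth_m)
    return work / ligament_area                     # [J/m^2 = N/m]

delta = np.linspace(0.0, 1.2e-3, 200)               # mid-span deflection [m]
P = 1800.0 * (delta / 3e-4) * np.exp(-delta / 3e-4) # toy softening load curve [N]
print(f"G_F = {fracture_energy(P, delta, 9.0, 0.10, 0.10, 0.03):.1f} N/m")
```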
Abstract:
Negative-ion mode electrospray ionization, ESI(-), with Fourier transform ion cyclotron resonance mass spectrometry (FT-ICR MS) was coupled to Partial Least Squares (PLS) regression and variable selection methods to estimate the total acid number (TAN) of Brazilian crude oil samples. Generally, ESI(-)-FT-ICR mass spectra present a resolving power of ca. 500,000 and a mass accuracy of less than 1 ppm, producing a data matrix containing over 5700 variables per sample. These variables correspond to heteroatom-containing species detected as deprotonated molecules, [M - H](-) ions, which are identified primarily as naphthenic acids, phenols and carbazole analog species. The TAN values for all samples ranged from 0.06 to 3.61 mg of KOH g(-1). To facilitate the spectral interpretation, three methods of variable selection were studied: variable importance in the projection (VIP), interval partial least squares (iPLS) and elimination of uninformative variables (UVE). The UVE method seems to be the most appropriate for selecting important variables, reducing the number of variables to 183 and producing a root mean square error of prediction of 0.32 mg of KOH g(-1). By reducing the size of the data, it was possible to relate the selected variables to their corresponding molecular formulas, thus identifying the main chemical species responsible for the TAN values.
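A minimal sketch of the PLS-plus-VIP step on synthetic data standing in for the FT-ICR MS intensity matrix; the VIP > 1 cutoff is the conventional threshold and an assumption here, and the paper's preferred UVE method is not reproduced:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
n_samples, n_vars = 30, 500               # the real matrix had >5700 variables
X = rng.normal(size=(n_samples, n_vars))
tan = X[:, :10] @ rng.uniform(0.5, 1.0, 10) + 0.1 * rng.normal(size=n_samples)

pls = PLSRegression(n_components=5).fit(X, tan)

# Variable importance in the projection (VIP) scores.
t, w, q = pls.x_scores_, pls.x_weights_, pls.y_loadings_
p = w.shape[0]
ss = np.sum(t**2, axis=0) * q.ravel()**2  # y-variance explained per component
vip = np.sqrt(p * (w**2 / np.sum(w**2, axis=0)) @ ss / ss.sum())

selected = np.where(vip > 1.0)[0]
print(f"{selected.size} variables kept; true predictors recovered: "
      f"{np.intersect1d(selected, np.arange(10)).size}/10")
```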