Abstract:
Dissolution of non-aqueous phase liquids (NAPLs) or gases into groundwater is a key process, both for contamination problems originating from organic liquid sources and for dissolution trapping in geological storage of CO2. Dissolution in natural systems will typically involve both high and low NAPL saturations and a wide range of pore water flow velocities within the same source zone. To correctly predict dissolution in such complex systems, and as the NAPL saturations change over time, models must be capable of predicting dissolution under a range of saturations and flow conditions. To provide data to test and validate such models, an experiment was conducted in a two-dimensional sand tank, where the dissolution of a spatially variable, 5 × 5 cm² DNAPL tetrachloroethene source was carefully measured using X-ray attenuation techniques at a resolution of 0.2 × 0.2 cm². By continuously measuring the NAPL saturations, the temporal evolution of DNAPL mass loss by dissolution to groundwater could be measured at each pixel. Next, a general dissolution and solute transport code was written, and several published rate-limited (RL) dissolution models and a local equilibrium (LE) approach were tested against the experimental data. It was found that none of the models could adequately predict the observed dissolution pattern, particularly in the zones of higher NAPL saturation. Combining these models with a model for NAPL pool dissolution produced qualitatively better agreement with the experimental data, but the total matching error was not significantly improved. A sensitivity study of commonly used fitting parameters further showed that several combinations of these parameters could produce equally good fits to the experimental observations.
The results indicate that common empirical model formulations for RL dissolution may be inadequate in complex, variable-saturation NAPL source zones, and that further model development and testing are desirable.
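A minimal sketch of the kind of first-order rate-limited (RL) mass-transfer closure such models build on, assuming a lumped mass-transfer coefficient `k_la`, effective solubility `c_s` and NAPL density `rho_n` (all names and values here are illustrative assumptions, not the specific empirical models or fitted parameters of the study):

```python
import math

def rl_dissolution(v_napl0, c_s, k_la, q, v_w, rho_n, dt, steps):
    """Deplete the NAPL volume in a single flushed cell using a
    first-order rate-limited mass-transfer closure.

    The effluent concentration approaches the effective solubility c_s
    as 1 - exp(-k_la * tau), with residence time tau = v_w / q, and the
    dissolved mass flux q * c_out depletes the NAPL volume each step.
    """
    v_napl = v_napl0
    series = [v_napl]
    for _ in range(steps):
        if v_napl <= 0.0:
            series.append(0.0)
            continue
        tau = v_w / q                                 # residence time [s]
        c_out = c_s * (1.0 - math.exp(-k_la * tau))   # RL effluent conc. [kg/m^3]
        v_napl = max(0.0, v_napl - q * c_out * dt / rho_n)
        series.append(v_napl)
    return series
```

In the limit of large `k_la` this recovers the local equilibrium (LE) case `c_out = c_s`; for small `k_la` the cell dissolves much more slowly, which is the qualitative distinction the tested model families formalize.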
Abstract:
A CMOS vector-sum phase shifter covering the full 360° range is presented in this paper. Broadband operational transconductance amplifiers with variable transconductance provide coarse scaling of the quadrature vector amplitudes. Fine scaling of the amplitudes is accomplished using a passive resistive network. Expressions are derived to predict the maximum bit resolution of the phase shifter from the scaling factors of the coarse and fine vector-scaling stages. The phase shifter was designed and fabricated in a standard 130-nm CMOS process and was tested on-wafer over the frequency range of 4.9–5.9 GHz. The phase shifter delivers root mean square (rms) phase and amplitude errors of 1.25° and 0.7 dB, respectively, at the midband frequency of 5.4 GHz. The input and output return losses are both below 17 dB over the band, and the insertion loss is better than 4 dB over the band. The circuit occupies an area of 0.303 mm² excluding bonding pads and draws 28 mW from a 1.2 V supply.
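The vector-sum principle behind such phase shifters can be sketched numerically: the output phase is set by the ratio of the scaled in-phase and quadrature amplitudes, with the signs of the scaling factors selecting the quadrant (a generic illustration; the names `a_i` and `a_q` are assumptions, not the paper's notation):

```python
import math

def vector_sum(a_i, a_q):
    """Combine scaled in-phase and quadrature vectors.

    Returns (phase_deg, amplitude): the signs of the two scaling
    factors select the quadrant, so sweeping them covers 0-360°.
    """
    phase = math.degrees(math.atan2(a_q, a_i)) % 360.0
    amplitude = math.hypot(a_i, a_q)
    return phase, amplitude
```

Equal positive scaling of both vectors gives 45°, for example, while negating the quadrature scaling alone moves the output into the fourth quadrant.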
Abstract:
In this thesis, novel analog-to-digital and digital-to-analog generalized time-interleaved variable bandpass sigma-delta modulators are designed, analysed, evaluated and implemented that are suitable for high-performance data conversion for a broad spectrum of applications. These generalized time-interleaved variable bandpass sigma-delta modulators can perform noise-shaping for any centre frequency from DC to Nyquist. The proposed topologies are well-suited for Butterworth, Chebyshev, inverse-Chebyshev and elliptical filters, where designers have the flexibility of specifying the centre frequency, bandwidth as well as the passband and stopband attenuation parameters. The application of the time-interleaving approach, in combination with these bandpass loop-filters, not only overcomes the limitations that are associated with conventional and mid-band resonator-based bandpass sigma-delta modulators, but also offers an elegant means to increase the conversion bandwidth, thereby relaxing the need to use faster or higher-order sigma-delta modulators. A step-by-step design technique has been developed for the design of time-interleaved variable bandpass sigma-delta modulators. Using this technique, an assortment of lower- and higher-order single- and multi-path generalized A/D variable bandpass sigma-delta modulators was designed, evaluated and compared in terms of signal-to-noise ratio, hardware complexity, stability, tonality and sensitivity for ideal and non-ideal topologies. Extensive behavioural-level simulations verified that one of the proposed topologies not only used fewer coefficients but also exhibited greater robustness to non-idealities. Furthermore, second-, fourth- and sixth-order single- and multi-path digital variable bandpass sigma-delta modulators are designed using this technique.
The mathematical modelling and evaluation of tones caused by the finite wordlengths of these digital multi-path sigma-delta modulators, when excited by sinusoidal input signals, are also derived from first principles and verified using simulation and experimental results. The fourth-order digital variable bandpass sigma-delta modulator topologies are implemented in VHDL and synthesized on a Xilinx® Spartan™-3 Development Kit using fixed-point arithmetic. Circuit outputs were taken via the RS232 connection provided on the FPGA board and evaluated using MATLAB routines developed by the author; these routines also included the decimation process. The experiments undertaken by the author further validated the design methodology presented in the work. In addition, a novel tunable and reconfigurable second-order variable bandpass sigma-delta modulator has been designed and evaluated at the behavioural level. This topology offers a flexible set of choices for designers and can operate either in single- or dual-mode, enabling multi-band implementations on a single digital variable bandpass sigma-delta modulator. This work is also supported by a novel user-friendly design and evaluation tool, developed in MATLAB/Simulink, that can speed up the design, evaluation and comparison of analog and digital single-stage and time-interleaved variable bandpass sigma-delta modulators. This tool enables the user to specify the conversion type, topology, loop-filter type, path number and oversampling ratio.
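For illustration, a behavioural model of a conventional second-order lowpass single-bit sigma-delta modulator (a generic textbook loop, not the variable bandpass or time-interleaved topologies of the thesis) shows the basic noise-shaping mechanism the work generalizes:

```python
def sigma_delta_2nd(samples):
    """Behavioural model of a second-order single-bit sigma-delta
    modulator: two cascaded discrete-time integrators with unit
    feedback from the 1-bit quantizer to both integrator inputs.
    """
    i1 = i2 = 0.0
    out = []
    for x in samples:
        y = 1.0 if i2 >= 0.0 else -1.0   # 1-bit quantizer decision
        i1 += x - y                       # first integrator
        i2 += i1 - y                      # second integrator
        out.append(y)
    return out
```

The ±1 bitstream average tracks the input while quantization noise is pushed to high frequencies; a DC input of 0.5, for instance, yields a stream whose mean converges to 0.5, which the decimation stage then recovers.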
Abstract:
The schistosome blood flukes are some of the largest global causes of parasitic morbidity. Further study of the specific antibody response during schistosomiasis may yield the vaccines and diagnostics needed to combat this disease. Therefore, for the purposes of antigen discovery, sera and antibody-secreting cell (ASC) probes from semi-permissive rats and sera from susceptible mice were used to screen a schistosome protein microarray. Following Schistosoma japonicum infection, rats had reduced pathology, increased antibody responses and broader antigen recognition profiles compared with mice. With successive infections, rat global serological reactivity and the number of recognized antigens increased. The local antibody response in rat skin and lung, measured with ASC probes, increased after parasite migration and contributed antigen-specific antibodies to the multivalent serological response. In addition, the temporal variation of anti-parasite serum antibodies after infection and reinfection followed patterns that appear related to the antigen driving the response. Among the 29 antigens differentially recognized by the infected hosts were numerous known vaccine candidates, drug targets and several S. japonicum homologs of human schistosomiasis resistance markers: the tegument allergen-like proteins. From this set, we prioritized eight proteins that may prove to be novel schistosome vaccine and diagnostic antigens.
Abstract:
We performed an immunogenetic analysis of 345 IGHV-IGHD-IGHJ rearrangements from 337 cases with primary splenic small B-cell lymphomas of marginal-zone origin. Three immunoglobulin (IG) heavy variable (IGHV) genes accounted for 45.8% of the cases (IGHV1-2, 24.9%; IGHV4-34, 12.8%; IGHV3-23, 8.1%). Particularly for the IGHV1-2 gene, strong biases were evident regarding utilization of different alleles, with 79/86 rearrangements (92%) using allele *04. Among cases more stringently classified as splenic marginal-zone lymphoma (SMZL) thanks to the availability of splenic histopathological specimens, the frequency of IGHV1-2*04 peaked at 31%. The IGHV1-2*04 rearrangements carried significantly longer complementarity-determining region-3 (CDR3) than all other cases and showed biased IGHD gene usage, leading to CDR3s with common motifs. The great majority of analyzed rearrangements (299/345, 86.7%) carried IGHV genes with some impact of somatic hypermutation, from minimal to pronounced. Noticeably, 75/79 (95%) IGHV1-2*04 rearrangements were mutated; however, they mostly (56/75 cases; 74.6%) carried few mutations (97-99.9% germline identity) of conservative nature and restricted distribution. These distinctive features of the IG receptors indicate selection by (super)antigenic element(s) in the pathogenesis of SMZL. Furthermore, they raise the possibility that certain SMZL subtypes could derive from progenitor populations adapted to particular antigenic challenges through selection of VH domain specificities, in particular the IGHV1-2*04 allele.
Abstract:
Supply Chain Simulation (SCS) is applied to acquire information to support outsourcing decisions, but obtaining enough detail in key parameters can often be a barrier to making well-informed decisions.
One aspect of SCS that has been relatively unexplored is the impact of inaccurate data around delays within the SC. The impact of the magnitude and variability of process cycle time on typical performance indicators in an SC context is studied.
System cycle time, WIP levels and throughput are more sensitive to the magnitude of deterministic deviations in process cycle time than to variable deviations. Manufacturing costs are not very sensitive to these deviations.
Future opportunities include investigating the impact of process failure or product defects, including logistics and transportation between SC members, and using alternative costing methodologies.
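The sensitivity of WIP to cycle-time magnitude follows directly from Little's law, WIP = throughput × cycle time; a minimal sketch with hypothetical numbers:

```python
def average_wip(throughput_per_hr, cycle_time_hr):
    """Little's law: average WIP = throughput x average cycle time."""
    return throughput_per_hr * cycle_time_hr
```

At a fixed throughput, a deterministic 10% increase in cycle time raises WIP by exactly 10%: `average_wip(10.0, 5.0)` gives 50.0 units in process, while `average_wip(10.0, 5.5)` gives 55.0.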
Abstract:
One of the important problems in machine learning is determining the complexity of the model to be learned. Too much complexity leads to overfitting, which corresponds to finding structures that do not actually exist in the data, while too little complexity leads to underfitting, meaning that the expressiveness of the model is insufficient to capture all the structures present in the data. For some probabilistic models, model complexity translates into the introduction of one or more latent variables whose role is to explain the generative process of the data. Various approaches exist for identifying the appropriate number of latent variables in a model. This thesis focuses on Bayesian nonparametric methods for determining the number of latent variables to use as well as their dimensionality. The popularization of Bayesian nonparametric statistics within the machine learning community is fairly recent. Their main appeal is that they offer highly flexible models whose complexity scales with the amount of available data. In recent years, research on Bayesian nonparametric learning methods has focused on three main aspects: the construction of new models, the development of inference algorithms, and applications. This thesis presents our contributions to these three research topics in the context of learning latent variable models. First, we introduce the Pitman-Yor process mixture of Gaussians, a model for learning infinite mixtures of Gaussians. We also present an inference algorithm for discovering the latent components of the model, which we evaluate on two concrete robotics applications.
Our results show that the proposed approach outperforms classical learning approaches in both performance and flexibility. Second, we propose the extended cascading Indian buffet process, a model serving as a prior probability distribution over the space of directed acyclic graphs. In the context of Bayesian networks, this prior makes it possible to identify both the presence of latent variables and the network structure among them. A Markov chain Monte Carlo inference algorithm is used for evaluation on structure identification and density estimation problems. Finally, we propose the Indian chefs process, a model more general than the extended cascading Indian buffet process for learning graphs and orders. The advantage of the new model is that it admits connections between observable variables and takes the order of the variables into account. We present a reversible-jump Markov chain Monte Carlo inference algorithm for jointly learning graphs and orders. The evaluation is carried out on density estimation and independence testing problems. This model is the first Bayesian nonparametric model capable of learning Bayesian networks with a completely arbitrary structure.
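As an illustration of how such nonparametric priors let model complexity grow with the data, here is a Chinese-restaurant-style sampler for a Pitman-Yor partition (a generic sketch with discount `d` and concentration `alpha`; not the inference algorithms of the thesis):

```python
import random

def pitman_yor_partition(n, d, alpha, seed=0):
    """Sample a random partition of n items from a Pitman-Yor process
    via its Chinese-restaurant seating scheme.

    With i items already seated at K tables, the next item joins table k
    with probability (n_k - d) / (i + alpha) and opens a new table with
    probability (alpha + d * K) / (i + alpha), where 0 <= d < 1 is the
    discount and alpha > -d the concentration.
    """
    rng = random.Random(seed)
    tables = []                      # occupancy count per table
    for i in range(n):
        if not tables:
            tables.append(1)
            continue
        r = rng.random() * (i + alpha)
        acc = 0.0
        for k, n_k in enumerate(tables):
            acc += n_k - d
            if r < acc:
                tables[k] += 1
                break
        else:
            tables.append(1)         # new table = new latent component
    return tables
```

Each table plays the role of a mixture component: the number of occupied tables, and hence the effective model complexity, grows without bound as more items arrive, at a rate governed by `d` and `alpha`.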
Abstract:
Abstract not available
Abstract:
In deregulated power markets it is necessary to have an appropriate transmission pricing methodology that also takes congestion and reliability into account, in order to ensure economically viable, equitable and congestion-free power transfer capability with high reliability and security. This thesis presents the results of research conducted on the development of a Decision Making Framework (DMF) of concepts, data-analytic and modelling methods for the reliability-benefit-reflective optimal evaluation of transmission cost for composite power systems, using probabilistic methods. The methodology within the DMF devised and reported in this thesis utilises a full AC Newton-Raphson load flow and a Monte Carlo approach to determine reliability indices, which are then used in the proposed Meta-Analytical Probabilistic Approach (MAPA) for the evaluation and calculation of the Reliability-benefit Reflective Optimal Transmission Cost (ROTC) of a transmission system. The DMF includes methods for allocating transmission-line embedded costs among transmission transactions, accounting for line capacity use, as well as congestion costing that can be used for pricing, applying both the Power Transfer Distribution Factor (PTDF) and Bialek's method; together these form the series of methods and procedures, explained in detail in the thesis, that constitute the proposed MAPA for the ROTC. The MAPA utilises bus, generator, line, reliability and Customer Damage Function (CDF) data for congestion, transmission and reliability costing studies, using the proposed PTDF application and other established methods, which are then compared, analysed and selected according to area/state requirements and integrated to develop the ROTC.
Case studies involving the standard 7-bus, IEEE 30-bus and 146-bus Indian utility test systems are conducted and reported in the relevant sections of the dissertation. There is close correlation between the results obtained through the proposed PTDF application and those of Bialek's and various MW-Mile methods. The novel contributions of this research are: first, the application of the PTDF method developed for determining transmission and congestion costing, which is compared with other proven methods; the viability of the developed method is explained in the methodology, discussion and conclusion chapters. Second, the development of a comprehensive DMF that helps decision makers analyse and select a costing approach according to their requirements, since all the costing approaches in the DMF have been integrated to achieve the ROTC. Third, the composite methodology for calculating the ROTC has been implemented as suites of algorithms and MATLAB programs for each part of the DMF, described further in the methodology section. Finally, the dissertation concludes with suggestions for future work.
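For reference, the PTDF matrix used in such congestion-costing studies can be computed from a DC power-flow model; a minimal numpy sketch on an arbitrary small network (the function name and data layout are illustrative assumptions, not the thesis implementation):

```python
import numpy as np

def ptdf_matrix(lines, n_bus, slack=0):
    """DC power-flow PTDF matrix.

    Entry [l, i] is the change in MW flow on line l per 1 MW injected
    at bus i and withdrawn at the slack bus.
    lines: list of (from_bus, to_bus, susceptance) tuples.
    """
    n_l = len(lines)
    A = np.zeros((n_l, n_bus))        # branch-bus incidence matrix
    Bd = np.zeros((n_l, n_l))         # diagonal branch susceptances
    for l, (f, t, b) in enumerate(lines):
        A[l, f], A[l, t] = 1.0, -1.0
        Bd[l, l] = b
    B = A.T @ Bd @ A                  # nodal susceptance matrix
    keep = [i for i in range(n_bus) if i != slack]
    Binv = np.zeros((n_bus, n_bus))   # reduced inverse; slack row/col = 0
    Binv[np.ix_(keep, keep)] = np.linalg.inv(B[np.ix_(keep, keep)])
    return Bd @ A @ Binv
```

On a symmetric three-bus triangle with equal susceptances, for example, an injection at a non-slack bus splits 2/3 onto the direct line to the slack and 1/3 around the longer path, which is the kind of line-use share that PTDF-based cost allocation prices.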
Abstract:
The three basic factors to consider in pricing products or services are customers, competition and costs. Two approaches are commonly used, based on management accounting information. The cost-plus approach uses a general formula that adds a markup to a cost base as the starting point of the pricing decision. Many different costs (such as absorption or full production costs) can serve as the cost base when applying the cost-plus formula. Prices are then adjusted based on customer reactions and competitor responses. The most widely used pricing approach today is target costing, in which the target price that customers are willing to pay for a product or service is set first. A target operating profit per unit is subtracted from the target price to determine the target cost per unit. The challenge for the organization is to achieve the necessary cost improvements through value engineering and continuous improvement.
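The two pricing formulas described above reduce to simple arithmetic; a minimal sketch with hypothetical figures:

```python
def cost_plus_price(cost_base, markup_pct):
    """Cost-plus: price = cost base plus a markup percentage of that base."""
    return cost_base * (1.0 + markup_pct / 100.0)

def target_cost(target_price, target_profit_per_unit):
    """Target costing: allowable unit cost = target price minus the
    target operating profit per unit."""
    return target_price - target_profit_per_unit
```

For instance, `cost_plus_price(80.0, 25.0)` yields a price of 100.0, while working backwards, `target_cost(100.0, 15.0)` leaves an allowable unit cost of 85.0 that value engineering must then achieve.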
Abstract:
The equity market in Colombia is still developing, as is investor confidence when it comes to choosing optimal investment portfolios that maximize expected returns at minimum risk. This research therefore explores in depth the sectors that make up the stock market and determines which are more profitable than others, using the model proposed by Harry Markowitz and taking into account Sharpe's extensions to the theory through the Sharpe ratio and betas. The sectors comprising the Colombian equity market include financials, materials, energy, consumer staples, services and industrials, all of which follow the same bearish trend as the COLCAP index, which has delivered negative returns in recent years. This research thus provides the reader and investor with tools that apply the Markowitz model to identify, from historical data, the sectors in which investment is recommended and those which, given the trend, should instead be avoided. It should be noted, however, that this research is based on historical data, trends and mathematical calculations that may differ from current reality, since economic, political or social circumstances can affect the returns of the stocks and sectors in which people decide to invest.
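The Sharpe ratio and beta used to rank sectors can be sketched with their standard definitions (generic formulas over hypothetical return series, not the study's data):

```python
def sharpe_ratio(returns, risk_free=0.0):
    """Mean excess return divided by its sample standard deviation."""
    n = len(returns)
    excess = [r - risk_free for r in returns]
    mean = sum(excess) / n
    var = sum((e - mean) ** 2 for e in excess) / (n - 1)
    return mean / var ** 0.5

def beta(asset_returns, market_returns):
    """CAPM beta: cov(asset, market) / var(market)."""
    n = len(asset_returns)
    ma = sum(asset_returns) / n
    mm = sum(market_returns) / n
    cov = sum((a - ma) * (m - mm)
              for a, m in zip(asset_returns, market_returns)) / (n - 1)
    var = sum((m - mm) ** 2 for m in market_returns) / (n - 1)
    return cov / var
```

A sector with a higher Sharpe ratio delivers more excess return per unit of risk, while a beta above 1 means it amplifies movements of the market index (here, the COLCAP).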
Abstract:
In recent years, the need to develop more environmentally friendly and efficient cars has led to the development of several technologies to improve the performance of internal combustion engines. A large part of these innovations focus on the auxiliary systems of the engine, including the oil pump, an element of great importance in the dynamics of the engine as well as a considerable energy consumer. Most oil pump solutions to this day use fixed displacement; at medium and high speeds the pump flow rate is higher than the needs of the engine, and this excess flow must be recirculated, which represents a waste of energy. Recently, technological advances in this area have led to the creation of variable displacement oil pumps, which have become a 'must have' due to the numerous advantages they bring. Although the working principle of vane or piston pumps is relatively well known, the application of this technology in the automotive industry is new and brings new challenges. The focus of this dissertation is to develop a new concept of variable displacement system for automotive oil pumps. The main objective is to obtain a concept that is fully adaptable to existing solutions on the market (engines), both dimensionally and in performance specifications, while featuring an innovative mechanical system for obtaining variable displacement. The developed design is a variable displacement vane pump, in line with existing commercial solutions; however, the variation of eccentricity commonly used to provide variable displacement delivery is not employed. Instead, variable displacement is achieved without varying the eccentricity of the system, by varying the length of the pumping chamber.
The principle of operation of the pump differs from existing solutions while maintaining the ability to integrate standard parts such as control valves and mechanical safety valves; the pump is compatible with commercial solutions in terms of the interfaces connecting the engine systems and the pump. A concept prototype of the product was produced in order to better evaluate the validity of the concept. The developed concept represents an innovation in oil pump design, being unique in its mechanical system for variable displacement delivery.
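The energy argument for variable displacement can be illustrated with the basic delivery relation Q = V_d · n · η_v; a sketch with hypothetical values (not figures from the dissertation):

```python
def pump_delivery_lpm(displacement_cc_per_rev, speed_rpm, vol_eff=1.0):
    """Theoretical pump delivery in L/min:
    displacement [cc/rev] x shaft speed [rev/min] x volumetric efficiency."""
    return displacement_cc_per_rev * speed_rpm * vol_eff / 1000.0

def recirculated_flow_lpm(delivered_lpm, engine_demand_lpm):
    """Excess flow a fixed-displacement pump must recirculate (wasted work)."""
    return max(0.0, delivered_lpm - engine_demand_lpm)
```

With these assumed figures, a 10 cc/rev fixed-displacement pump at 3000 rpm delivers 30 L/min; if the engine only needs 18 L/min, 12 L/min is recirculated, which is precisely the loss a variable displacement design avoids by reducing the effective displacement instead.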