942 results for Machine-tools - numerical control
Decision tools for bacterial blight resistance gene deployment in rice-based agricultural ecosystems
Abstract:
Attempting to achieve long-lasting, stable resistance through uniformly deployed rice varieties is not a sustainable approach. The real situation is far more complex and dynamic: pathogens quickly adapt to resistant varieties. To prevent disease epidemics, deployment should be customized, and this decision will require interdisciplinary action. This perspective article highlights current progress on deploying disease resistance to control bacterial blight in rice. Although the rice-Xanthomonas oryzae pv. oryzae model system has distinctive features that call for case-by-case analysis, strategies to integrate those elements into a single decision tool could readily be extended to other crops.
Abstract:
The Ingold-port adaptation of a free-beam NIR spectrometer is tailored for optimal bioprocess monitoring and control. The device shows an excellent signal-to-noise ratio owing to a large free aperture and hence a large sample volume. This is particularly evident in the batch trajectories, which show high reproducibility. The robust and compact design withstands rough process environments as well as SIP/CIP cycles. Robust free-beam NIR process analyzers are indispensable tools within the PAT/QbD framework for real-time process monitoring and control. They enable multiparametric, non-invasive measurements of analyte concentrations and process trajectories. Free-beam NIR spectrometers are an ideal tool for defining golden batches and process borders in the sense of QbD. Moreover, sophisticated data analysis, both quantitative and MSPC, leads directly to far better process understanding. Information can be provided online in easy-to-interpret graphs that allow the operator to make fast, knowledge-based decisions. This ultimately leads to more stable process operation, better performance, and fewer failed batches.
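MSPC batch monitoring of the kind described above is commonly built on a PCA model of known-good ("golden") batches, with Hotelling's T² flagging deviations from the modeled subspace. A minimal sketch in Python (NumPy only; the synthetic spectra, component count, and disturbance are illustrative, not data from the device described):

```python
import numpy as np

# Synthetic "golden batch" data: 30 batches x 50 spectral channels (illustrative).
rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, (30, 50))

# Mean-center and fit a 3-component PCA model via SVD.
mu = X.mean(axis=0)
Xc = X - mu
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 3
P = Vt[:k].T                            # loadings (50 x 3)
var = (S[:k] ** 2) / (X.shape[0] - 1)   # variance captured by each score

def hotelling_t2(spectrum):
    """Hotelling's T^2 of a new spectrum against the golden-batch model."""
    t = (spectrum - mu) @ P             # project onto the PCA subspace
    return float(np.sum(t ** 2 / var))

# A spectrum near the golden mean scores low; a disturbed one scores high.
t2_normal = hotelling_t2(mu + 0.01 * rng.normal(size=50))
t2_fault = hotelling_t2(mu + 5.0 * Vt[0])   # strong shift along PC1
```

In practice a control limit (e.g. from the F-distribution) separates in-control from out-of-control batches; here the two T² values simply illustrate the contrast.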
Abstract:
This dissertation focuses on coordinated pricing and inventory management problems; the relevant background is provided in Chapter 1. Several periodic-review models are then discussed in Chapters 2, 3, 4, and 5, respectively. Chapter 2 analyzes a deterministic single-product model in which a price adjustment cost is incurred whenever the current selling price differs from that of the previous period. We develop exact algorithms for the problem under different conditions and find that computational complexity varies significantly with the cost structure. Moreover, our numerical study indicates that dynamic pricing strategies may outperform static pricing strategies even when the price adjustment cost accounts for a significant portion of the total profit. Chapter 3 develops a single-product model in which demand in a period depends not only on the current selling price but also on past prices through the so-called reference price. Strongly polynomial-time algorithms are designed for the case with no fixed ordering cost, and a heuristic, together with an error-bound estimate, is proposed for the general case. Moreover, we illustrate through numerical studies that incorporating the reference price effect into coordinated pricing and inventory models can have a significant impact on firms' profits. Chapter 4 discusses the stochastic version of the model in Chapter 3 when customers are loss averse. It extends the associated results in the literature and proves that the reference-price-dependent base-stock policy is optimal under certain conditions. Rather than dealing with specific problems, Chapter 5 establishes the preservation of supermodularity in a class of optimization problems.
This property and its extensions include several existing results in the literature as special cases and provide powerful tools, as we illustrate through applications to several operations problems: the stochastic two-product model with cross-price effects, the two-stage inventory control model, and the self-financing model.
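The reference-price mechanism of Chapter 3 is commonly modeled as an exponentially smoothed memory of past prices, with demand adjusted by the gap between the posted price and the reference. A minimal sketch, assuming a standard linear demand form and a smoothing factor α (all functional forms and parameter values are illustrative, not taken from the dissertation):

```python
def update_reference(r_prev, p_prev, alpha=0.8):
    """Exponential smoothing of past prices into a reference price."""
    return alpha * r_prev + (1 - alpha) * p_prev

def demand(p, r, base=100.0, b=2.0, gamma=1.5):
    """Linear demand with a reference-price effect: customers perceive
    p > r as a surcharge (demand drops) and p < r as a discount."""
    return max(0.0, base - b * p - gamma * (p - r))

# Simulate a few periods of a fixed price path.
prices = [20.0, 22.0, 22.0, 18.0]
r = prices[0]            # initialize the reference at the first posted price
sales = []
for p in prices:
    sales.append(demand(p, r))
    r = update_reference(r, p)
```

The coupling is visible in the trace: raising the price above the slowly moving reference depresses demand beyond the pure price effect, which is what makes coordinated pricing and inventory decisions interact across periods.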
Abstract:
In database applications, access control security layers are mostly built from tools provided by database management system vendors and deployed on the same servers that contain the data to be protected. This solution has several drawbacks, among which we emphasize: (1) if policies are complex, their enforcement can degrade the performance of database servers; (2) when modifications to the established policies imply modifications to the business logic (usually deployed on the client side), there is no option but to modify the business logic accordingly; and (3) malicious users can systematically issue CRUD expressions against the DBMS in the hope of finding a security gap. To overcome these drawbacks, in this paper we propose an access control stack characterized by the following: most of the mechanisms are deployed on the client side; whenever security policies evolve, the security mechanisms are automatically updated at runtime; and client-side applications do not handle CRUD expressions directly. We also present an implementation of the proposed stack to prove its feasibility. This paper thus presents a new approach to enforcing access control in database applications, which we expect to contribute positively to the state of the art in the field.
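One way to keep CRUD expressions out of client code, in the spirit of the stack described above, is to expose only policy-checked operation objects whose permission rules can be swapped at runtime when policies evolve. A minimal Python sketch of that idea (the policy table, roles, and entities are hypothetical, not the paper's implementation):

```python
class PolicyRegistry:
    """Maps (role, entity, action) to a permission; reloadable at runtime."""
    def __init__(self, rules):
        self.rules = set(rules)

    def allowed(self, role, entity, action):
        return (role, entity, action) in self.rules

    def reload(self, rules):
        # Called when security policies evolve; takes effect immediately.
        self.rules = set(rules)

class Orders:
    """Client-side facade: applications call methods, never raw CRUD/SQL."""
    def __init__(self, registry, role):
        self.registry, self.role = registry, role

    def read(self, order_id):
        if not self.registry.allowed(self.role, "orders", "read"):
            raise PermissionError("read on orders denied for " + self.role)
        return {"id": order_id}   # stand-in for the real, server-built query

registry = PolicyRegistry([("clerk", "orders", "read")])
orders = Orders(registry, "clerk")
row = orders.read(42)             # permitted under the current policy
registry.reload([])               # policy change propagates at runtime
```

Because the application never constructs CRUD expressions itself, a policy update only touches the registry, not the business logic, addressing drawbacks (2) and (3) above.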
Abstract:
Developments in theory and experiment have raised the prospect of an electronic technology based on the discrete nature of electron tunnelling through a potential barrier. This thesis deals with novel design and analysis tools developed to study such systems. Possible devices include those constructed from ultrasmall normal tunnelling junctions. These exhibit charging effects, including the Coulomb blockade and correlated electron tunnelling. They allow transistor-like control of the transfer of single carriers and offer the prospect of digital systems operating at the information-theoretic limit. As such, they are often referred to as single-electronic devices. Single-electronic devices exhibit self-quantising logic and good structural tolerance. Their speed, immunity to thermal noise, and operating voltage all scale beneficially with junction capacitance. For ultrasmall junctions, room-temperature operation on sub-picosecond timescales seems feasible. However, they are sensitive to external charge, whether from trapping-detrapping events, externally gated potentials, or system cross-talk. Quantum effects such as macroscopic quantum tunnelling of charge may degrade performance. Finally, any practical system will be complex and spatially extended (amplifying the above problems) and prone to fabrication imperfection. This summarises why new design and analysis tools are required. Simulation tools are developed, concentrating on the basic building blocks of single-electronic systems: the tunnelling junction array and the gated turnstile device. Three main points are considered: the best method of estimating capacitance values from the physical system geometry; the mathematical model that should represent electron tunnelling based on these data; and the application of this model to the investigation of single-electronic systems. (DXN004909)
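The correlated tunnelling described above is usually analysed within the standard "orthodox theory" of single-electron tunnelling, in which the rate of a tunnelling event across a junction of tunnel resistance R_T depends on the free-energy change ΔF the event would cause:

```latex
\Gamma(\Delta F) \;=\; \frac{1}{e^{2} R_T}\,\frac{-\Delta F}{1 - \exp\!\left(\Delta F / k_B T\right)}
```

At T → 0 this gives Γ = |ΔF|/(e²R_T) for energetically favourable events (ΔF < 0) and Γ → 0 otherwise; the Coulomb blockade is precisely the regime in which every available tunnelling event has ΔF > 0 and is therefore suppressed.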
Abstract:
This work used free-license hardware and software tools to set up a low-cost, easily deployed cellular base transceiver station (BTS). Starting from the technical concepts that facilitate installation of the OpenBTS system, and using the USRP N210 (Universal Software Radio Peripheral) hardware, a network analogous to the GSM mobile telephony standard was deployed. Mobile phones were registered as SIP (Session Initiation Protocol) extensions in Asterisk, making it possible to place calls between terminals, send text messages (SMS), and make calls from an OpenBTS terminal to another mobile operator, among other services.
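Routing calls between OpenBTS handsets registered as SIP extensions is typically done in the Asterisk dialplan; a minimal illustrative fragment (the extension numbers, context name, and trunk name are hypothetical, not taken from this project):

```ini
; extensions.conf -- route calls between two OpenBTS handsets
[openbts-phones]
exten => 1001,1,Dial(SIP/1001,20)   ; ring handset 1001 for 20 s
 same => n,Hangup()
exten => 1002,1,Dial(SIP/1002,20)
 same => n,Hangup()
; anything else goes out to the external operator via a SIP trunk
exten => _X.,1,Dial(SIP/trunk-operator/${EXTEN})
```

The catch-all `_X.` pattern is what enables the reported calls from an OpenBTS terminal out to another mobile operator.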
Abstract:
This paper describes the application of a Brain Emotional Learning (BEL) controller to improve the response of an SDOF structural system under earthquake excitation using a magnetorheological (MR) damper. The main goal is to study the performance of a BEL-based semi-active control system in generating the control signal for an MR damper. The proposed approach consists of two controllers: a primary controller based on a BEL algorithm that determines the desired damping force from the system response, and a secondary controller that modifies the input current to the MR damper to generate the reference damping force. A parametric model of the damper is used to predict the damping force based on the piston motion and the current input. A Simulink model of the structural system is developed to analyze the effectiveness of the semi-active controller. Finally, the numerical results are presented and discussed.
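The two-level scheme above can be sketched as a single control step. Since the abstract does not give the BEL update laws, a simple proportional-derivative rule stands in for the primary controller, and a Bingham-type model F = c·v + f_y(I)·sign(v) stands in for the parametric damper model (all gains and coefficients are illustrative):

```python
import math

def primary_controller(displacement, velocity, kp=5.0e4, kd=1.0e3):
    """Stand-in for the BEL primary controller: desired damping force
    computed from the measured structural response (illustrative PD rule)."""
    return -(kp * displacement + kd * velocity)

def damper_force(current, velocity, c=800.0, fy_per_amp=600.0):
    """Bingham-type MR damper model: viscous term plus a yield force
    that grows with coil current."""
    return c * velocity + fy_per_amp * current * math.copysign(1.0, velocity)

def secondary_controller(f_desired, velocity, i_max=2.0,
                         c=800.0, fy_per_amp=600.0):
    """Invert the damper model for the current that best tracks the desired
    force; the damper can only dissipate, so the current is clipped."""
    if velocity == 0.0:
        return 0.0
    i = (f_desired - c * velocity) / (fy_per_amp * math.copysign(1.0, velocity))
    return min(max(i, 0.0), i_max)

# One control step: response -> desired force -> coil current -> actual force
x, v = 0.01, -0.2                      # displacement [m], velocity [m/s]
f_des = primary_controller(x, v)
i_cmd = secondary_controller(f_des, v)
f_act = damper_force(i_cmd, v)
```

When the desired force lies inside the damper's dissipative envelope, as here, the secondary controller reproduces it exactly; otherwise the clipped current yields the closest achievable force.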
Abstract:
This article is concerned with the numerical detection of bifurcation points of nonlinear partial differential equations as some parameter of interest is varied. In particular, we study in detail the numerical approximation of the Bratu problem, based on exploiting the symmetric version of the interior penalty discontinuous Galerkin finite element method. A framework for a posteriori control of the discretization error in the computed critical parameter value is developed based upon the application of the dual weighted residual (DWR) approach. Numerical experiments are presented to highlight the practical performance of the proposed a posteriori error estimator.
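For reference, the Bratu problem studied here is the nonlinear elliptic boundary-value problem

```latex
-\Delta u = \lambda\, e^{u} \quad \text{in } \Omega, \qquad u = 0 \quad \text{on } \partial\Omega,
```

which possesses a fold (turning-point) bifurcation at a critical parameter value λ*: solutions exist for λ < λ* and cease to exist beyond it. The DWR approach estimates the discretization error in the computed λ* by weighting local residuals of the discrete solution with an approximately computed dual (adjoint) solution, which is what drives the a posteriori error estimator described above.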
Abstract:
We study a reaction–diffusion mathematical model for the evolution of atherosclerosis as an inflammation process by combining analytical tools with computer-intensive numerical calculations. The computational work involved calculating more than sixty thousand solutions of the full reaction–diffusion system and led to a complete characterisation of the ω-limit for every initial condition. Qualitative properties of the solution are rigorously proved, some of them hinted at by the numerical study.
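A sweep over initial conditions of the kind described can be organised around a standard explicit finite-difference step for a one-component reaction–diffusion equation u_t = D u_xx + f(u). The kinetics below (logistic, Fisher–KPP type) are a generic stand-in, not the paper's inflammation model; with them, every positive uniform initial state is attracted to the same ω-limit u ≡ 1:

```python
import numpy as np

def step(u, dt=1e-3, dx=0.05, D=1.0):
    """One explicit Euler step of u_t = D u_xx + u(1-u), zero-flux
    (Neumann) boundaries; dt*D/dx^2 = 0.4 respects the stability limit."""
    lap = np.empty_like(u)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    lap[0] = 2 * (u[1] - u[0]) / dx**2
    lap[-1] = 2 * (u[-2] - u[-1]) / dx**2
    return u + dt * (D * lap + u * (1 - u))

def run_to_t(u0, t_end=10.0, dt=1e-3):
    """Integrate the initial profile u0 up to time t_end."""
    u = u0.copy()
    for _ in range(int(t_end / dt)):
        u = step(u, dt=dt)
    return u

# Sweep of uniform initial amplitudes: all converge toward u == 1.
x = np.linspace(0.0, 1.0, 21)
finals = [run_to_t(np.full_like(x, a)) for a in (0.1, 0.5, 0.9)]
```

Characterising the ω-limit for *every* initial condition, as the paper reports, amounts to running such sweeps over a far larger family of (non-uniform) profiles and classifying the long-time states they approach.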
Abstract:
In this article we consider the a posteriori error estimation and adaptive mesh refinement of discontinuous Galerkin finite element approximations of the bifurcation problem associated with the steady incompressible Navier-Stokes equations. Particular attention is given to the reliable error estimation of the critical Reynolds number at which a steady pitchfork or Hopf bifurcation occurs when the underlying physical system possesses reflectional or Z_2 symmetry. Here, computable a posteriori error bounds are derived based on employing the generalization of the standard Dual-Weighted-Residual approach, originally developed for the estimation of target functionals of the solution, to bifurcation problems. Numerical experiments highlighting the practical performance of the proposed a posteriori error indicator on adaptively refined computational meshes are presented.
Abstract:
It is remarkable that there are no deployed military hybrid vehicles, given that battlefield fuel costs approximately 100 times as much as civilian fuel. In the commercial marketplace, where fuel prices are much lower, hybrid electric vehicles have become increasingly common due to their increased fuel efficiency and the associated operating-cost benefit. The absence of military hybrid vehicles is not due to a lack of investment in research and development, but rather because applying hybrid vehicle architectures to a military application poses unique challenges. These challenges include inconsistent duty cycles for propulsion requirements and the absence of methods for considering vehicle energy holistically. This dissertation addresses these challenges by presenting a method to quantify the benefits of a military hybrid vehicle by regarding that vehicle as a microgrid. This concept allowed the creation of an expandable, multiple-input numerical optimization method that was implemented for both real-time control and system-design optimization; an example of each implementation is presented. Optimization-in-the-loop using the new method was compared to a traditional closed-loop control system and proved more fuel efficient. System-design optimization using the method successfully illustrated battery-size optimization by iterating through various electric duty cycles. By utilizing this new multiple-input numerical optimization method, a holistic view of duty-cycle synthesis, vehicle energy use, and vehicle design optimization can be achieved.
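The system-design use of the method, iterating battery size over electric duty cycles, can be caricatured as a grid search over candidate sizes. The duty cycles, cost model, and candidate sizes below are toy stand-ins, not the dissertation's data or optimizer:

```python
# Hypothetical duty cycles: hourly power demands in kW (negative = regen).
duty_cycles = [
    [30, 45, -10, 60, 20],
    [10, 80, 70, -20, 40],
]

def fuel_used(battery_kwh, cycle, engine_kw=50.0):
    """Toy cost model: the battery covers demand above the engine rating
    and recovers regen up to its capacity; engine output counts as fuel."""
    soc = battery_kwh / 2.0               # start half charged
    fuel = 0.0
    for p in cycle:                       # 1-hour steps for simplicity
        from_batt = min(max(p - engine_kw, 0.0), soc) if p > engine_kw else 0.0
        soc -= from_batt
        if p < 0:                         # regenerative braking recharges
            soc = min(soc - p, battery_kwh)
        fuel += max(min(p, engine_kw), 0.0) + max(p - engine_kw - from_batt, 0.0)
    return fuel

def best_battery(sizes):
    """Pick the size minimizing total fuel across all duty cycles."""
    return min(sizes, key=lambda b: sum(fuel_used(b, c) for c in duty_cycles))

size = best_battery([5.0, 10.0, 20.0, 40.0])
```

A real design study would add battery cost and mass penalties, so the optimum would not simply be the largest pack; the sketch only shows the iterate-over-duty-cycles structure.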
Abstract:
Several modern cooling applications require the incorporation of mini/micro-channel shear-driven flow condensers, and several design challenges must be overcome to meet those requirements. The difficulty in developing effective design tools for shear-driven flow condensers is exacerbated by the lack of a bridge between physics-based modeling of condensing flows and the current, popular approach based on semi-empirical heat transfer correlations. A primary contributor to this disconnect is that typical heat transfer correlations eliminate the dependence of the heat transfer coefficient on the method of cooling employed on the condenser surface, even though this may well not be justified. This is in direct contrast to direct physics-based modeling approaches, in which the thermal boundary conditions have a direct and substantial impact on the heat transfer coefficient values. Typical heat transfer correlations instead introduce vapor quality as one of the variables on which the heat transfer coefficient depends. This study shows how, under certain conditions, a heat transfer correlation from direct physics-based modeling can be equivalent to typical engineering heat transfer correlations without making the same a priori assumptions. Another factor that casts doubt on the validity of the heat transfer correlations is the opacity associated with the application of flow regime maps for internal condensing flows. It is well known that flow regimes strongly influence heat transfer rates. Nevertheless, several heat transfer correlations ignore flow regimes entirely and present a single heat transfer correlation for all of them; this is believed to be inaccurate, since one would expect significant differences in the heat transfer correlations for different flow regimes.
Several other studies present a heat transfer correlation for a particular flow regime; however, they ignore the method by which the extent of that flow regime is established. This thesis provides a definitive answer (in the context of stratified/annular flows) to: (i) whether a heat transfer correlation can always be independent of the thermal boundary condition and represented as a function of vapor quality, and (ii) whether a heat transfer correlation can be obtained independently for a flow regime without knowing the flow regime boundary (even if the flow regime boundary is represented through a separate and independent correlation). To obtain the results required to answer these questions, this study uses two numerical simulation tools: the approximate but highly efficient Quasi-1D simulation tool and the exact but more expensive 2D Steady Simulation tool. Using these tools and approximate values of the flow regime transitions, a deeper understanding of the current state of knowledge in flow regime maps and heat transfer correlations for shear-driven internal condensing flows is obtained. The ideas presented here can be extended to other flow regimes of shear-driven flows, and analogous correlations can be obtained for internal condensers in gravity-driven and mixed-driven configurations.
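The "typical engineering correlation" the thesis questions has the quality-dependent form h = h_l · f(x, p_r), with no reference to the thermal boundary condition. The classic Shah (1979) condensation correlation is one widely used example of this structure (the fluid properties and operating point below are illustrative placeholders, not the thesis's data):

```python
def h_liquid(re_l, pr_l, k_l, d):
    """Dittus-Boelter-type single-phase coefficient for the liquid
    phase flowing alone (W/m^2 K)."""
    return 0.023 * re_l**0.8 * pr_l**0.4 * k_l / d

def h_shah(x, p_reduced, h_l):
    """Shah (1979) condensation correlation: the two-phase coefficient is a
    function of vapor quality x and reduced pressure only -- independent of
    the thermal boundary condition, which is exactly the assumption the
    thesis examines."""
    return h_l * ((1 - x)**0.8
                  + 3.8 * x**0.76 * (1 - x)**0.04 / p_reduced**0.38)

# Illustrative operating point: plausible liquid-film Reynolds number and a
# refrigerant-like reduced pressure in a 1 mm channel.
hl = h_liquid(re_l=8000.0, pr_l=3.5, k_l=0.08, d=0.001)
profile = [h_shah(x, 0.25, hl) for x in (0.2, 0.5, 0.8)]
```

The coefficient rises monotonically with quality at fixed mass flux, regardless of how the wall is cooled; the thesis asks under what conditions a physics-based model, where the wall condition matters, can collapse to this boundary-condition-independent form.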