899 results for Hyperbolic Dynamic System
Abstract:
Spacecraft formation flying navigation continues to receive a great deal of interest. The research presented in this dissertation focuses on developing methods for estimating spacecraft absolute and relative positions, assuming measurements of only relative positions using wireless sensors. Implementing the extended Kalman filter for the spacecraft formation navigation problem results in high estimation errors and, at times, instabilities in state estimation, due to the high nonlinearities in the system dynamic model. Several approaches are attempted in this dissertation aimed at increasing estimation stability and improving estimation accuracy. A differential geometric filter is implemented for spacecraft position estimation. The differential geometric filter avoids the linearization step (which is always carried out in the extended Kalman filter) through a mathematical transformation that converts the nonlinear system into a linear one. A linear estimator is designed in the linear domain and then transformed back to the physical domain. As detailed in this dissertation, this approach demonstrated better estimation stability for spacecraft formation position estimation. The constrained Kalman filter is also implemented for estimating the absolute positions of spacecraft flying in formation. The orbital motion of a spacecraft is characterized by two range extrema (perigee and apogee), at which the rate of change of the spacecraft's range vanishes. This motion constraint can be used to improve position estimation accuracy. Applying the constrained Kalman filter at only two points in the orbit, however, causes filter instability; two variables are therefore introduced into the constrained Kalman filter to maintain stability and improve estimation accuracy. An extended Kalman filter is implemented as a benchmark for comparison with the constrained Kalman filter, and simulation results show that the constrained Kalman filter provides better estimation accuracy than the extended Kalman filter. A Weighted Measurement Fusion Kalman Filter (WMFKF) is also proposed in this dissertation. In wireless localizing sensors, measurement error is proportional to the distance the signal travels and to the sensor noise. In the proposed WMFKF, the signal traveling time delay is not modeled; instead, each measurement is weighted based on the measured signal travel distance. The estimation performance is compared to that of the standard Kalman filter in two scenarios: the first assumes the use of a wireless local positioning system (WLPS) in a GPS-denied environment, and the second assumes the availability of both WLPS and GPS measurements. Simulation results show that the WMFKF has accuracy similar to the standard Kalman filter (KF) in the GPS-denied environment, yet maintains the position estimation error within its expected error boundary when the WLPS detection range limit is above 30 km. In addition, the WMFKF shows better accuracy and stability when GPS is available. A computational cost analysis shows that the WMFKF has a lower computational cost than the standard KF and a higher ellipsoid error probable percentage than the standard Measurement Fusion method. Finally, a method to determine the relative attitudes between three spacecraft is developed. The method requires four direction measurements between the three spacecraft.
The simulation results and covariance analysis show that the method's error falls within a three-sigma boundary without exhibiting any singularity issues. A study of the accuracy of the proposed method with respect to the shape of the spacecraft formation is also presented.
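The distance-based weighting described above can be made concrete with a small sketch. Below is a minimal, hypothetical Kalman measurement update in which each range measurement's noise variance is inflated with its measured travel distance, so that distant (noisier) detections are down-weighted; the quadratic scaling and the factor `alpha` are illustrative assumptions, not the dissertation's exact formulation.

```python
import numpy as np

def wmf_kalman_update(x, P, z, H, r0, distances, alpha=1e-3):
    """One Kalman measurement update with distance-based weighting.

    x, P      : prior state estimate and covariance
    z         : vector of range measurements
    H         : measurement matrix (linearized about x)
    r0        : nominal sensor noise variance
    distances : measured signal travel distance per measurement
    alpha     : hypothetical distance-to-variance scale factor
    """
    # Weight each measurement: variance grows with travel distance,
    # so distant (noisier) measurements contribute less to the update.
    R = np.diag(r0 + alpha * np.asarray(distances) ** 2)
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x_new = x + K @ (z - H @ x)            # updated state
    P_new = (np.eye(len(x)) - K @ H) @ P   # updated covariance
    return x_new, P_new
```

With `alpha = 0` this reduces to the standard KF update, which is consistent with the two filters showing similar accuracy when all measured distances are short.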
DESIGN AND IMPLEMENT DYNAMIC PROGRAMMING BASED DISCRETE POWER LEVEL SMART HOME SCHEDULING USING FPGA
Abstract:
With the development and capabilities of the Smart Home system, people today are entering an era in which household appliances are no longer just controlled by people, but also operated by a Smart System. This results in a more efficient, convenient, comfortable, and environmentally friendly living environment. A critical part of the Smart Home system is Home Automation, which means that there is a Micro-Controller Unit (MCU) to control all the household appliances and schedule their operating times. This reduces electricity bills by shifting power consumption from on-peak hours to off-peak hours according to the hourly electricity price. In this paper, we propose an algorithm for scheduling multi-user power consumption and implement it on an FPGA board, using it as the MCU. This algorithm for scheduling discrete power level tasks is based on dynamic programming, which finds a scheduling solution close to the optimal one. We chose an FPGA as our system's controller because FPGAs offer low complexity, parallel processing capability, a large number of I/O interfaces for further development, and programmability in both software and hardware. In conclusion, the algorithm runs quickly on the FPGA board and the solution obtained is good enough for consumers.
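As an illustration of the dynamic-programming formulation, the sketch below schedules a single discrete-power-level task against hourly prices by tabulating the cheapest way to reach each (hour, consumed-energy) state. It is a toy, single-task version of the idea in software, not the paper's multi-user FPGA implementation, and all names are hypothetical.

```python
def schedule_task(prices, levels, energy_needed):
    """Toy DP scheduler for one discrete-power-level task.

    prices        : electricity price for each hour (on-peak vs. off-peak)
    levels        : allowed discrete power levels per hour, e.g. [0, 1]
    energy_needed : total energy units the task must consume
    """
    T, INF = len(prices), float("inf")
    # cost[t][e] = cheapest way to have consumed e units after hour t
    cost = [[INF] * (energy_needed + 1) for _ in range(T + 1)]
    prev = [[None] * (energy_needed + 1) for _ in range(T + 1)]
    cost[0][0] = 0.0
    for t in range(T):
        for e in range(energy_needed + 1):
            if cost[t][e] == INF:
                continue
            for p in levels:
                e2 = min(energy_needed, e + p)
                c = cost[t][e] + p * prices[t]
                if c < cost[t + 1][e2]:
                    cost[t + 1][e2], prev[t + 1][e2] = c, (e, p)
    assert cost[T][energy_needed] < INF, "task cannot be completed in time"
    # Trace the chosen power level back through each hour.
    schedule, e = [0] * T, energy_needed
    for t in range(T, 0, -1):
        e, schedule[t - 1] = prev[t][e]
    return cost[T][energy_needed], schedule

# The 2 energy units land in the two cheapest (off-peak) hours:
# prints (2.0, [0, 1, 1, 0]).
print(schedule_task(prices=[5.0, 1.0, 1.0, 4.0], levels=[0, 1], energy_needed=2))
```

The table has (hours × energy levels) states with one transition per power level, which keeps the state space small enough to be attractive for a hardware implementation.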
Abstract:
BACKGROUND: Engineered nanoparticles are becoming increasingly ubiquitous, and their toxicological effects on human health, as well as on the ecosystem, have become a concern. Since initial contact with nanoparticles occurs at the epithelium in the lungs (or skin, or eyes), in vitro cell studies with nanoparticles require dose-controlled systems for delivery of nanoparticles to epithelial cells cultured at the air-liquid interface. RESULTS: A novel air-liquid interface cell exposure system (ALICE) for nanoparticles in liquids is presented and validated. The ALICE generates a dense cloud of droplets with a vibrating membrane nebulizer and utilizes combined cloud settling and single-particle sedimentation for fast (~10 min for the entire exposure), repeatable (<12%), low-stress, and efficient delivery of nanoparticles, or dissolved substances, to cells cultured at the air-liquid interface. Validation with various types of nanoparticles (Au, ZnO, and carbon black nanoparticles) and solutes (such as NaCl) showed that the ALICE provided spatially uniform deposition (<1.6% variability) and had no adverse effect on the viability of a widely used alveolar human epithelial-like cell line (A549). The dose deposited on the cells can be controlled with a quartz crystal microbalance (QCM) over a dynamic range of at least 0.02-200 µg/cm². The cell-specific deposition efficiency is currently limited to 0.072 (7.2%, for two commercially available six-well transwell plates), but a deposition efficiency of up to 0.57 (57%) is possible with better cell coverage of the exposure chamber. Dose-response measurements with ZnO nanoparticles (0.3-8.5 µg/cm²) showed significant differences in mRNA expression of pro-inflammatory (IL-8) and oxidative stress (HO-1) markers when comparing submerged and air-liquid interface exposures. Both exposure methods showed no cellular response below 1 µg/cm² ZnO, which indicates that ZnO nanoparticles are not toxic at occupationally allowed exposure levels. CONCLUSION: The ALICE is a useful tool for dose-controlled nanoparticle (or solute) exposure of cells at the air-liquid interface. The significant differences in cellular response between submerged and air-liquid interface ZnO exposures suggest that pharmaceutical and toxicological studies with inhaled (nano-)particles should be performed under the more realistic air-liquid interface conditions rather than under submerged cell conditions.
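For orientation, the delivered dose scales linearly with the nebulized mass through the deposition efficiency. The sketch below works through this arithmetic using the 7.2% efficiency reported above; the nebulized mass and cell culture area are hypothetical numbers, not values from the study.

```python
def cell_dose_ug_per_cm2(nebulized_mass_ug, deposition_efficiency, cell_area_cm2):
    """Expected dose per unit cell area for a single exposure."""
    return nebulized_mass_ug * deposition_efficiency / cell_area_cm2

# Hypothetical run: 100 ug nebulized, 7.2% efficiency (reported above),
# 4.2 cm^2 of exposed cell culture area (assumed).
dose = cell_dose_ug_per_cm2(100.0, 0.072, 4.2)
assert 0.02 <= dose <= 200.0   # inside the QCM-controllable range reported
print(f"{dose:.2f} ug/cm^2")   # ~1.71 ug/cm^2
```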
Abstract:
OBJECTIVE: The purpose of this study was to compare a standard peripheral end-hole angiocatheter with catheters modified with side holes or side slits, using experimental optical techniques to qualitatively compare the contrast material exit jets and numeric techniques to provide flow visualization and quantitative comparisons. MATERIALS AND METHODS: A Schlieren imaging system was used to visualize the angiocatheter exit jet fluid dynamics at two different flow rates. Catheters were modified by drilling through-and-through side holes or by cutting slits into the catheters. A commercial computational fluid dynamics package was used to calculate numeric results for various vessel diameters and catheter orientations. RESULTS: Experimental images showed that modifying standard peripheral IV angiocatheters with side holes or side slits qualitatively changed the overall flow field and caused the exiting jet to become less well defined. Numeric calculations showed that the addition of side holes or slits resulted in a 9-30% reduction in the velocity of contrast material exiting the end hole of the angiocatheter. With the catheter tip directed obliquely to the wall, the maximum wall shear stress was always highest for the unmodified catheter and always lowest for the four-side-slit catheter. CONCLUSION: Modified angiocatheters may have the potential to reduce extravasation events in patients by reducing vessel wall shear stress.
Abstract:
Most languages fall into one of two camps: either they adopt a unique, static type system, or they abandon static type-checks for run-time checks. Pluggable types blur this division by (i) making static type systems optional, and (ii) supporting a choice of type systems for reasoning about different kinds of static properties. Dynamic languages can then benefit from static-checking without sacrificing dynamic features or committing to a unique, static type system. But the overhead of adopting pluggable types can be very high, especially if all existing code must be decorated with type annotations before any type-checking can be performed. We propose a practical and pragmatic approach to introduce pluggable type systems to dynamic languages. First of all, only annotated code is type-checked. Second, limited type inference is performed on unannotated code to reduce the number of reported errors. Finally, external annotations can be used to type third-party code. We present Typeplug, a Smalltalk implementation of our framework, and report on experience applying the framework to three different pluggable type systems.
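Typeplug targets Smalltalk, but the "only annotated code is type-checked" principle can be illustrated by analogy with Python's gradual typing, where annotations are likewise optional and a pluggable checker such as mypy verifies only what is annotated. The functions below are hypothetical, a sketch of the principle rather than the paper's system.

```python
def parse_port(raw: str) -> int:
    # Annotated: an optional checker (e.g. mypy) verifies this signature;
    # at runtime the annotations impose nothing.
    return int(raw)

def legacy_helper(x):
    # Unannotated: treated as dynamic. Limited inference on code like
    # this keeps the reported-error count manageable without forcing
    # annotations onto the whole existing code base.
    return parse_port(x)

port = legacy_helper("8080")   # dynamic code calling into checked code
```

External annotations for third-party code also have a Python analogue: stub (.pyi) files supply types for a library without modifying its source.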
Abstract:
A large body of research analyzes the runtime execution of a system to extract abstract behavioral views. Those approaches primarily analyze control flow by tracing method execution events, or they analyze object graphs of heap snapshots. However, they do not capture how objects are passed through the system at runtime. We refer to the exchange of objects as the object flow, and we claim that object flow must be analyzed if we are to understand the runtime behavior of an object-oriented application. We propose and detail Object Flow Analysis, a novel dynamic analysis technique that takes this new information into account. To evaluate its usefulness, we present a visual approach that allows a developer to study classes and components in terms of how they exchange objects at runtime. We illustrate our approach on three case studies.
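The kind of instrumentation such an analysis relies on can be sketched in a few lines: record, for each object, the calls it passes through, rather than only which methods execute. The decorator and functions below are hypothetical Python stand-ins, not the paper's implementation.

```python
import functools
from collections import defaultdict

# id(obj) -> the call sites this object has flowed through so far
object_flow = defaultdict(list)

def track_flow(fn):
    """Record every argument object that flows into `fn` (a toy
    stand-in for Object Flow Analysis instrumentation)."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        for obj in list(args) + list(kwargs.values()):
            object_flow[id(obj)].append(fn.__qualname__)
        return fn(*args, **kwargs)
    return wrapper

@track_flow
def parse(request):
    return {"user": request}

@track_flow
def handle(record):
    return record["user"]

req = "alice"
handle(parse(req))
print(object_flow[id(req)])   # ['parse'] -- the path this object took
```

Note how a pure method-event trace would report that both `parse` and `handle` ran, but not that the request object itself only flowed into `parse`.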
Abstract:
Mainstream IDEs such as Eclipse support developers in managing software projects mainly by offering static views of the source code. Such a static perspective neglects any information about runtime behavior. However, object-oriented programs heavily rely on polymorphism and late binding, which makes them difficult to understand based solely on their static structure. Developers thus resort to debuggers or profilers to study the system's dynamics. However, the information provided by these tools is volatile and hence cannot be exploited to ease the navigation of the source space. In this paper we present an approach to augment the static source perspective with dynamic metrics such as precise runtime type information, or memory and object allocation statistics. Dynamic metrics can improve our understanding of the behavior and structure of a system. We rely on dynamic data gathering based on aspects to analyze running Java systems. By solving concrete use cases we illustrate how dynamic metrics directly available in the IDE are useful. We also report comprehensively on the efficiency of our approach to gathering dynamic metrics.
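The paper gathers its data from running Java systems via aspects; purely as an illustrative analogue, the Python sketch below collects one such dynamic metric, precise runtime argument types per function, which no static view of a late-bound program could provide. All names are hypothetical.

```python
import collections
import functools

# function name -> Counter of observed runtime argument-type tuples
runtime_types = collections.defaultdict(collections.Counter)

def dynamic_metrics(fn):
    """Collect one dynamic metric: precise runtime argument types,
    the kind of data a purely static IDE view cannot show."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        runtime_types[fn.__qualname__][tuple(type(a).__name__ for a in args)] += 1
        return fn(*args, **kwargs)
    return wrapper

@dynamic_metrics
def render(item):
    return str(item)

render(3); render(3.5); render("x")
print(dict(runtime_types))
# {'render': Counter({('int',): 1, ('float',): 1, ('str',): 1})}
```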
Abstract:
In conventional software applications, synchronization code is typically interspersed with functional code, thereby impacting the understandability and maintainability of the code base. At the same time, synchronization defined statically in the code is not capable of adapting to different runtime situations. We propose a new approach to concurrency control which strictly separates the functional code from the synchronization requirements to be used, and which adapts objects to be synchronized dynamically to their environment. First-class synchronization specifications express safety requirements, and a synchronization system adapts objects dynamically to different runtime situations. We present an overview of a prototype of our approach together with several classical concurrency problems, and we discuss open issues for further research.
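The separation can be sketched with a wrapper that applies locking to an otherwise lock-free object, so the safety requirement lives outside the functional code and is attached at runtime. This is a minimal illustration of the principle, not the paper's first-class specification mechanism; all names are hypothetical.

```python
import threading

class Synchronized:
    """Wrap any object so its method calls are serialized by one lock.

    The functional class below stays free of synchronization code; the
    safety requirement is attached dynamically at wrap time.
    """
    def __init__(self, target):
        self._target = target
        self._lock = threading.RLock()

    def __getattr__(self, name):
        attr = getattr(self._target, name)
        if not callable(attr):
            return attr
        def locked(*args, **kwargs):
            with self._lock:
                return attr(*args, **kwargs)
        return locked

class Counter:                       # purely functional code, no locks
    def __init__(self):
        self.n = 0
    def incr(self):
        self.n += 1

safe = Synchronized(Counter())
threads = [threading.Thread(target=lambda: [safe.incr() for _ in range(1000)])
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(safe.n)   # 4000: all increments survive concurrent access
```

Because the wrapper is chosen at object-creation time, the same `Counter` can run unsynchronized in single-threaded contexts, which is the kind of runtime adaptability the approach argues for.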
Abstract:
In this paper we analyze a dynamic agency problem where contracting parties do not know the agent's future productivity at the beginning of the relationship. We consider a two-period model where both the agent and the principal observe the agent's second-period productivity at the end of the first period. This observation is assumed to be non-verifiable information. We compare long-term contracts with short-term contracts with respect to their suitability to motivate effort in both periods. On the one hand, short-term contracts allow for a better fine-tuning of second-period incentives as they can be aligned with the agent's second-period productivity. On the other hand, in short-term contracts first-period effort incentives might be distorted as contracts have to be sequentially optimal. Hence, the difference between long-term and short-term contracts is characterized by a trade-off between inducing effort in the first and in the second period. We analyze the determinants of this trade-off and demonstrate its implications for performance measurement and information system design.
Abstract:
Interactive ray tracing of non-trivial scenes is just becoming feasible on single graphics processing units (GPUs). Recent work in this area focuses on building effective acceleration structures, which work well under the constraints of current GPUs. Most approaches are targeted at static scenes and only allow navigation in the virtual scene. So far, support for dynamic scenes has not been considered in GPU implementations. We have developed a GPU-based ray tracing system for dynamic scenes consisting of a set of individual objects. Each object may independently move around, but its geometry and topology are static.
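The abstract does not name its acceleration structure, but the stated constraint (rigid objects that move independently) is exactly what a two-level scheme exploits: a per-object structure built once over the static geometry, plus a small top level over world-space object bounds that is refit each frame. The Python sketch below illustrates that idea on the CPU with axis-aligned bounds; all names are hypothetical and the real system runs on the GPU.

```python
import numpy as np

class RigidObject:
    """Static geometry (reduced here to its bounds), movable by translation."""
    def __init__(self, points):
        self.points = np.asarray(points, float)   # built once, never edited
        self.local_lo = self.points.min(axis=0)
        self.local_hi = self.points.max(axis=0)
        self.offset = np.zeros(3)                 # updated when the object moves

def rebuild_top_level(objects):
    # Per-frame work touches one bound per object, not the geometry.
    return [(o, o.local_lo + o.offset, o.local_hi + o.offset) for o in objects]

def ray_hits_box(origin, direction, lo, hi):
    # Standard slab test against an axis-aligned bounding box.
    with np.errstate(divide="ignore", invalid="ignore"):
        t1 = (lo - origin) / direction
        t2 = (hi - origin) / direction
    tmin = np.minimum(t1, t2).max()
    tmax = np.maximum(t1, t2).min()
    return tmax >= max(tmin, 0.0)

# A unit cube moved to x = 5: the top level is refit, nothing is rebuilt.
cube = RigidObject([[0, 0, 0], [1, 1, 1]])
cube.offset = np.array([5.0, 0.0, 0.0])
top = rebuild_top_level([cube])
origin, direction = np.array([0.0, 0.5, 0.5]), np.array([1.0, 0.0, 0.0])
print([obj for obj, lo, hi in top if ray_hits_box(origin, direction, lo, hi)])
```

In a full tracer, a ray that hits an object's world-space bounds would then be transformed into that object's local frame, so the prebuilt per-object structure can be reused unchanged as the object moves.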
Abstract:
Continuous conveyors with a dynamic merge were developed with adaptable control equipment to differentiate these merges from competing Stop-and-Go merges. With a dynamic merge, the partial flows are manipulated by influencing speeds so that transport units need not stop at the merge. This leads to a more uniform flow of materials, which is qualitatively observable and verifiable in long-term measurements. But although this type of merge is visually mesmerizing, does it lead to advantages from the perspective of material flow technology? Our study with real data indicates that a dynamic merge yields a 24% increase in performance, but only for symmetric or nearly symmetric flows. This performance advantage decreases as the flows become less symmetric, approaching the throughput of traditional Stop-and-Go merges. Combined with a cost premium of approximately 10% for a continuous merge, owing to the additional technical components (belt conveyor, adjustable drive motors, software, etc.), this restricts their economical use.
Abstract:
Prior studies suggest that clients need to actively govern knowledge transfer to vendor staff in offshore outsourcing. In this paper, we analyze longitudinal data from four software maintenance offshore outsourcing projects to explore why governance may be needed for knowledge transfer and how governance and the individual learning of vendor engineers interact over time. Our results suggest that self-control is central to learning, but may be hampered by low levels of trust and expertise at the outset of projects. For these foundations to develop, clients initially need to exert high amounts of formal and clan controls to enforce learning activities against barriers to knowledge sharing. Once learning activities occur, trust and expertise increase and control portfolios may show a greater emphasis on self-control.
Abstract:
Capital cities that are not the economic centers of their nations – so-called secondary capital cities (SCCs) – tend to be overlooked in the field of political science. Consequently, there is a lack of research, and of resulting theory, describing their local economies and public policies. This paper analyzes how SCCs try to develop and position themselves through the formulation of locational policies. By linking three different theoretical strands – the Regional Innovation System (RIS) approach, the concept of locational policies, and the regime perspective – this paper constructs a framework for studying the economic and political dynamics in SCCs.