935 results for System modeling


Relevance:

30.00%

Publisher:

Abstract:

This report summarizes the work done for the Vehicle Powertrain Modeling and Design Problem Proposal portion of the EcoCAR3 proposal, as specified in the Request for Proposal from Argonne National Laboratory. The modeling exercises presented in the proposal showed the following. An average conventional vehicle powered by a combustion engine could not meet the energy consumption target when the engine was sized to meet the acceleration target, due to the relatively low thermal efficiency of the spark-ignition engine. A battery electric vehicle could not meet the required range target of 320 km while keeping the vehicle weight below the gross vehicle weight rating of 2000 kg; this was due to the low energy density of the batteries, which necessitated a large, heavy battery pack to provide enough energy to meet the range target. A series hybrid electric vehicle has the potential to meet the acceleration and energy consumption targets when the components are optimally sized. A parallel hybrid electric vehicle has fewer energy conversion losses than a series hybrid electric vehicle, which results in greater overall efficiency, lower energy consumption, and lower emissions. For EcoCAR3, Michigan Tech proposes to develop a plug-in parallel hybrid electric vehicle (PPHEV) powered by a small Diesel engine operating on B20 biodiesel fuel. This architecture was chosen over the other options for its compact design, lower cost, and ability to provide performance levels and energy efficiency that meet or exceed the design targets. While this powertrain configuration requires a more complex control system and strategy than the others, the student engineering team at Michigan Tech has significant recent experience with this architecture and is confident that it will perform well in the events planned for the EcoCAR3 competition.
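
The efficiency argument for the parallel architecture can be made concrete with a back-of-envelope calculation. The sketch below compares the tank-to-wheels efficiency of the two energy paths; all component efficiencies are hypothetical placeholders, not figures from the proposal:

```python
# Illustrative comparison of series vs. parallel hybrid energy paths.
# All efficiency values are hypothetical placeholders.

def series_path_efficiency(eta_engine=0.38, eta_gen=0.93,
                           eta_battery=0.95, eta_motor=0.92):
    # Series: engine -> generator -> battery -> motor -> wheels.
    return eta_engine * eta_gen * eta_battery * eta_motor

def parallel_path_efficiency(eta_engine=0.38, eta_trans=0.96):
    # Parallel: the engine drives the wheels mechanically,
    # avoiding the double electrical conversion.
    return eta_engine * eta_trans

series = series_path_efficiency()
parallel = parallel_path_efficiency()
print(f"series:   {series:.3f}")
print(f"parallel: {parallel:.3f}")  # higher -> fewer conversion losses
```

Even with optimistic electrical-component efficiencies, the double conversion in the series path costs several percentage points, which is the rationale stated above.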

Relevance:

30.00%

Publisher:

Abstract:

In the past few years, multimodal interaction has been gaining importance in virtual environments. Although multimodality renders interacting with an environment more natural and intuitive, the development cycle of such an application is often long and expensive. In our overall field of research, we investigate how model-based design can facilitate the development process by designing environments through the use of high-level diagrams. In this scope, we present ‘NiMMiT’, a graphical notation for expressing and evaluating multimodal user interaction; we elaborate on the NiMMiT primitives and demonstrate its use by means of a comprehensive example.

Relevance:

30.00%

Publisher:

Abstract:

Telescopic systems of structural members with clearance are found in many applications, e.g., mobile cranes, rack feeders, forklifts, and stacker cranes (see Figure 1). When operating these machines, undesirable vibrations may reduce performance and raise safety problems. This contribution therefore aims to reduce these harmful vibrations. For a better understanding, the dynamic behaviour of these constructions is analysed. The main interest is the overlapping area of each pair of sections of the systems described above (see markings in Figure 1), which is investigated by measurements and by computations. A test rig is constructed to determine the dynamic behaviour by measuring fundamental vibrations and higher-frequency oscillations, damping coefficients, special phenomena, and more. For an appropriate physical model, the governing boundary value problem is derived by applying Hamilton’s principle, and a classical discretisation procedure is used to generate a coupled system of nonlinear ordinary differential equations as the corresponding truncated mathematical model. On the basis of this model, a controller concept for preventing harmful vibrations is developed.
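
As a rough illustration of the kind of truncated model described above, the sketch below integrates a hypothetical two-degree-of-freedom system in which a cubic coupling term stands in for the clearance contact in the overlap region; the masses, stiffnesses, and damping values are invented for the example:

```python
# Hypothetical 2-DOF truncated model of two overlapping telescopic
# sections: linear stiffness/damping plus a cubic term standing in
# for the clearance contact. Parameters are illustrative only.
m1, m2 = 1.0, 0.5                # modal masses
k1, k2, k3 = 50.0, 30.0, 200.0   # linear and cubic stiffnesses
c1, c2 = 0.4, 0.2                # damping coefficients
dt, steps = 1e-3, 10000          # 10 s with semi-implicit Euler

x1, v1, x2, v2 = 0.05, 0.0, 0.0, 0.0   # initial deflection of section 1
for _ in range(steps):
    coupling = k3 * (x1 - x2) ** 3      # nonlinear overlap interaction
    a1 = (-k1 * x1 - c1 * v1 - coupling) / m1
    a2 = (-k2 * x2 - c2 * v2 + coupling) / m2
    v1 += a1 * dt; v2 += a2 * dt
    x1 += v1 * dt; x2 += v2 * dt

print(f"deflection after 10 s: {x1:.4f}")
```

With damping active, the free vibration decays; a controller concept like the one mentioned above would aim to accelerate exactly this decay.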

Relevance:

30.00%

Publisher:

Abstract:

PDP++ is a freely available, open source software package designed to support the development, simulation, and analysis of research-grade connectionist models of cognitive processes. It supports most popular parallel distributed processing paradigms and artificial neural network architectures, and it also provides an implementation of the LEABRA computational cognitive neuroscience framework. Models are typically constructed and examined using the PDP++ graphical user interface, but the system may also be extended through the incorporation of user-written C++ code. This article briefly reviews the features of PDP++, focusing on its utility for teaching cognitive modeling concepts and skills to university undergraduate and graduate students. An informal evaluation of the software as a pedagogical tool is provided, based on the author’s classroom experiences at three research universities and several conference-hosted tutorials.

Relevance:

30.00%

Publisher:

Abstract:

For broadcasting purposes, Mixed Reality, the combination of real and virtual scene content, has become ubiquitous. Mixed Reality recording still requires expensive studio setups and is often limited to simple color keying. We present a system for Mixed Reality applications which uses depth keying and provides three-dimensional mixing of real and artificial content. It features enhanced realism through automatic shadow computation, which we consider a core requirement for realism and a convincing visual perception, alongside the correct alignment of the two modalities and correct occlusion handling. Furthermore, we present a way to support the placement of virtual content in the scene. The core feature of our system is the incorporation of a Time-of-Flight (ToF) camera. This device delivers real-time depth images of the environment at a reasonable resolution and quality. The camera is used to build a static environment model, and it also allows correct handling of mutual occlusions between real and virtual content, shadow computation, and enhanced content planning. The presented system is inexpensive, compact, mobile, and flexible, and provides convenient calibration procedures. Chroma keying is replaced by depth keying, which is performed efficiently on the Graphics Processing Unit (GPU) using the environment model and the current ToF camera image. Automatic extraction and tracking of dynamic scene content is thereby performed, and this information is used for planning and alignment of virtual content. A further valuable feature is that depth maps of the mixed content are available in real time, which makes the approach suitable for future 3DTV productions. This paper gives an overview of the whole system, including camera calibration, environment model generation, real-time keying and mixing of virtual and real content, shadowing of virtual content, and dynamic object tracking for content planning.
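
The depth-keying step can be illustrated with a few lines of array code. This is a toy CPU version of the per-pixel test (the paper performs it on the GPU); the array names and the tolerance value are assumptions:

```python
import numpy as np

# Toy depth-keying pass: keep foreground pixels that are closer to
# the camera than the static environment model by more than a
# tolerance. Names and the tolerance are illustrative.
def depth_key(tof_depth, env_depth, tol=0.05):
    """Return a boolean foreground mask (True = dynamic content)."""
    return tof_depth < (env_depth - tol)

env = np.full((4, 4), 3.0)     # static background model at 3 m
frame = env.copy()
frame[1:3, 1:3] = 1.2          # an actor 1.2 m from the camera
mask = depth_key(frame, env)
print(mask.sum())              # 4 foreground pixels
```

Unlike chroma keying, this test needs no colored backdrop, and the same depth comparison directly resolves mutual occlusions between real and virtual content.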

Relevance:

30.00%

Publisher:

Abstract:

Mixed Reality (MR) aims to link virtual entities with the real world and has many applications, for example in the military and medical domains [JBL+00, NFB07]. In many MR systems, and more precisely in augmented scenes, the application needs to render the virtual part accurately at the right time. To achieve this, such systems acquire data related to the real world from a set of sensors before rendering the virtual entities. A suitable system architecture should minimize delays so as to keep the overall system delay (also called end-to-end latency) within the requirements for real-time performance. In this context, we propose a compositional modeling framework for MR software architectures in order to specify, simulate, and formally validate the time constraints of such systems. Our approach is based, first, on a functional decomposition of such systems into generic components. The resulting elements, as well as their typical interactions, give rise to generic representations in terms of timed automata. A whole system is then obtained as a composition of such components. To write specifications, a textual language named MIRELA (MIxed REality LAnguage) is proposed, along with the corresponding compilation tools. The generated output contains timed automata in UPPAAL format for simulation and verification of the time constraints. These automata may also be used to generate source code skeletons for an implementation on an MR platform. The approach is illustrated first on a small example. A realistic case study is also developed; it is modeled by several timed automata synchronizing through channels and including a large number of time constraints. Both systems have been simulated in UPPAAL and checked against the required behavioral properties.
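
Before any formal verification, the compositional view already supports a back-of-envelope latency check: summing worst-case component delays against the end-to-end budget. The component names and delay bounds below are hypothetical, not taken from the MIRELA case study:

```python
# Back-of-envelope check of the end-to-end latency constraint that
# the timed-automata model verifies formally. Component names and
# worst-case delay bounds (in ms) are hypothetical.
pipeline = [
    ("sensor_acquisition", 15),
    ("tracking",           10),
    ("scene_composition",  12),
    ("rendering",          16),
]

end_to_end_budget_ms = 60  # assumed real-time requirement

worst_case = sum(delay for _, delay in pipeline)
print(f"worst-case latency: {worst_case} ms")
assert worst_case <= end_to_end_budget_ms, "budget violated"
```

The formal model goes further than this sum: timed automata also capture synchronization, branching, and periodic behavior, which simple addition of bounds cannot.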

Relevance:

30.00%

Publisher:

Abstract:

Our knowledge about the lunar environment is based on a large volume of ground-based, remote, and in situ observations. These observations were conducted at different times and sampled different pieces of such a complex system as the surface-bound exosphere of the Moon. Numerical modeling is the tool that can link the results of these separate observations into a single picture. Once validated against previous measurements, models can be used for prediction and interpretation of future observation results. In this paper we present a kinetic model of the sodium exosphere of the Moon, as well as the results of its validation against a set of ground-based and remote observations. The unique characteristic of the model is that it takes the orbital motion of the Moon and the Earth into consideration and simulates both the exosphere and the sodium tail self-consistently. The extended computational domain covers the part of the Earth’s orbit at new Moon, which allows us to study the effect of Earth’s gravity on the lunar sodium tail. The model is fitted to a set of ground-based and remote observations by tuning the sodium source rate as well as the values of the sticking and accommodation coefficients. The best agreement of the model results with the observations is reached when all sodium atoms returning from the exosphere stick to the surface and the net sodium escape rate is about 5.3 × 10²² s⁻¹.
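
To illustrate the kinetic point of view, the toy Monte Carlo below estimates the fraction of sodium atoms launched from the surface with speeds above the lunar escape speed. The source temperature and sample size are illustrative assumptions; the paper's model is far more detailed (orbital motion, radiation pressure, sticking, and accommodation):

```python
import math
import random

# Toy Monte Carlo estimate of the ballistic escape fraction of
# exospheric sodium. T_SOURCE is an assumed suprathermal source
# temperature, not a value from the paper.
K_B = 1.380649e-23        # Boltzmann constant, J/K
M_NA = 23 * 1.6605e-27    # sodium atom mass, kg
V_ESC = 2380.0            # lunar escape speed, m/s
T_SOURCE = 3000.0         # assumed source temperature, K

def escape_fraction(n=20000, seed=1):
    sigma = math.sqrt(K_B * T_SOURCE / M_NA)
    rng = random.Random(seed)
    escaped = 0
    for _ in range(n):
        vx, vy, vz = (rng.gauss(0.0, sigma) for _ in range(3))
        if math.sqrt(vx * vx + vy * vy + vz * vz) > V_ESC:
            escaped += 1
    return escaped / n

print(f"escape fraction: {escape_fraction():.3f}")
```

Atoms below the escape speed fall back to the surface, where the sticking and accommodation coefficients mentioned above determine whether they are reemitted.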

Relevance:

30.00%

Publisher:

Abstract:

Both climate change and socio-economic development will significantly modify the supply and consumption of water in the future. Consequently, regional development has to face the aggravation of existing conflicts of interest or the emergence of new ones. In this context, transdisciplinary co-production of knowledge is considered an important means for coping with these challenges. Accordingly, the MontanAqua project aims at developing strategies for more sustainable water management in the study area of Crans-Montana-Sierre (Switzerland) in a transdisciplinary way. It strives to co-produce system, target, and transformation knowledge among researchers, policy makers, public administration, and civil society organizations. The research process consisted of the following steps. First, the current water situation in the study region was investigated: How much water is available? How much water is being used? How are decisions on water distribution and use taken? Second, participatory scenario workshops were conducted to identify the stakeholders’ visions of regional development. Third, the water situation in 2050 was simulated by modeling the evolution of water resources and water use and by reflecting on the institutional aspects. These steps laid the groundwork for jointly assessing the consequences of the stakeholders’ visions of development in view of scientific data on the governance, availability, and use of water in the region, as well as for developing the necessary transformation knowledge. Throughout these steps, the researchers collaborated with stakeholders in the support group RegiEau. The RegiEau group consists of key representatives of owners, managers, users, and pressure groups related to water and landscape: representatives of the communes (mostly the presidents), the canton (administration and parliament), water management associations, agriculture, viticulture, hydropower, tourism, and landscape protection.
The aim of the talk is to explore the potentials and constraints of scientific modeling of water availability and use within the process of transdisciplinary co-production of strategies for more sustainable water governance.

Relevance:

30.00%

Publisher:

Abstract:

Despite the broad range of collaboration tools already available, enterprises continue to look for ways to improve internal and external communication. Microblogging is one such new communication channel, with considerable potential to improve intra-firm transparency and knowledge sharing. However, the adoption of such social software presents certain challenges to enterprises. Based on the results of four focus group sessions, we identified several new constructs that play an important role in the microblogging adoption decision. Examples include privacy concerns, communication benefits, perceptions of the signal-to-noise ratio, and codification effort. Integrating these findings with common views on technology acceptance, we formulate a model to predict the adoption of a microblogging system in the workplace. Our findings serve as an important guideline for managers seeking to realize the potential of microblogging in their company.

Relevance:

30.00%

Publisher:

Abstract:

Introduction: Early warning of future hypoglycemic and hyperglycemic events can improve the safety of type 1 diabetes mellitus (T1DM) patients. The aim of this study is to design and evaluate a hypoglycemia/hyperglycemia early warning system (EWS) for T1DM patients under sensor-augmented pump (SAP) therapy. Methods: The EWS is based on the combination of data-driven online adaptive prediction models and a warning algorithm. Three modeling approaches were investigated: (i) autoregressive (ARX) models, (ii) autoregressive models with an output correction module (cARX), and (iii) recurrent neural network (RNN) models. The warning algorithm post-processes the models' outputs and issues alerts if upcoming hypoglycemic/hyperglycemic events are detected. Fusing the cARX and RNN models, whose prediction performances are complementary, yielded the hybrid autoregressive with output correction module/recurrent neural network (cARN)-based EWS. Results: The EWS was evaluated on 23 T1DM patients under SAP therapy. The ARX-based system achieved hypoglycemic (hyperglycemic) event prediction with median values of 100.0% (100.0%) accuracy, 10.0 (8.0) min detection time, and 0.7 (0.5) daily false alarms. The respective values for the cARX-based system were 100.0% (100.0%), 17.5 (14.8) min, and 1.5 (1.3), and for the RNN-based system 100.0% (92.0%), 8.4 (7.0) min, and 0.1 (0.2). The hybrid cARN-based EWS outperformed them with 100.0% (100.0%) prediction accuracy, detection 16.7 (14.7) min in advance, and 0.8 (0.8) daily false alarms. Conclusion: The combined use of cARX and RNN models for the EWS outperformed the single use of each model, achieving accurate and prompt event prediction with few false alarms, thus providing increased safety and comfort.
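
The core ARX idea can be sketched in a few lines: fit an autoregressive model to recent glucose samples, extrapolate over the prediction horizon, and raise a warning when the forecast crosses a threshold. The model order, threshold, horizon, and the synthetic CGM trace below are illustrative, not the study's settings:

```python
import numpy as np

HYPO_MG_DL = 70.0   # assumed hypoglycemia threshold
ORDER = 3           # assumed AR model order

def fit_ar(history, order=ORDER):
    # Least-squares fit of g[t] ~ a1*g[t-order] + ... + a_order*g[t-1].
    X = np.array([history[i:i + order] for i in range(len(history) - order)])
    y = np.array(history[order:])
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def predict_ahead(history, coeffs, steps):
    # Iterate the fitted recurrence to extrapolate `steps` samples ahead.
    buf = list(history[-ORDER:])
    for _ in range(steps):
        buf.append(float(np.dot(coeffs, buf[-ORDER:])))
    return buf[-1]

# Synthetic CGM trace falling ~3 mg/dL per 5-min sample.
trace = [160.0 - 3.0 * k for k in range(30)]
coeffs = fit_ar(trace)
g30 = predict_ahead(trace, coeffs, steps=6)   # 30-minute horizon
if g30 < HYPO_MG_DL:
    print(f"hypoglycemia warning: predicted {g30:.1f} mg/dL")
```

The study's cARX and RNN models, the online adaptation, and the false-alarm post-processing all refine this basic predict-then-threshold loop.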

Relevance:

30.00%

Publisher:

Abstract:

Aim: The landscape metaphor allows viewing corrective experiences (CEs) as pathways to a state with relatively lower 'tension' (a local minimum). However, such local minima are not easily accessible; they are obstructed by states with relatively high tension (local maxima), according to the landscape metaphor (Caspar & Berger, 2012). For example, an individual with spider phobia has to transiently tolerate high levels of tension during exposure therapy to access the local minimum of habituation. To allow for more specific therapeutic guidelines and empirically testable hypotheses, we advance the landscape metaphor to a scientific model based on motivational processes. Specifically, we conceptualize CEs as available but unusual trajectories (= pathways) through a motivational space. The dimensions of the motivational space are set up by basic motives such as the need for agency or attachment. Methods: Dynamic systems theory is used to model motivational states and trajectories using mathematical equations. Fortunately, these equations have easy-to-comprehend, intuitive visual representations similar to the landscape metaphor. Thus, trajectories that represent CEs are informative and action-guiding for both therapists and patients without knowledge of dynamic systems, while the mathematical underpinnings of the model allow researchers to deduce hypotheses for empirical testing. Results: First, the results of simulations of CEs during exposure therapy in anxiety disorders are presented and compared to empirical findings. Second, hypothetical CEs in an autonomy-attachment conflict are reported from a simulation study. Discussion: Preliminary clinical implications for the evocation of CEs are drawn after a critical discussion of the proposed model.
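
A minimal dynamical-systems version of the landscape metaphor can be written down directly. The sketch below moves a one-dimensional 'tension' state downhill on a double-well landscape; a transient external drive (standing in, say, for therapeutic support during exposure) is needed to cross the local maximum. The potential and all parameters are illustrative, not the authors' model:

```python
# Toy landscape model: a state x descends a tension landscape
# V(x) = x^4/4 - x^2/2 with minima at x = -1 (old state) and
# x = +1 (new state), separated by a barrier at x = 0.
def dV(x):
    return x**3 - x   # gradient of the double-well potential

def simulate(drive, t_drive=2.0, dt=0.01, t_end=20.0, x0=-1.0):
    # Overdamped gradient dynamics with a transient external drive.
    x, t = x0, 0.0
    while t < t_end:
        force = -dV(x) + (drive if t < t_drive else 0.0)
        x += force * dt
        t += dt
    return x

print(f"no drive:   x -> {simulate(0.0):+.2f}")  # stays in the old minimum
print(f"with drive: x -> {simulate(1.5):+.2f}")  # crosses to the new minimum
```

The trajectory with the drive is the formal analogue of a CE: an available but unusual path that transiently climbs in tension before settling into the lower minimum.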

Relevance:

30.00%

Publisher:

Abstract:

Approximate models (proxies) can be employed to reduce the computational cost of estimating uncertainty. The price to pay is that the approximations introduced by the proxy model can lead to a biased estimation. To avoid this problem and ensure a reliable uncertainty quantification, we propose to combine functional data analysis and machine learning to build error models that allow us to obtain an accurate prediction of the exact response without solving the exact model for all realizations. We build the relationship between proxy and exact model on a learning set of geostatistical realizations for which both the exact and approximate solvers are run. Functional principal component analysis (FPCA) is used to investigate the variability in the two sets of curves and to reduce the dimensionality of the problem while maximizing the retained information. Once obtained, the error model can be used to predict the exact response of any realization on the basis of the proxy response alone. This methodology is purpose-oriented, as the error model is constructed directly for the quantity of interest rather than for the state of the system. Moreover, the dimensionality reduction performed by FPCA allows a diagnostic of the quality of the error model, assessing the informativeness of the learning set and the fidelity of the proxy to the exact model. The possibility of predicting the exact response for any newly generated realization suggests that the methodology can be used effectively beyond uncertainty quantification, in particular for Bayesian inference and optimization.
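
The overall workflow can be sketched on synthetic data: run both solvers on a learning set, reduce the proxy curves to principal-component scores, fit a map from scores to exact responses, and predict the exact curve of a new realization from its proxy alone. The 'solvers' below and the one-component linear fit are illustrative stand-ins for the paper's FPCA and machine-learning steps:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)

def exact_response(a):   # hypothetical "exact solver"
    return a * np.sin(2 * np.pi * t)

def proxy_response(a):   # biased, cheaper "proxy solver"
    return 0.8 * a * np.sin(2 * np.pi * t) + 0.1 * a

# Learning set: run both solvers on a few realizations.
params = rng.uniform(0.5, 2.0, size=20)
X = np.array([proxy_response(a) for a in params])   # proxy curves
Y = np.array([exact_response(a) for a in params])   # exact curves

# Reduce dimensionality (one principal component here) and fit a
# linear map from proxy scores to exact curves.
Xc = X - X.mean(axis=0)
_, _, vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ vt[0]                                 # 1-D proxy scores
A = np.vstack([scores, np.ones_like(scores)]).T
coef, *_ = np.linalg.lstsq(A, Y, rcond=None)        # per-time-point fit

# Predict the exact response of a new realization from its proxy alone.
a_new = 1.3
s_new = (proxy_response(a_new) - X.mean(axis=0)) @ vt[0]
y_pred = np.array([s_new, 1.0]) @ coef
err = np.max(np.abs(y_pred - exact_response(a_new)))
print(f"max prediction error: {err:.2e}")
```

Because the error model targets the output curve itself, the quality check is direct: the residual on held-out realizations measures both the informativeness of the learning set and the fidelity of the proxy.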