Abstract:
Research on the perception of temporal order uses either temporal-order judgment (TOJ) tasks or synchrony judgment (SJ) tasks, in both of which two stimuli are presented with some temporal delay and observers must judge their temporal relation. Results generally differ across tasks, raising concerns about whether they measure the same processes. We present a model including sensory and decisional parameters that places these tasks in a common framework and allows studying their implications for observed performance. TOJ tasks imply specific decisional components that explain the discrepancy between results obtained with TOJ and SJ tasks. The model is also tested against published data on audiovisual temporal-order judgments, and the fit is satisfactory, although model parameters are more accurately estimated with SJ tasks. Measures of latent point of subjective simultaneity and latent sensitivity are defined that are invariant across tasks, by isolating the sensory parameters governing observed performance, whereas decisional parameters vary across tasks and account for the observed differences between them. Our analyses concur with other evidence advising against the use of TOJ tasks in research on the perception of temporal order.
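The sensory-plus-decisional framework can be sketched numerically. As a hedged illustration (the parameter names and values below are assumptions, not the model's actual form or estimates), suppose the latency difference between the two stimuli is Gaussian: a TOJ response then follows a cumulative Gaussian in SOA, while an SJ response adds a decisional window of half-width delta around perceived synchrony:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def p_first(soa, pss=0.0, sigma=50.0):
    """TOJ task: probability of reporting the first stimulus as first,
    under a Gaussian latency-difference model (illustrative parameters, ms)."""
    return norm_cdf((soa - pss) / sigma)

def p_sync(soa, pss=0.0, sigma=50.0, delta=75.0):
    """SJ task: probability of a 'synchronous' judgment when the perceived
    latency difference falls inside a decisional window of half-width delta."""
    z = (soa - pss) / sigma
    return norm_cdf(delta / sigma - z) - norm_cdf(-delta / sigma - z)
```

The TOJ curve involves only an order criterion, whereas the SJ curve introduces the extra decisional parameter delta; task-specific decisional components of this kind are what reconcile the two tasks in a common framework.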
Abstract:
Trials in a temporal two-interval forced-choice discrimination experiment consist of two sequential intervals presenting stimuli that differ from one another in magnitude along some continuum. The observer must report in which interval the stimulus had the larger magnitude. The standard difference model from signal detection theory posits that the order of presentation should not affect the results of the comparison, a property known as the balance condition (J.-C. Falmagne, 1985, in Elements of Psychophysical Theory). But empirical data prove otherwise and consistently reveal what Fechner (1860/1966, in Elements of Psychophysics) called time-order errors, whereby the magnitude of the stimulus presented in one of the intervals is systematically underestimated relative to the other. Here we discuss sensory factors (temporary desensitization) and procedural glitches (short interstimulus or intertrial intervals, and response bias) that might explain the time-order error, and we derive a formal model indicating how these factors make observed performance vary with presentation order despite a single underlying mechanism. Experimental results are also presented that illustrate the conventional failure of the balance condition and test the hypothesis that time-order errors result from contamination by the factors included in the model.
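A minimal sketch of how one such contaminating factor can break the balance condition (the attenuation factor gamma is an illustrative assumption, not the paper's fitted parameter): if temporary desensitization scales the percept of the second stimulus by gamma < 1, accuracy depends on presentation order even though a single difference mechanism is at work.

```python
import math

def norm_cdf(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def p_second_larger(m1, m2, gamma=1.0, sigma=1.0):
    """Probability of judging the second interval's stimulus as larger, when
    the second percept is attenuated by gamma (temporary desensitization).
    gamma = 1 recovers the standard difference model and the balance condition."""
    return norm_cdf((gamma * m2 - m1) / sigma)

# With gamma = 1 the comparison is balanced; with gamma < 1 accuracy depends
# on whether the larger stimulus comes first or second (a time-order error).
balanced = p_second_larger(5.0, 6.0) + p_second_larger(6.0, 5.0)   # sums to 1
acc_larger_second = p_second_larger(5.0, 6.0, gamma=0.9)
acc_larger_first = 1.0 - p_second_larger(6.0, 5.0, gamma=0.9)
```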
Abstract:
In this dissertation, we study the behavior of exciton-polariton quasiparticles in semiconductor microcavities under sourceless and lossless conditions.
First, we simplify the original model by removing the photon dispersion term, effectively turning the system of PDEs into a system of ODEs,
and investigate the behavior of the resulting system, including the equilibrium points and the wave functions of the excitons and the photons.
Second, we add the dispersion term for the excitons to the original model and prove that the band of discontinuous solitons now becomes a band of dark solitons.
Third, we apply the Strang splitting method to our system of PDEs and prove first-order and second-order error bounds in the $H^1$ norm and the $L_2$ norm, respectively.
Using this numerical result, we analyze the stability of the steady-state bright soliton solution. This solution rotates about the $x$-axis as time progresses;
the perturbed soliton also rotates about the $x$-axis and tracks the exact solution closely in amplitude but lags behind it. Our numerical results show orbital
stability but no $L_2$ stability.
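The Strang-splitting idea can be sketched on the cubic nonlinear Schrödinger equation $i u_t = -u_{xx} + |u|^2 u$, used here as a simple stand-in for the polariton system (an illustration under that assumption, not the dissertation's actual equations). Both substeps are solved exactly, and each preserves the discrete $L_2$ norm:

```python
import numpy as np

def strang_step(u, dt, k2):
    """One Strang step for i u_t = -u_xx + |u|^2 u:
    half nonlinear flow, full linear flow (diagonal in Fourier space),
    half nonlinear flow. Both flows are solved exactly."""
    u = u * np.exp(-1j * 0.5 * dt * np.abs(u) ** 2)          # nonlinear half-step
    u = np.fft.ifft(np.exp(-1j * dt * k2) * np.fft.fft(u))   # linear full step
    u = u * np.exp(-1j * 0.5 * dt * np.abs(u) ** 2)          # nonlinear half-step
    return u

n, L = 256, 2.0 * np.pi
x = np.linspace(0.0, L, n, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)        # angular wavenumbers
u = np.exp(1j * x) / (1.0 + 0.5 * np.sin(x) ** 2)   # arbitrary smooth initial data
mass0 = np.sum(np.abs(u) ** 2)                      # discrete L2 norm squared
for _ in range(200):
    u = strang_step(u, 1e-3, k ** 2)
mass = np.sum(np.abs(u) ** 2)                       # conserved up to roundoff
```

The nonlinear substep is a pure phase rotation (it leaves |u| unchanged pointwise) and the linear substep is unitary in Fourier space, which is why the splitting conserves mass exactly even though each full step carries a local splitting error.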
Abstract:
This paper examines the effects of higher-order risk attitudes and statistical moments on the optimal allocation of risky assets within the standard portfolio choice model. We derive expressions for the optimal proportion of wealth invested in the risky asset and show that they are functions of the third- and fourth-order moments of portfolio returns as well as of the investor's risk preferences of prudence and temperance. We illustrate the relative importance that introducing these higher-order effects has on the decisions of expected-utility maximizers using data for the US.
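The portfolio problem underlying the paper can be sketched numerically (a hedged illustration with CRRA utility, a synthetic return sample, and a grid search, not the paper's closed-form expressions or US data):

```python
import numpy as np

def optimal_risky_share(returns, rf=0.01, gamma=3.0):
    """Grid-search the share a of wealth in the risky asset that maximizes
    E[W^(1-gamma)/(1-gamma)] with W = 1 + rf + a*(r - rf) (CRRA utility).
    The sample's higher moments (skewness, kurtosis) shift the optimum, which
    is the channel through which prudence and temperance effects operate."""
    grid = np.linspace(0.0, 1.0, 1001)
    W = 1.0 + rf + np.outer(grid, returns - rf)          # wealth per (share, draw)
    eu = (W ** (1.0 - gamma) / (1.0 - gamma)).mean(axis=1)
    return grid[np.argmax(eu)]

rng = np.random.default_rng(0)
r = rng.normal(0.06, 0.15, 20000)            # hypothetical risky-return sample
a_star = optimal_risky_share(r)              # moderate risk aversion
a_timid = optimal_risky_share(r, gamma=6.0)  # higher risk aversion, smaller share
```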
Abstract:
Adjoint methods have proven to be an efficient way of calculating the gradient of an objective function with respect to a shape parameter for optimisation, with a computational cost nearly independent of the number of design variables [1]. The approach in this paper links the adjoint surface sensitivities (the gradient of the objective function with respect to surface movement) with the parametric design velocities (the movement of the surface due to a CAD parameter perturbation) in order to compute the gradient of the objective function with respect to the CAD variables.
For a successful implementation of shape optimisation strategies in practical industrial cases, the choice of design variables, or the parameterisation scheme used for the model to be optimised, plays a vital role. Where the goal is to base the optimisation on a CAD model, the choices are to use a NURBS geometry generated from CAD modelling software, where the positions of the NURBS control points are the optimisation variables [2], or to use the feature-based CAD model with all of its construction history, preserving the design intent [3]. The main advantage of using the feature-based model is that the optimised model produced can be used directly for downstream applications, including manufacturing and process planning.
This paper presents an approach to optimisation based on the feature-based CAD model, which uses the CAD parameters defining the features in the model geometry as the design variables. In order to capture the CAD surface movement with respect to a change in a design variable, the "Parametric Design Velocity" is calculated, defined as the movement of the CAD model boundary in the normal direction due to a change in the parameter value.
The approach presented here for calculating the design velocities represents an advance, in capability and robustness, over that described by Robinson et al. [3]. The process can be easily integrated into most industrial optimisation workflows and is immune to the topology and labelling issues highlighted by other CAD-based optimisation processes. It considers every continuous ("real value") parameter type as an optimisation variable, and it can be adapted to work with any CAD modelling software, as long as it has an API which provides access to the values of the parameters controlling the model shape and allows the model geometry to be exported. To calculate the movement of the boundary, the methodology employs finite differences on the shape of the 3D CAD models before and after the parameter perturbation. The implementation procedure includes calculating the geometric movement along the normal direction between two discrete representations of the original and perturbed geometries, respectively. Parametric design velocities can then be directly linked with adjoint surface sensitivities to obtain the gradients used in a gradient-based optimisation algorithm.
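The finite-difference design-velocity computation can be sketched as follows (function and variable names are illustrative, not those of the actual implementation): sample matched points on the original and perturbed surfaces, project the displacement onto the surface normals, and divide by the parameter perturbation.

```python
import numpy as np

def design_velocity(p0, p1, normals, dparam):
    """Finite-difference parametric design velocity: normal component of the
    boundary displacement per unit change of the CAD parameter."""
    return np.einsum('ij,ij->i', p1 - p0, normals) / dparam

# Illustrative check on a sphere whose radius r is the CAD parameter:
# perturbing r by dr moves each surface point by dr along the outward
# normal, so the design velocity is 1 everywhere.
rng = np.random.default_rng(1)
dirs = rng.normal(size=(100, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)  # unit outward normals
r, dr = 2.0, 1e-3
v = design_velocity(r * dirs, (r + dr) * dirs, dirs, dr)
```

In practice the two geometries are independent tessellations rather than matched point sets, so the displacement is measured with closest-point distances between the discrete surfaces; the projection-and-divide step is the same.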
A flow optimisation problem is presented, in which the power dissipation of the flow in an automotive air duct is to be reduced by changing the parameters of the CAD geometry created in CATIA V5. The flow sensitivities are computed with the continuous adjoint method for laminar and turbulent flow [4] and are combined with the parametric design velocities to compute the cost-function gradients. A line-search algorithm is then used to update the design variables and proceed with the optimisation process.
Abstract:
Otto-von-Guericke-Universität Magdeburg, Fakultät für Mathematik, Dissertation, 2016
Abstract:
In this paper, the temperature of a pilot-scale batch reaction system is modeled for the design of a controller based on the explicit model predictive control (EMPC) strategy. Several mathematical models are developed from experimental data to describe the system behavior. The simplest, yet reliable, model obtained is a (1,1,1)-order ARX polynomial model, for which the EMPC controller has been designed. The resulting controller has reduced mathematical complexity and, given the successful simulation results, will be used directly on the real control system in the next stage of the experimental framework.
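A hedged sketch of how such a (1,1,1)-order ARX model, y[k] + a1·y[k-1] = b1·u[k-1] + e[k], could be identified from input-output data by least squares (the coefficients and signals below are synthetic, not the reactor's):

```python
import numpy as np

def fit_arx_111(y, u):
    """Least-squares fit of a (1,1,1)-order ARX model:
       y[k] = -a1*y[k-1] + b1*u[k-1] + e[k].
    Returns the estimated (a1, b1)."""
    phi = np.column_stack([-y[:-1], u[:-1]])       # regressor matrix
    theta, *_ = np.linalg.lstsq(phi, y[1:], rcond=None)
    return theta[0], theta[1]

# Simulate data from known coefficients and recover them (noise-free case).
a1_true, b1_true = -0.85, 0.4       # stable discrete pole at 0.85
rng = np.random.default_rng(2)
u = rng.standard_normal(500)        # excitation signal
y = np.zeros(500)
for k in range(1, 500):
    y[k] = -a1_true * y[k - 1] + b1_true * u[k - 1]
a1, b1 = fit_arx_111(y, u)
```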
Abstract:
The ability to predict the properties of magnetic materials in a device is essential to ensuring the correct operation and optimization of the design as well as the device behavior over a wide range of input frequencies. Typically, the development and simulation of wide-bandwidth models require detailed, physics-based simulations that utilize significant computational resources. Balancing the trade-offs between model computational overhead and accuracy can be cumbersome, especially when the nonlinear effects of saturation and hysteresis are included in the model. This study focuses on the development of a system for analyzing magnetic devices in cases where model accuracy and computational intensity must be carefully and easily balanced by the engineer. A method for adjusting model complexity and the corresponding level of detail while incorporating the nonlinear effects of hysteresis is presented that builds upon recent work in loss analysis and magnetic equivalent circuit (MEC) modeling. The approach utilizes MEC models in conjunction with linearization and model-order reduction techniques to process magnetic devices based on geometry and core type. The validity of steady-state permeability approximations is also discussed.
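The linearization-plus-reduction step can be sketched with a generic Galerkin projection (an illustration, not the paper's specific MEC reduction): when the linearized state dynamics actually evolve in a low-dimensional subspace, projecting onto an orthonormal basis of that subspace preserves the input-output behavior.

```python
import numpy as np

def reduce_model(A, B, C, V):
    """Galerkin projection x ≈ V @ xr onto an orthonormal basis V,
    giving the reduced matrices (Vᵀ A V, Vᵀ B, C V)."""
    return V.T @ A @ V, V.T @ B, C @ V

def step_response(A, B, C, dt=0.01, n=200):
    """Forward-Euler step response to a unit input."""
    x = np.zeros((A.shape[0], 1))
    out = []
    for _ in range(n):
        x = x + dt * (A @ x + B)
        out.append((C @ x).item())
    return np.array(out)

# Illustrative check: an 8-state system whose dynamics live in a 2-D subspace
# (a stand-in for a linearized magnetic-equivalent-circuit model); projecting
# onto that subspace reproduces the output exactly.
rng = np.random.default_rng(3)
V, _ = np.linalg.qr(rng.normal(size=(8, 2)))   # orthonormal 2-D basis
A2 = np.array([[-1.0, 0.5], [0.0, -2.0]])      # stable reduced dynamics
A = V @ A2 @ V.T
B = V @ np.array([[1.0], [0.3]])
C = np.array([[1.0, 0.2]]) @ V.T
Ar, Br, Cr = reduce_model(A, B, C, V)
y_full = step_response(A, B, C)
y_red = step_response(Ar, Br, Cr)
```

In realistic cases the dynamics are only approximately low-rank, so the basis V is chosen to capture the dominant behavior (e.g. from snapshots or balancing) and the reduced output matches the full one up to a controllable truncation error.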
Abstract:
Virtual-build-to-order (VBTO) is a form of order fulfilment system in which the producer has the ability to search across the entire pipeline of finished stock, products in production and those in the production plan, in order to find the best product for a customer. It is a system design that is attractive to Mass Customizers, such as those in the automotive sector, whose manufacturing lead time exceeds their customers' tolerable waiting times, and for whom the holding of partly-finished stocks at a fixed decoupling point is unattractive or unworkable. This paper describes and develops the operational concepts that underpin VBTO, in particular the concepts of reconfiguration flexibility and customer aversion to waiting. Reconfiguration is the process of changing a product's specification at any point along the order fulfilment pipeline. The extent to which an order fulfilment system is flexible or inflexible reveals itself in the reconfiguration cost curve, of which there are four basic types. The operational features of the generic VBTO system are described and simulation is used to study its behaviour and performance. The concepts of reconfiguration flexibility and floating decoupling point are introduced and discussed.
Abstract:
Mental stress is known to disrupt the execution of motor performance and can lead to decrements in the quality of performance; however, individuals show significant differences in how fast and how well they can perform a skilled task, according to how well they manage stress and emotion. The purpose of this study was to advance our understanding of how the brain modulates emotional reactivity under different motivational states to achieve differential performance in a target-shooting task that requires precise visuomotor coordination. In order to study the interactions between emotion-regulatory brain areas (i.e. the ventral striatum, amygdala, and prefrontal cortex) and the autonomic nervous system, reward and punishment interventions were employed, and the resulting behavioral and physiological responses were contrasted to observe changes in shooting performance (i.e. shooting accuracy and stability of aim) and neuro-cognitive processes (i.e. cognitive load and reserve) during the shooting task. Thirty-five participants, aged 18 to 38 years, from the Reserve Officers' Training Corps (ROTC) at the University of Maryland were recruited to take 30 shots at a bullseye target in three different experimental conditions. In the reward condition, $1 was added to their total balance for every 10-point shot. In the punishment condition, $1 was deducted from their total balance if they did not hit the 10-point area. In the neutral condition, no money was added to or deducted from their total balance. In the reward condition, which was reportedly the most enjoyable and least stressful of the conditions, heart rate variability was found to be positively related to shooting scores, inversely related to variability in shooting performance, and positively related to alpha power (i.e. less activation) in the left temporal region. In the punishment (and most stressful) condition, an increase in sympathetic response (i.e. increased LF/HF ratio) was positively related to jerking movements as well as to variability of shot placement on the target. This, coupled with error-monitoring activity in the anterior cingulate cortex, suggests that evaluation of self-efficacy might be driving arousal regulation, thus affecting shooting performance. Better performers showed variable, increasing high-alpha power in the temporal region during the aiming period leading up to the shot, which could indicate an adaptive strategy of engagement. They also showed lower coherence during hit shots than missed shots, coupled with reduced jerking movements and better precision and accuracy. Frontal asymmetry measures revealed a possible influence of the prefrontal lobe in driving this effect in the reward and neutral conditions. The possible interactions, the reasons behind these findings, and their implications are discussed.
Abstract:
Three-dimensional direct numerical simulations (DNS) have been performed on a finite-size hemisphere-cylinder model at angle of attack AoA = 20° and Reynolds numbers Re = 350 and 1000. Under these conditions, massive separation exists on the nose and lee-side of the cylinder, and at both Reynolds numbers the flow is found to be unsteady. Proper orthogonal decomposition (POD) and dynamic mode decomposition (DMD) are employed in order to study the primary instability that triggers unsteadiness at Re = 350. The dominant coherent flow structures identified at the lower Reynolds number are also found to exist at Re = 1000; the question is then posed whether the flow oscillations and structures found at the two Reynolds numbers are related. POD and DMD computations are performed using different subdomains of the DNS computational domain. Besides reducing the computational cost of the analyses, this also makes it possible to isolate spatially localized oscillatory structures from other, more energetic structures present in the flow. It is found that POD and DMD are in general sensitive to domain truncation, and uninformed choices of the subdomain may lead to inconsistent results. Analyses at Re = 350 show that the primary instability is related to the counter-rotating vortex pair forming the three-dimensional afterbody wake, and is characterized by the frequency St ≈ 0.11, in line with results in the literature. At Re = 1000, vortex shedding is present in the wake, with an associated broadband spectrum centered around the same frequency. The horn/leeward vortices at the cylinder lee-side, upstream of the cylinder base, also present finite-amplitude oscillations at the higher Reynolds number. The spatial structure of these oscillations, described by the POD modes, is easily differentiated from that of the wake oscillations.
Additionally, the frequency spectra associated with the lee-side vortices present well-defined peaks, corresponding to St ≈ 0.11 and its first few harmonics, as opposed to the broadband spectrum found in the wake.
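POD itself reduces to a singular value decomposition of the snapshot matrix. A minimal sketch (synthetic traveling-wave snapshots, not the DNS data):

```python
import numpy as np

def pod_modes(snapshots):
    """POD via SVD of the snapshot matrix (columns are flow snapshots).
    Returns the spatial modes and the singular values (modal energies)."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U, s

# Snapshots of a traveling wave sin(x - t): since
# sin(x - t) = sin(x)cos(t) - cos(x)sin(t), exactly two POD modes are active.
x = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)
t = np.linspace(0.0, 10.0, 40)
X = np.array([np.sin(x - ti) for ti in t]).T   # shape (n_space, n_snapshots)
modes, s = pod_modes(X)
```

Restricting the snapshot columns to a subdomain of the grid before the SVD is exactly the domain-truncation step discussed above: it isolates localized structures, but changes which modes dominate, which is why the decomposition is sensitive to the choice of subdomain.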
Abstract:
Over the past 30 years, unhealthy diets and lifestyles have increased the incidence of noncommunicable diseases and have driven the spread, across the world's population, of syndromes such as obesity and other metabolic disorders, which have reached pandemic proportions. In response to this scenario, the food industry has tackled these challenges with different approaches, such as the reformulation of foods, fortification of foods, substitution of ingredients and supplements with healthier alternatives, reduced animal protein, reduced fats, and improved fibre applications. Although the technological quality of these emerging food products is known, the impact they have on the gut microbiota of consumers remains unclear. The work in the present PhD thesis studied different foods in which industrial, market-standard components were replaced with novel, green-oriented, sustainable ingredients. This thesis includes eight representative case studies of the most common substitutions, additions, and fortifications in dairy, meat, and vegetable products.
The products studied were: (i) a set of breads fortified with polyphenol-rich olive fiber, to replace synthetic antioxidants and preservatives; (ii) a set of gluten-free breads fortified with algae powder, to increase the protein content of standard gluten-free products; (iii) different formulations of salami in which nitrates were replaced by ascorbic acid, vegetal-extract antioxidants, and nitrate-reducing starter cultures; (iv) a chocolate fiber plus D-limonene food supplement, as a novel prebiotic formula; (v) hemp seed bran and its Alcalase hydrolysate, to be introduced as a supplement; (vi) milk with and without lactose, to evaluate the differing impact on the colonic microbiota of healthy and lactose-intolerant subjects; (vii) lactose-free whey, fermented and/or with added probiotics, to be introduced as an alternative beverage, exploring its impact on the colonic microbiota of healthy and lactose-intolerant subjects; and (viii) antibiotics, to assess whether maternal amoxicillin affects the colon microbiota of piglets.