853 results for Reduced physical models


Relevance: 90.00%

Abstract:

Schizophrenia is still associated with poor outcome, which is mainly related to negative symptoms, reduced physical activity and low quality of life. Physical activity can be objectively measured without distress using wrist actigraphy. Activity levels during the wake periods of the day have proven informative on psychopathology and antipsychotic medication. Several studies have shown prominent negative symptoms to be associated with reduced activity levels, with the strongest correlations in chronic patients; in particular, the avolition score correlates with reduced activity levels. Moreover, activity levels differ between DSM-IV schizophrenia spectrum disorders and subtypes, as well as between patients treated with olanzapine and those treated with risperidone. The longitudinal course of activity levels during a psychotic episode shows considerable variance between subjects: patients with low activity levels at baseline experience an amelioration of negative symptoms, whereas patients with high activity levels at baseline have stably low negative syndrome scores. Between psychotic episodes, less variance is observed. Actigraphy is easily applied in schizophrenia and allows large amounts of cross-sectional or longitudinal data to be collected. With larger numbers of subjects in controlled trials, continuous recording of activity would foster the detection of distinct outcome trajectories, which may prove to be useful groups at which to target interventions. In clinical trials, activity monitoring may supplement and validate measures of the negative syndrome and its avolition factor, or serve as an outcome marker for physical activity, which is important for metabolic issues and quality of life.

Relevance: 90.00%

Abstract:

Some basic ideas are presented for the construction of robust, computationally efficient reduced-order models amenable to use in industrial environments, in combination with somewhat rough computational fluid dynamics solvers. These ideas result from a critical review of the basic principles of proper orthogonal decomposition-based reduced-order modeling of both steady and unsteady fluid flows. In particular, the extent to which some artifacts of the computational fluid dynamics solvers can be ignored is addressed, which opens up the possibility of obtaining quite flexible reduced-order models. The methods are illustrated with the steady aerodynamic flow around the horizontal tail plane of a commercial aircraft in transonic conditions, and with the unsteady lid-driven cavity problem. In both cases, the approximations are fairly good and reduce the computational cost by a significant factor.
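The core construction behind such POD-based reduced-order models can be sketched in a few lines: collect snapshots as columns of a matrix, extract an orthonormal basis from its SVD, and project onto the leading modes. The snapshot data below are synthetic placeholders, not the flows from the paper.

```python
import numpy as np

# Synthetic snapshot matrix: each column is one "steady flow field"
# sampled at 200 grid points, for 20 parameter values (hypothetical data).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
params = np.linspace(0.5, 2.0, 20)
snapshots = np.array([np.sin(p * np.pi * x) + 0.1 * p * x for p in params]).T

# POD: thin SVD of the snapshot matrix; left singular vectors are the modes.
modes, sing_vals, _ = np.linalg.svd(snapshots, full_matrices=False)

# Truncate to the r modes capturing 99.9% of the snapshot energy.
energy = np.cumsum(sing_vals**2) / np.sum(sing_vals**2)
r = int(np.searchsorted(energy, 0.999)) + 1

# Reduced-order reconstruction: project one snapshot onto the first r modes.
basis = modes[:, :r]
amplitudes = basis.T @ snapshots[:, 5]
reconstruction = basis @ amplitudes

rel_err = np.linalg.norm(reconstruction - snapshots[:, 5]) / np.linalg.norm(snapshots[:, 5])
print(r, rel_err)
```

The cost reduction comes from the truncation: subsequent work happens in the r-dimensional coefficient space rather than on the full mesh.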

Relevance: 90.00%

Abstract:

A method is presented to construct computationally efficient reduced-order models (ROMs) of three-dimensional aerodynamic flows around commercial aircraft components. The method is based on the proper orthogonal decomposition (POD) of a set of steady snapshots, which are calculated using an industrial solver based on the Reynolds-averaged Navier-Stokes (RANS) equations. The POD-mode amplitudes are calculated by minimizing a residual defined from the Euler equations, even though the snapshots themselves are calculated from viscous equations. This makes the ROM independent of the peculiarities of the solver used to calculate the snapshots. In addition, both the POD modes and the residual are calculated using computational mesh points concentrated in a close vicinity of the aircraft, which are far fewer than the total number of mesh points. Despite these simplifications, the method provides quite good approximations of the flow variable distributions in the whole computational domain, including the boundary layer attached to the aircraft surface and the wake. Thus the method is both robust and computationally efficient, which is verified for the aerodynamic flow around a horizontal tail plane in the transonic range 0.4 ≤ Mach number ≤ 0.8, −3° ≤ angle of attack ≤ 3°.
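The amplitude-calculation step described here, fitting mode amplitudes by minimizing a residual from a different model than the one that produced the snapshots, reduces in the linear case to a small least-squares problem. The operator `L` and data below are invented linear stand-ins for the nonlinear Euler residual, purely for illustration.

```python
import numpy as np

# Sketch: POD-mode amplitudes obtained by residual minimisation rather than
# projection. The "residual operator" L is a made-up linear stand-in for the
# (nonlinear) Euler residual; with a linear operator the minimisation
# ||L Phi a - f||_2 over a becomes a small least-squares problem.
rng = np.random.default_rng(1)
n, r = 120, 4

Phi, _ = np.linalg.qr(rng.standard_normal((n, r)))  # orthonormal POD basis
L = np.eye(n) + 0.05 * rng.standard_normal((n, n))  # stand-in residual operator

# Synthesise a "truth" lying in the reduced space, plus small noise.
a_true = np.array([1.0, -0.5, 2.0, 0.3])
f = L @ (Phi @ a_true) + 1e-6 * rng.standard_normal(n)

# Fit the amplitudes: only an r-dimensional unknown, however large n is.
a_fit, *_ = np.linalg.lstsq(L @ Phi, f, rcond=None)
u_rom = Phi @ a_fit
print(a_fit)
```

Because the fit only sees the operator through the product `L @ Phi`, swapping in a residual from a different solver (as the paper does with Euler vs. RANS) changes the matrix but not the procedure.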

Relevance: 90.00%

Abstract:

Overtopping is defined as the transport of a significant volume of water over the crest of a breakwater into the sheltered area. It is therefore the phenomenon that generally determines a breakwater's crest level, which depends on the admissible overtopping volume in view of the structure's functional and structural constraints. In general, the overtopping rate a breakwater can tolerate from the standpoint of its structural integrity is much higher than the rate permissible from the standpoint of its functionality, while designing for a very low or zero overtopping probability would conflict with other considerations, such as aesthetics or economics.

Wave overtopping of maritime structures can be studied in several ways. The most common are physical model tests and empirical or semi-empirical formulations; less common are prototype instrumentation, artificial neural networks and numerical models. Physical model tests are the most accurate and reliable tool for the specific study of each case, given the complexity of the overtopping process and the many physical phenomena and parameters involved. They reveal the hydraulic and structural behaviour of the breakwater, identify possible design flaws before construction, and allow alternatives to be evaluated, with consequent savings in construction costs through improvements to the initial design, although they carry margins of error associated with scale and model effects. Empirical or semi-empirical formulations, in turn, are valid only for the range of environmental conditions and structural typologies reproduced in the underlying tests.

The objective of this doctoral thesis is to compare the overtopping formulations developed by different authors for several breakwater typologies. Existing formulations for estimating the overtopping rate on sloping and vertical breakwaters were first compiled and analysed, and then compared against the results of a series of tests performed at the Centre for Port and Coastal Studies of CEDEX. The neural network tool NN-OVERTOPPING2, developed in the European overtopping project CLASH ("Crest Level Assessment of Coastal Structures by Full Scale Monitoring, Neural Network Prediction and Hazard Analysis on Permissible Wave Overtopping"), was also applied to the selected sloping-breakwater tests, allowing the measured overtopping rates to be contrasted with this neural-network-based method. The influence of wind on overtopping was then analysed through reduced-scale physical model tests of the vertical section of the Levante Breakwater (Málaga), generating waves with and without wind. Finally, a critical analysis of the comparison of each formulation against the selected tests leads to the conclusions of this doctoral thesis.
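The semi-empirical formulations compared in work of this kind typically take an exponential form relating the dimensionless discharge to the relative crest freeboard. A minimal sketch, with purely illustrative coefficients that are not those of any specific formula in the thesis:

```python
import math

def overtopping_rate(h_m0, r_c, a=0.05, b=2.78, g=9.81):
    """Mean overtopping discharge q [m^3/s per m of crest] from a generic
    exponential formula  q / sqrt(g*Hm0^3) = a * exp(-b * Rc/Hm0).
    h_m0: significant wave height [m]; r_c: crest freeboard [m].
    The coefficients a, b are illustrative placeholders; design formulas
    tabulate them per structure type (sloping, vertical, armoured...)."""
    return a * math.exp(-b * r_c / h_m0) * math.sqrt(g * h_m0**3)

# Raising the crest freeboard reduces overtopping exponentially.
q_low = overtopping_rate(h_m0=3.0, r_c=2.0)
q_high = overtopping_rate(h_m0=3.0, r_c=4.0)
print(q_low, q_high)
```

The exponential dependence on Rc/Hm0 is why measured rates span several orders of magnitude and why the validity range of each fitted formula matters so much.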

Relevance: 90.00%

Abstract:

Based on recent high-resolution laboratory experiments on propagating shear rupture, the constitutive law that governs shear rupture processes is discussed in view of physical principles and constraints, and a specific constitutive law is proposed for shear rupture. It is demonstrated that nonuniform distributions of the constitutive-law parameters on the fault are necessary for creating the nucleation process, which consists of two phases: (i) a stable, quasi-static phase, and (ii) a subsequent accelerating phase. Physical models of the breakdown zone and the nucleation zone are presented for shear rupture in the brittle regime. The constitutive law for shear rupture explicitly includes a scaling parameter Dc that allows a common interpretation of both small-scale rupture in the laboratory and large-scale rupture as an earthquake source in the Earth. Both the breakdown zone size Xc and the nucleation zone size L are prescribed and scaled by Dc, which in turn is prescribed by a characteristic length λc representing geometrical irregularities of the fault. The models presented here make it possible to understand the earthquake generation process, from nucleation to unstable dynamic rupture propagation, in terms of physics. Since the nucleation process itself is an immediate earthquake precursor, a deep physical understanding of the nucleation process is crucial for short-term (or immediate) earthquake prediction.
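A common concrete form of such a constitutive law, which illustrates the role of the scaling parameter Dc, is linear slip-weakening: shear strength decays from a peak to a residual value over the critical slip displacement Dc. Both the linear form and the numbers below are illustrative; this is not the specific law proposed in the paper.

```python
import numpy as np

def shear_strength(slip, tau_p=10e6, tau_r=2e6, d_c=0.5):
    """Linear slip-weakening: strength [Pa] drops from peak tau_p to
    residual tau_r as slip grows to the critical slip d_c [m].
    All parameter values here are illustrative placeholders."""
    slip = np.asarray(slip, dtype=float)
    return np.where(slip < d_c, tau_p - (tau_p - tau_r) * slip / d_c, tau_r)

# Strength at zero slip, at half of Dc, at Dc, and well beyond Dc.
tau = shear_strength([0.0, 0.25, 0.5, 1.0])
print(tau)
```

Because Dc sets the slip scale over which strength is lost, it also sets the breakdown-zone and nucleation-zone sizes discussed in the abstract, which is what makes laboratory and earthquake-scale ruptures comparable.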

Relevance: 90.00%

Abstract:

Pennsylvania Department of Transportation, Harrisburg

Relevance: 90.00%

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-06

Relevance: 90.00%

Abstract:

On a global scale, basalts from mid-ocean ridges are strikingly more homogeneous than basalts from intraplate volcanism. The observed geochemical heterogeneity argues strongly for the existence of distinct reservoirs in the Earth's mantle. How these findings can be reconciled with large-scale convection is an unresolved problem in geodynamics. We review observational constraints and investigate the stirring properties of numerical models of mantle convection. Conditions in the early Earth may have supported layered convection with rapid stirring in the upper layers. Material that has been altered near the surface is transported downwards by small-scale convection, so that a layer of homogeneous depleted material develops above pristine mantle. As the mantle cools over Earth's history, the effects leading to layering weaken, and the models show the large-scale convection favoured for the Earth today. Laterally averaged, the upper mantle below the lithosphere is least affected by material that has experienced near-surface differentiation; the geochemical signature acquired during the earlier episode of small-scale convection may be preserved there for the longest time. Additionally, stirring is less effective in the high-viscosity layer of the central lower mantle [1, 2], supporting the survival of medium-scale heterogeneities there. These models are the first, using 3-D spherical geometry and mostly Earth-like parameters, to address the suggested change of convective style. Although the models are still far from reproducing our planet, we find that this proposal may help reconcile geochemical and geophysical constraints.

Relevance: 90.00%

Abstract:

Liquid-liquid extraction has long been known as a unit operation that plays an important role in industry. The process is well known for its complexity and sensitivity to operating conditions. This thesis presents an attempt to explore the dynamics and control of this process using a systematic approach and state-of-the-art control system design techniques. The process was first studied experimentally under carefully selected operating conditions, resembling the ranges employed in practice under stable and efficient operation. Data were collected at steady-state conditions using adequate sampling techniques for the dispersed and continuous phases, as well as during column transients, with the aid of a computer-based online data-logging system and online concentration analysis. A stagewise single-stage backflow model was improved to mimic the dynamic operation of the column. The developed model accounts for the variation in hydrodynamics, mass transfer and physical properties throughout the length of the column; end effects were treated by adding stages at the column entrances. Two parameters were incorporated in the model: a mass transfer weight factor, to correct for the assumption of no mass transfer in the settling zones at each stage, and backmixing coefficients, to handle the axial dispersion phenomena encountered in the course of column operation. The parameters were estimated by minimizing the differences between the experimental and model-predicted concentration profiles at steady-state conditions using a non-linear optimisation technique. The estimated values were then correlated as functions of the operating parameters and incorporated in the model equations. The model equations comprise a stiff differential-algebraic system, which was solved using the GEAR ODE solver. The calculated concentration profiles were compared to those measured experimentally.
Very good agreement between the two profiles was achieved, within a relative error of ±2.5%. The rigorous dynamic model of the extraction column was used to derive linear time-invariant reduced-order models that relate the input variables (agitator speed, solvent feed flowrate and concentration, feed concentration and flowrate) to the output variables (raffinate concentration and extract concentration) using the asymptotic method of system identification. The reduced-order models were shown to capture the dynamic behaviour of the process accurately, with a maximum modelling prediction error of 1%. The simplicity and accuracy of the derived reduced-order models allow control system design and analysis for such complicated processes. The extraction column is a typical multivariable process, with agitator speed and solvent feed flowrate as manipulated variables, raffinate and extract concentrations as controlled variables, and feed concentration and feed flowrate as disturbance variables. The control system design of the extraction process was tackled both as a multi-loop decentralised SISO (single-input single-output) system and as a centralised MIMO (multi-input multi-output) system, using conventional as well as model-based control techniques such as IMC (internal model control) and MPC (model predictive control). The performance of each control scheme was studied in terms of stability, speed of response, sensitivity to modelling errors (robustness), setpoint tracking and load rejection. For decentralised control, loops were assigned to pair each manipulated variable with a controlled variable according to interaction analysis and pairing criteria such as the relative gain array (RGA) and singular value decomposition (SVD). The rotor speed-raffinate concentration and solvent flowrate-extract concentration loops showed weak interaction.
Multivariable MPC showed more effective performance than the conventional techniques, since it accounts for loop interactions, time delays, and constraints on the input and output variables.
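The RGA-based pairing analysis mentioned above can be illustrated with a small numerical sketch; the 2×2 steady-state gain matrix below is invented for demonstration and is not from the thesis. Diagonal RGA elements close to 1 indicate weak loop interaction, consistent with the pairings reported.

```python
import numpy as np

def relative_gain_array(G):
    """RGA = G ∘ (G^-1)^T (element-wise product of the steady-state gain
    matrix with the transpose of its inverse)."""
    return G * np.linalg.inv(G).T

# Hypothetical gains: rows = raffinate conc., extract conc.;
# columns = rotor speed, solvent flowrate (values invented).
G = np.array([[2.0, 0.4],
              [0.5, 1.5]])
rga = relative_gain_array(G)
print(rga)
```

Each row and column of an RGA sums to 1, so for a 2×2 system a single diagonal element near 1 already certifies the diagonal pairing.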

Relevance: 90.00%

Abstract:

Design verification in the digital domain, using model-based principles, is a key research objective to address the industrial requirement for reduced physical testing and prototyping. For complex assemblies, the verification of design and the associated production methods is currently fragmented, prolonged and sub-optimal, as it uses digital and physical verification stages that are deployed in a sequential manner using multiple systems. This paper describes a novel, hybrid design verification methodology that integrates model-based variability analysis with measurement data of assemblies, in order to reduce simulation uncertainty and allow early design verification from the perspective of satisfying key assembly criteria.

Relevance: 90.00%

Abstract:

This thesis focuses on digital equalization of nonlinear fiber impairments for coherent optical transmission systems. Building on well-known physical models of signal propagation in single-mode optical fibers, novel nonlinear equalization techniques are proposed, numerically assessed and experimentally demonstrated. The structure of the proposed algorithms is strongly driven by optimization of the performance-versus-complexity tradeoff, with a view to near-future practical application in commercial real-time transceivers. The work initially focuses on the mitigation of intra-channel nonlinear impairments using the concept of digital backpropagation (DBP) combined with Volterra-based filtering. After a comprehensive analysis of the third-order Volterra kernel, a set of critical simplifications is identified, culminating in the development of reduced-complexity nonlinear equalization algorithms formulated in both the time and frequency domains. The implementation complexity of the proposed techniques is described analytically in terms of computational effort and processing latency, by determining the number of real multiplications per processed sample and the number of serial multiplications, respectively. The equalization performance is assessed numerically and experimentally through bit error rate (BER) measurements. Finally, the problem of inter-channel nonlinear compensation is addressed within the context of 400 Gb/s (400G) superchannels for long-haul and ultra-long-haul transmission. Different superchannel configurations and nonlinear equalization strategies are experimentally assessed, demonstrating that inter-subcarrier nonlinear equalization can provide enhanced signal reach at only marginal added complexity.
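The DBP concept underlying such intra-channel equalizers can be sketched as an inverse split-step Fourier method: each step applies the fibre's dispersion and Kerr nonlinearity with inverted signs. The parameters below are generic single-mode-fibre values chosen for illustration, not those used in the thesis.

```python
import numpy as np

def dbp(signal, n_steps, dz, beta2=-21.7e-27, gamma=1.3e-3, fs=64e9):
    """Single-channel DBP: split-step with sign-inverted dispersion and
    self-phase modulation. beta2 [s^2/m], gamma [1/(W*m)], dz [m], fs [Hz]
    are illustrative placeholder values."""
    omega = 2 * np.pi * np.fft.fftfreq(signal.size, d=1.0 / fs)
    lin = np.exp(0.5j * beta2 * omega**2 * dz)  # inverse dispersion per step
    out = signal.copy()
    for _ in range(n_steps):
        out = np.fft.ifft(np.fft.fft(out) * lin)               # undo dispersion
        out = out * np.exp(-1j * gamma * np.abs(out)**2 * dz)  # undo SPM phase
    return out

rng = np.random.default_rng(2)
tx = (rng.standard_normal(256) + 1j * rng.standard_normal(256)) / np.sqrt(2)

# Both sub-steps are pure phase rotations, so DBP preserves signal energy...
out = dbp(tx, n_steps=20, dz=1e3)
# ...and with nonlinearity switched off the linear steps invert exactly.
back = dbp(dbp(tx, 5, 1e3, gamma=0.0), 5, -1e3, gamma=0.0)
print(np.linalg.norm(out) - np.linalg.norm(tx), np.abs(back - tx).max())
```

The complexity metrics discussed in the abstract map directly onto this sketch: each step costs two FFTs plus a pointwise nonlinear phase rotation, which is what the reduced-complexity variants aim to shrink.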

Relevance: 90.00%

Abstract:

In this thesis, a TCAD approach for the investigation of charge transport in amorphous silicon dioxide is presented for the first time. The proposed approach is used to investigate high-voltage thick TEOS silicon oxide capacitors embedded in the back-end inter-level dielectric layers for galvanic insulation applications. The first part of the thesis gives a detailed review of the main physical and chemical properties of silicon dioxide and of the main physical models for describing charge transport in insulators. The second part presents the characterization of high-voltage MIM structures under different high-field stress conditions up to breakdown; the main physical mechanisms responsible for the observed results are then discussed in detail. The third part is dedicated to the implementation of a TCAD approach capable of describing charge transport in silicon dioxide layers, in order to gain insight into the microscopic physical mechanisms responsible for the leakage current in MIM structures. In particular, I investigated and modeled the role of charge injection at the contacts and of charge build-up due to trapping and de-trapping mechanisms in the oxide layer, in order to understand its behavior under DC and AC stress conditions. In addition, oxide breakdown due to impact ionization of carriers has been taken into account in order to obtain a complete representation of the oxide behavior at very high fields. Numerical simulations have been compared against experiments to validate the proposed approach quantitatively. In the last part of the thesis, the approach has been applied to simulate breakdown in realistic structures under different stress conditions. The TCAD tool has been used to carry out a detailed analysis of the most relevant physical quantities, in order to gain a detailed understanding of the main mechanisms responsible for breakdown and to guide design optimization.

Relevance: 80.00%

Abstract:

This paper discusses the integrated design of parallel manipulators, which exhibit varying dynamics; this characteristic affects machine stability and performance. The design methodology consists of four main steps: (i) system modeling using a flexible multibody technique, (ii) synthesis of reduced-order models suitable for control design, (iii) systematic flexible-model-based input signal design, and (iv) evaluation of several possible machine designs. The novelty of this methodology is that structural flexibilities are taken into consideration during the input signal design, thereby enhancing the standard design process, which mainly considers rigid-body dynamics. The potential of the proposed strategy is demonstrated through the design evaluation of a two-degree-of-freedom high-speed parallel manipulator, and the results are experimentally validated. (C) 2010 Elsevier Ltd. All rights reserved.
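One standard instance of flexible-model-based input signal design is a zero-vibration (ZV) input shaper: the reference command is convolved with two impulses derived from the dominant flexible mode, so the shaped command does not excite that mode. The mode frequency and damping below are invented for illustration; the paper's methodology is more general than this sketch.

```python
import numpy as np

def zv_shaper(freq_hz, zeta, dt):
    """Two-impulse zero-vibration shaper for a mode with natural frequency
    freq_hz [Hz] and damping ratio zeta, sampled at period dt [s]."""
    wd = 2 * np.pi * freq_hz * np.sqrt(1 - zeta**2)      # damped frequency
    K = np.exp(-zeta * np.pi / np.sqrt(1 - zeta**2))
    t2 = np.pi / wd                                       # half damped period
    amps = np.array([1.0, K]) / (1.0 + K)                 # impulse amplitudes
    idx = np.array([0, int(round(t2 / dt))])              # impulse sample times
    shaper = np.zeros(idx[-1] + 1)
    shaper[idx] = amps
    return shaper

dt = 1e-3
shaper = zv_shaper(freq_hz=8.0, zeta=0.02, dt=dt)  # hypothetical mode
step = np.ones(500)
shaped = np.convolve(step, shaper)[:500]  # shaped drive command
print(shaper.sum(), shaped[-1])
```

The impulse amplitudes sum to 1, so the shaper delays the command by half a damped period without changing its final value, trading a little response time for suppressed residual vibration.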

Relevance: 80.00%

Abstract:

An experimental testing program was undertaken to investigate failure mechanisms induced by the active movement of a deep rectangular trapdoor underlying a granular soil. Reduced-scale models were tested under normal gravity as well as under an increased gravitational field using a centrifuge facility. Some models were used to evaluate the performance of both flexible and rigid pipes undergoing a localized loss of support. Failure mechanisms in the longitudinal direction of the models were characterized by a single, well-defined failure surface that developed within the limits of the trapdoor. However, failure mechanisms in the transverse direction of the models were characterized by multiple failure surfaces extending outside the limits of the trapdoor. Significant dilation of the soil located immediately above the trapdoor was identified in the failure of the models. The pattern of the failure mechanisms was found to be affected by the stress level and backfill density. Higher stress levels were found to lead to well-developed failure zones. The influence of backfill density was found to be more relevant in models involving flexible pipes. Pipes embedded within loose backfill were severely damaged after loss of support, while pipes embedded in dense backfill experienced negligible deformations. These results indicate that damage to pipelines caused by ground loss of support can be significantly minimized by controlling the compaction of the fill.

Relevance: 80.00%

Abstract:

Leaf wetness duration (LWD) is a key parameter in agricultural meteorology, since it is related to the epidemiology of many important crops, controlling pathogen infection and development rates. Because LWD is not widely measured, several methods have been developed to estimate it from weather data. Among the models used to estimate LWD, those based on the physical principles of dew formation and dew and/or rain evaporation have shown good portability and sufficiently accurate results, but their complexity is a disadvantage for operational use. Alternatively, empirical models have been used despite their limitations; the simplest of these use only relative humidity data. The objective of this study was to evaluate the performance of three RH-based empirical models for estimating LWD in four regions around the world with different climate conditions. Hourly LWD, air temperature and relative humidity data were obtained from Ames, IA (USA); Elora, Ontario (Canada); Florence, Tuscany (Italy); and Piracicaba, Sao Paulo State (Brazil). These data were used to evaluate the performance of the following empirical LWD estimation models: constant RH threshold (RH >= 90%), dew point depression (DPD), and extended RH threshold (EXT_RH). The models performed differently at the four locations. In Ames, Elora and Piracicaba, the RH >= 90% and DPD models underestimated LWD, whereas in Florence these methods overestimated LWD, especially for shorter wet periods. The EXT_RH model overestimated LWD at all locations, with a significant increase in the errors. In general, the RH >= 90% model performed best, presenting the highest overall fraction of correct estimates (FC), between 0.87 and 0.92, and the lowest false alarm ratio (FAR), between 0.02 and 0.31.
Using location-specific thresholds improved the accuracy of the RH model substantially, even when independent data were used; the MAE ranged from 1.23 to 1.89 h, very similar to the errors obtained with published physical models for LWD estimation. Based on these results, we conclude that, if calibrated locally, LWD can be estimated with acceptable accuracy from RH above a specific threshold, and that the EXT_RH method was unsuitable for estimating LWD at the locations used in this study. (C) 2007 Elsevier B.V. All rights reserved.
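The RH >= 90% classifier and the two skill scores quoted above, fraction correct (FC) and false alarm ratio (FAR), are straightforward to compute; the hourly data below are invented for demonstration.

```python
import numpy as np

def classify_wet(rh, threshold=90.0):
    """Empirical rule: the leaf is assumed wet whenever RH >= threshold."""
    return np.asarray(rh) >= threshold

def fraction_correct(pred, obs):
    """FC: fraction of hours classified correctly (wet or dry)."""
    pred, obs = np.asarray(pred), np.asarray(obs)
    return float(np.mean(pred == obs))

def false_alarm_ratio(pred, obs):
    """FAR: fraction of predicted-wet hours that were actually dry."""
    pred, obs = np.asarray(pred), np.asarray(obs)
    alarms = pred.sum()
    return float((pred & ~obs).sum() / alarms) if alarms else 0.0

# Ten invented hourly RH readings and observed wetness flags.
rh_hourly = np.array([85, 88, 91, 95, 97, 92, 89, 80, 93, 76])
wet_obs = np.array([0, 0, 1, 1, 1, 0, 1, 0, 1, 0], dtype=bool)

pred = classify_wet(rh_hourly)
fc = fraction_correct(pred, wet_obs)
far = false_alarm_ratio(pred, wet_obs)
print(fc, far)
```

Local calibration, as recommended in the study, amounts to tuning `threshold` per site so that FC is maximised (and FAR kept low) against local wetness observations.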