859 results for Analytical Model


Relevance: 30.00%

Abstract:

The analysis of short segments of noise-contaminated, multivariate real-world data constitutes a challenge. In this paper we compare several techniques of analysis that are supposed to correctly extract the amount of genuine cross-correlations from a multivariate data set. In order to test the quality of their performance, we derive time series from a linear test model, which allows the analytical derivation of genuine correlations. We compare the numerical estimates of the four measures with the analytical results for different correlation patterns. In the bivariate case, all but one of the measures perform similarly well. However, in the multivariate case, measures based on the eigenvalues of the equal-time cross-correlation matrix do not extract exclusively information about the amount of genuine correlations; rather, they reflect the spatial organization of the correlation pattern. This may lead to failures when interpreting the numerical results, as illustrated by an application to electroencephalographic recordings of three patients suffering from pharmacoresistant epilepsy.
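The paper does not specify its four measures here, but the flavor of an eigenvalue-based index can be sketched as follows. This is a generic illustration, not the paper's method; the channel count, sample length, and coupling strength are invented:

```python
import numpy as np

# Toy multivariate signal: each channel mixes a shared ("genuine") component
# with independent noise. An eigenvalue-based index then summarizes the
# equal-time cross-correlation matrix in a single number.
rng = np.random.default_rng(0)
n_channels, n_samples = 5, 2000

common = rng.standard_normal(n_samples)                # shared component
noise = rng.standard_normal((n_channels, n_samples))   # independent channel noise
data = 0.8 * common + noise                            # assumed coupling strength 0.8

corr = np.corrcoef(data)                 # equal-time cross-correlation matrix
eigvals = np.linalg.eigvalsh(corr)       # eigenvalues, ascending

# Normalized index in [0, 1]: 0 for uncorrelated channels, 1 for identical ones.
sync_index = (eigvals[-1] - 1.0) / (n_channels - 1)
print(round(sync_index, 2))
```

For this homogeneous coupling the index tracks the mean pairwise correlation; the paper's point is that for spatially structured coupling the largest eigenvalue also reflects that structure, not just the amount of correlation.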

Relevance: 30.00%

Abstract:

Is there a psychological basis for teaching and learning in the context of a liberal education, and if so, what might such a psychological basis look like? Traditional teaching and assessment often emphasize remembering facts and, to some extent, analyzing ideas. Such skills are important, but they leave out the aspects of thinking that are most important not only in liberal education but in life in general. In this article, I propose a theory called WICS, an acronym for wisdom, intelligence, and creativity, synthesized. The basic idea underlying this theory is that, through liberal education, students need to acquire creative skills and attitudes to generate new ideas about how to adapt flexibly to a rapidly changing world; analytical skills and attitudes to ascertain whether these new ideas are good ones; practical skills and attitudes to implement the new ideas and convince others of their value; and wisdom-based skills and attitudes to ensure that the new ideas help achieve a common good through the infusion of positive ethical values.

Relevance: 30.00%

Abstract:

Wind energy has been one of the fastest-growing sectors of the nation's renewable energy portfolio for the past decade, and the same tendency is projected for the upcoming years, given the aggressive governmental policies for the reduction of fossil fuel dependency. So-called Horizontal Axis Wind Turbine (HAWT) technologies have shown great technological promise and outstanding commercial penetration. Given this broad acceptance, the size of wind turbines has grown exponentially over time. However, safety and economic concerns have emerged as a result of the new design tendencies for massive-scale wind turbine structures, which present high slenderness ratios and complex shapes and are typically located in remote areas (e.g. offshore wind farms). In this regard, safe operation requires not only first-hand information on actual structural dynamic conditions under aerodynamic action, but also a deep understanding of the environmental factors in which these multibody rotating structures operate. Given the cyclo-stochastic patterns of the wind loading exerting pressure on a HAWT, a probabilistic framework is appropriate to characterize the risk of failure, in terms of resistance and serviceability conditions, at any given time. Furthermore, sources of uncertainty such as material imperfections, buffeting and flutter, aeroelastic damping, gyroscopic effects, and turbulence, among others, call for a more sophisticated mathematical framework that can properly handle all these sources of indetermination. The modeling complexity that arises from these characterizations demands a data-driven experimental validation methodology to calibrate and corroborate the model.
For this aim, System Identification (SI) techniques offer a spectrum of well-established numerical methods appropriate for stationary, deterministic, data-driven numerical schemes, capable of predicting actual dynamic states (eigenrealizations) of traditional time-invariant dynamic systems. Consequently, a modified data-driven SI metric is proposed, based on the so-called Subspace Realization Theory, now adapted for stochastic, non-stationary, and time-varying systems, as is the case for a HAWT's complex aerodynamics. Simultaneously, this investigation explores the characterization of the turbine loading and response envelopes for critical failure modes of the structural components the wind turbine is made of. In the long run, both the aerodynamic framework (theoretical model) and the system identification (experimental model) will be merged in a numerical engine formulated as a search algorithm for model updating, also known as an Adaptive Simulated Annealing (ASA) process. This iterative engine is based on a set of function minimizations computed by a metric called the Modal Assurance Criterion (MAC). In summary, the Thesis is composed of four major parts: (1) development of an analytical aerodynamic framework that predicts interacting wind-structure stochastic loads on wind turbine components; (2) development of a novel tapered-swept-curved Spinning Finite Element (SFE) that includes damped gyroscopic effects and axial-flexural-torsional coupling; (3) a novel data-driven structural health monitoring (SHM) algorithm via stochastic subspace identification methods; and (4) a numerical search (optimization) engine based on ASA and MAC capable of updating the SFE aerodynamic model.
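The Modal Assurance Criterion that drives the thesis's model-updating engine has a standard definition that can be sketched directly (the mode shapes below are invented; the thesis's own vectors come from the SFE model and the SI procedure):

```python
import numpy as np

# Modal Assurance Criterion (MAC): correlation between an analytical and an
# experimentally identified mode shape; 1 means the shapes are parallel.
def mac(phi_a, phi_x):
    """MAC = |phi_a^T phi_x|^2 / ((phi_a^T phi_a) * (phi_x^T phi_x))."""
    num = np.abs(np.dot(phi_a, phi_x)) ** 2
    den = np.dot(phi_a, phi_a) * np.dot(phi_x, phi_x)
    return num / den

phi_analytical = np.array([1.0, 0.8, 0.4, 0.1])   # hypothetical mode shape
phi_measured = 1.7 * phi_analytical               # scaling does not affect MAC
print(mac(phi_analytical, phi_measured))          # -> 1.0 (up to rounding)
```

Because MAC is scale-invariant, the optimization engine can compare un-normalized identified modes against model predictions; values well below 1 flag model parameters that the ASA search should adjust.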

Relevance: 30.00%

Abstract:

The general model

The aim of this chapter is to introduce a structured overview of the different possibilities available to display and analyze brain electric scalp potentials. First, a general formal model of time-varying distributed EEG potentials is introduced. Based on this model, the most common analysis strategies used in EEG research are introduced and discussed as specific cases of this general model. Both the general model and the particular methods are also expressed in mathematical terms; it is, however, not necessary to understand these terms to understand the chapter. The general model that we propose here is based on the statement made in Chapter 3 that the electric field produced by active neurons in the brain propagates in brain tissue without delay in time. Contrary to other imaging methods that are based on hemodynamic or metabolic processes, EEG scalp potentials are thus "real-time" measurements, neither delayed nor a priori frequency-filtered. If only a single dipolar source in the brain were active, the temporal dynamics of the activity of that source would be exactly reproduced by the temporal dynamics observed in the scalp potentials produced by that source. This is illustrated in Figure 5.1, where the expected EEG signal of a single source with spindle-like dynamics in time has been computed. The dynamics of the scalp potentials exactly reproduce the dynamics of the source. The amplitude of the measured potentials depends on the relation between the location and orientation of the active source, its strength, and the electrode position.
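The single-source case described above can be sketched numerically: each electrode sees the source waveform scaled by a fixed gain, with no delay or filtering. The spindle waveform and the electrode gains below are illustrative inventions, not a real lead field:

```python
import numpy as np

# A spindle-like source: a 12 Hz oscillation under a Gaussian envelope.
t = np.linspace(0.0, 1.0, 500)
source = np.sin(2 * np.pi * 12 * t) * np.exp(-((t - 0.5) ** 2) / 0.02)

# Instantaneous propagation: each electrode's potential is the source scaled
# by a gain depending on source location/orientation and electrode position.
gains = np.array([0.9, 0.4, -0.3])        # hypothetical electrode gains
scalp = np.outer(gains, source)           # potentials at three electrodes

# Each channel reproduces the source dynamics exactly; only amplitude and
# sign differ, so the correlation with the source is +/-1.
r = np.corrcoef(scalp[0], source)[0, 1]
print(round(r, 6))                        # -> 1.0
```

This is exactly the point made in the text: the temporal dynamics are identical across channels and the source; only the amplitudes carry the spatial information.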

Relevance: 30.00%

Abstract:

Energy efficiency has become an important research topic in intralogistics. In this field, the focus is placed especially on automated storage and retrieval systems (AS/RS) utilizing stacker cranes, as these systems are widespread and consume a significant portion of the total energy demand of intralogistical systems. Numerical simulation models have been developed to calculate the energy demand rather precisely for discrete single and dual command cycles. Unfortunately, these simulation models are not suitable for performing fast calculations to determine the mean energy demand of a complete storage aisle. For this purpose, analytical approaches would be more convenient, but until now analytical approaches have only delivered results for certain configurations. In particular, for commonly used stacker cranes equipped with an intermediate circuit connection within their drive configuration, there is no analytical approach available to calculate the mean energy demand. This article addresses this research gap and presents a calculation approach which enables planners to quickly calculate the energy demand of these systems.
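The idea of a mean energy demand for a whole aisle can be illustrated with a deliberately crude sketch: average a per-cycle energy estimate over all rack positions. This is not the article's formula; the hoist-plus-travel model, the mass, the efficiency, and the friction coefficient are all assumed round numbers:

```python
# Toy mean-energy calculation for a storage aisle: average the energy of a
# single command cycle E(x, y) over a grid of rack positions.
AISLE_LENGTH_M = 100.0
AISLE_HEIGHT_M = 20.0
MASS_KG = 3000.0        # crane plus payload (assumed)
ETA = 0.8               # overall drive efficiency (assumed)
MU_ROLL = 0.02          # rolling resistance coefficient (assumed)
G = 9.81

def cycle_energy_joule(x, y):
    """Crude single-command cycle: lift work plus travel friction work,
    divided by drive efficiency; acceleration phases are ignored."""
    return (MASS_KG * G * y + MU_ROLL * MASS_KG * G * x) / ETA

# Discretize the aisle into bays and tiers and average over cell midpoints.
nx, ny = 50, 10
total = 0.0
for i in range(nx):
    for j in range(ny):
        x = (i + 0.5) * AISLE_LENGTH_M / nx
        y = (j + 0.5) * AISLE_HEIGHT_M / ny
        total += cycle_energy_joule(x, y)
mean_energy_kj = total / (nx * ny) / 1000.0
print(round(mean_energy_kj, 1))   # mean single-cycle energy demand, kJ
```

A real model, as the article notes, must also handle the drive configuration (e.g. energy exchange over the intermediate circuit), which this sketch ignores entirely.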

Relevance: 30.00%

Abstract:

PURPOSE: Modulated electron radiotherapy (MERT) promises sparing of organs at risk for certain tumor sites. Any implementation of MERT treatment planning requires an accurate beam model. The aim of this work is the development of a beam model which reconstructs electron fields shaped using the Millennium photon multileaf collimator (MLC) (Varian Medical Systems, Inc., Palo Alto, CA) for a Varian linear accelerator (linac). METHODS: This beam model is divided into an analytical part (two photon and two electron sources) and a Monte Carlo (MC) transport through the MLC. For dose calculation purposes, the beam model has been coupled with a macro MC dose calculation algorithm. The commissioning process requires a set of measurements and precalculated MC input. The beam model has been commissioned at a source-to-surface distance of 70 cm for a Clinac 23EX and a TrueBeam linac (both Varian Medical Systems, Inc., Palo Alto, CA). For validation purposes, measured and calculated depth dose curves and dose profiles are compared for four different MLC-shaped electron fields and all available energies. Furthermore, a measured two-dimensional dose distribution for patched segments consisting of three 18 MeV segments, three 12 MeV segments, and a 9 MeV segment is compared with corresponding dose calculations. Finally, measured and calculated two-dimensional dose distributions are compared for a circular segment encompassed by a C-shaped segment. RESULTS: For 15 × 34, 5 × 5, and 2 × 2 cm² fields, differences between water phantom measurements and calculations using the beam model coupled with the macro MC dose calculation algorithm are generally within 2% of the maximal dose value or 2 mm distance to agreement (DTA) for all electron beam energies. For a more complex MLC pattern, differences between measurements and calculations are generally within 3% of the maximal dose value or 3 mm DTA for all electron beam energies.
For the two-dimensional dose comparisons, the differences between calculations and measurements are generally within 2% of the maximal dose value or 2 mm DTA. CONCLUSIONS: The results of the dose comparisons suggest that the developed beam model is suitable for accurately reconstructing photon-MLC-shaped electron beams for a Clinac 23EX and a TrueBeam linac. Hence, in future work the beam model will be utilized to investigate the possibilities of MERT using the photon MLC to shape electron beams.
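The "2% of maximum dose or 2 mm DTA" acceptance criterion used throughout these comparisons can be sketched as a simplified 1D point-by-point check. This is a generic illustration, not the paper's evaluation code, and the profiles are invented:

```python
import numpy as np

def passes_2pct_2mm(pos_mm, measured, calculated, dose_tol=0.02, dta_mm=2.0):
    """A point passes if the dose difference is within dose_tol of the
    maximum dose, OR the measured curve reaches the calculated dose
    somewhere within dta_mm of the point (distance to agreement)."""
    d_max = measured.max()
    ok = []
    for xi, dc in zip(pos_mm, calculated):
        dose_ok = abs(dc - measured[np.argmin(np.abs(pos_mm - xi))]) <= dose_tol * d_max
        window = np.abs(pos_mm - xi) <= dta_mm        # samples inside the DTA radius
        dta_ok = np.abs(measured[window] - dc).min() <= dose_tol * d_max
        ok.append(dose_ok or dta_ok)
    return np.array(ok)

x = np.arange(0.0, 31.0)                      # positions, mm
meas = np.exp(-(((x - 10.0) / 6.0) ** 2))     # toy measured profile, max 1.0
calc_1mm = np.interp(x + 1.0, x, meas)        # calculation shifted by ~1 mm
calc_5mm = np.interp(x + 5.0, x, meas)        # grossly shifted calculation

print(passes_2pct_2mm(x, meas, calc_1mm).all())   # -> True
print(passes_2pct_2mm(x, meas, calc_5mm).all())   # -> False
```

The DTA branch is what lets steep dose gradients pass despite large local dose differences, which is why the criterion is stated as "2% or 2 mm" rather than a dose tolerance alone.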

Relevance: 30.00%

Abstract:

This paper shows that optimal policy and consistent policy outcomes require the use of control-theory and game-theory solution techniques. While optimal policy and consistent policy often produce different outcomes even in a one-period model, we analyze consistent policy and its outcome in a simple model, finding that the cause of the inconsistency with optimal policy traces to inconsistent targets in the social loss function. As a result, the central bank should adopt a loss function that differs from the social loss function. Carefully designing the central bank's loss function with consistent targets can harmonize optimal and consistent policy. This desirable result emerges from two observations. First, the social loss function reflects a normative process that does not necessarily prove consistent with the structure of the microeconomy. Thus, the social loss function cannot serve as a direct loss function for the central bank. Second, an optimal loss function for the central bank must depend on the structure of that microeconomy. In addition, this paper shows that control theory provides a benchmark for institution design in a game-theoretical framework.
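The gap between optimal (commitment) and consistent (discretionary) policy in a one-period model can be illustrated with the standard Barro-Gordon setup, which is not the paper's own model but captures the same mechanism: an over-ambitious target in the loss function creates an inflation bias under consistency with no output gain. All parameter values are invented:

```python
# One-period sketch: social loss L = pi^2 + lam*(y - y_star)^2 with a
# Phillips curve y = a*(pi - pi_e); y_star > 0 is the inconsistent target.
a, lam, y_star = 1.0, 0.5, 2.0

# Commitment (optimal): announce pi = 0; expectations follow, so y = 0.
pi_commit, y_commit = 0.0, 0.0
loss_commit = pi_commit**2 + lam * (y_commit - y_star) ** 2

# Discretion (consistent): rational expectations force pi_e = pi, hence
# y = 0, while the bank's first-order condition pins pi at the bias.
pi_disc = a * lam * y_star            # inflation bias a*lam*y_star
loss_disc = pi_disc**2 + lam * (0.0 - y_star) ** 2

print(pi_disc, loss_commit, loss_disc)   # discretion loses: same y, extra inflation
```

The consistent outcome delivers the same output as commitment but strictly higher loss, which is the paper's motivation for redesigning the central bank's loss function so that the consistent and optimal solutions coincide.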

Relevance: 30.00%

Abstract:

Uveal melanoma is a rare but life-threatening form of ocular cancer. Contemporary treatment techniques include proton therapy, which enables conservation of the eye and its useful vision. Dose to the proximal structures is widely believed to play a role in treatment side effects; therefore, reliable dose estimates are required for properly evaluating the therapeutic value and complication risk of treatment plans. Unfortunately, current simplistic dose calculation algorithms can result in errors of up to 30% in the proximal region. In addition, they lack predictive methods for absolute dose per monitor unit (D/MU) values. To facilitate more accurate dose predictions, a Monte Carlo model of an ocular proton nozzle was created and benchmarked against measured dose profiles to within ±3% or ±0.5 mm and against D/MU values to within ±3%. The benchmarked Monte Carlo model was used to develop and validate a new broad beam dose algorithm that included the influence of edge-scattered protons on the cross-field intensity profile, the effect of energy straggling in the distal portion of poly-energetic beams, and the proton fluence loss as a function of residual range. Generally, the analytical algorithm predicted relative dose distributions that were within ±3% or ±0.5 mm and absolute D/MU values that were within ±3% of Monte Carlo calculations. Slightly larger dose differences were observed at depths less than 7 mm, an effect attributed to the dose contributions of edge-scattered protons. Additional comparisons of Monte Carlo and broad beam dose predictions were made in a detailed eye model developed in this work, with generally similar findings. Monte Carlo was shown to be an excellent predictor of the measured dose profiles and D/MU values and a valuable tool for developing and validating a broad beam dose algorithm for ocular proton therapy.
The more detailed physics modeling by the Monte Carlo and broad beam dose algorithms represents an improvement in the accuracy of relative dose predictions over current techniques, and both provide absolute dose predictions. It is anticipated that these improvements can be used to develop treatment strategies that reduce the incidence or severity of treatment complications by sparing normal tissue.

Relevance: 30.00%

Abstract:

Objective. To measure the demand for primary care and its associated factors by building and estimating a demand model of primary care in urban settings. Data source. Secondary data from the 2005 California Health Interview Survey (CHIS 2005), a population-based random-digit-dial telephone survey conducted by the UCLA Center for Health Policy Research in collaboration with the California Department of Health Services and the Public Health Institute between July 2005 and April 2006. Study design. A literature review was done to specify the demand model by identifying relevant predictors and indicators. CHIS 2005 data were utilized for demand estimation. Analytical methods. Probit regression was used to estimate the use/non-use equation, and negative binomial regression was applied to the utilization equation with its non-negative integer dependent variable. Results. The model included two equations, in which the use/non-use equation explained the probability of making a doctor visit in the past twelve months and the utilization equation estimated the demand for primary care conditional on at least one visit. Among the independent variables, wage rate and income did not affect primary care demand, whereas age had a negative effect on demand. People with college and graduate educational levels were associated with 1.03 (p < 0.05) and 1.58 (p < 0.01) more visits, respectively, compared to those with no formal education. Insurance was significantly and positively related to the demand for primary care (p < 0.01). Need-for-care variables exhibited positive effects on demand (p < 0.01). Existence of chronic disease was associated with 0.63 more visits, disability status was associated with 1.05 more visits, and people with poor health status had 4.24 more visits than those with excellent health status. Conclusions. The average probability of visiting doctors in the past twelve months was 85% and the average number of visits was 3.45.
The study emphasized the importance of need variables in explaining healthcare utilization, as well as the impact of insurance, employment, and education on demand. The two-equation model of decision-making, estimated with probit and negative binomial regression methods, was a useful approach to demand estimation for primary care in urban settings.
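The two-equation structure can be sketched in a few lines: a probit-style probability of any use, combined with a conditional mean number of visits, gives the unconditional expected demand. The numbers below are illustrative inventions chosen only to mimic the reported 85% use probability and 3.45 average visits; they are not CHIS 2005 estimates:

```python
from math import erf, sqrt

# Probit link: probability of at least one visit is the standard-normal
# CDF of a linear index x'beta.
def probit_prob(xb):
    return 0.5 * (1.0 + erf(xb / sqrt(2.0)))

p_use = probit_prob(1.04)           # hypothetical index giving roughly 85%
visits_if_user = 4.06               # hypothetical conditional mean (NB model)

# Two-part (hurdle-style) expected demand: P(use) * E[visits | use >= 1].
expected_visits = p_use * visits_if_user
print(round(p_use, 2), round(expected_visits, 2))
```

Splitting demand this way lets the covariates act differently on the decision to seek care at all and on the intensity of use, which is why the study estimates the two equations separately.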

Relevance: 30.00%

Abstract:

Interaction effects are an important scientific interest in many areas of research. A common approach for investigating the interaction effect of two continuous covariates on a response variable is a cross-product term in multiple linear regression. In epidemiological studies, the two-way analysis of variance (ANOVA) type of method has also been utilized to examine the interaction effect by replacing the continuous covariates with their discretized levels. However, the implications of the model assumptions of either approach have not been examined, and statistical validation has only focused on the general method, not specifically on the interaction effect. In this dissertation, we investigated the validity of both approaches based on their mathematical assumptions for non-skewed data. We showed that linear regression may not be an appropriate model when the interaction effect exists, because it implies a highly skewed distribution for the response variable. We also showed that the normality and constant variance assumptions required by ANOVA are not satisfied in the model where the continuous covariates are replaced with their discretized levels. Therefore, naïve application of the ANOVA method may lead to an incorrect conclusion. Given the problems identified above, we proposed a novel method, modified from the traditional ANOVA approach, to rigorously evaluate the interaction effect. The analytical expression of the interaction effect was derived based on the conditional distribution of the response variable given the discretized continuous covariates. A testing procedure that combines the p-values from each level of the discretized covariates was developed to test the overall significance of the interaction effect. According to the simulation study, the proposed method is more powerful than least squares regression and the ANOVA method in detecting the interaction effect when the data come from a trivariate normal distribution.
The proposed method was applied to a dataset from the National Institute of Neurological Disorders and Stroke (NINDS) tissue plasminogen activator (t-PA) stroke trial, and a baseline age-by-weight interaction effect was found to be significant in predicting the change from baseline in NIHSS at Month 3 among patients who received t-PA therapy.
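The dissertation's exact combination rule for the per-level p-values is not given here, but one classical way to combine independent p-values into an overall test is Fisher's method, sketched below with hypothetical per-level p-values:

```python
from math import log

# Fisher's method: X^2 = -2 * sum(ln p_i) is chi-square distributed with
# 2k degrees of freedom under the global null (k independent tests).
def fisher_statistic(p_values):
    return -2.0 * sum(log(p) for p in p_values)

p_per_level = [0.04, 0.20, 0.01]     # hypothetical p-values, one per level
stat = fisher_statistic(p_per_level)
print(round(stat, 2))                # compare against chi-square with df = 6
```

Small p-values at any level inflate the statistic, so the combined test detects an interaction that is strong in only some strata of the discretized covariates.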

Relevance: 30.00%

Abstract:

It is well established that orbital-scale sea-level changes generated larger transport of sediments into the deep sea during the last glacial maximum than during the Holocene. However, the response of sedimentary processes to abrupt millennial-scale climate variability is largely unknown. The frequency of distal turbidites and the amounts of advected detrital carbonate are estimated off the Lisbon-Setúbal canyons, within a chronostratigraphy based on radiometric ages, oxygen isotopes, and paleomagnetic key global anomalies. We found that: 1) A higher frequency of turbidites concurred with the coldest Northern Hemisphere temperatures (Greenland Stadials [GS], including Heinrich [H] events). More than that, an escalating frequency of turbidites starts with the onset of global sea-level rise (and warming in Antarctica) and culminates during H events, at a time when the rise is still in its early-mid stage and the Atlantic Meridional Overturning Circulation (AMOC) is re-starting. This short time span coincides with maximum gradients of ocean surface and bottom temperatures between GS and Antarctic warmings (Antarctic Isotope Maximum; AIM 17, 14, 12, 8, 4, 2) and rapid sea-level rises. 2) The triggering of turbidity currents is not the only sedimentary process responding to millennial variability; land-detrital carbonate (with a very negative bulk δ18O signature) enters the deep sea by density-driven slope lateral advection, accordingly during GS. 3) Possible mechanisms to create slope instability on the Portuguese continental margin are sea-level variations as small as 20 m, and slope friction by rapid re-accommodation of deep and intermediate water mass circulation. 4) Common forcing mechanisms appear to drive slope instability at both millennial and orbital scales.

Relevance: 30.00%

Abstract:

State-of-the-art process-based models have been shown to be applicable to the simulation and prediction of coastal morphodynamics. On annual to decadal temporal scales, these models may show limitations in reproducing complex natural morphological evolution patterns, such as the movement of bars and tidal channels, e.g. the observed decadal migration of the Medem Channel in the Elbe Estuary, German Bight. Here, a morphodynamic model is shown to simulate the hydrodynamics and sediment budgets of the domain to some extent, but it fails to adequately reproduce the pronounced channel migration, due to the insufficient implementation of bank erosion processes. In order to allow for long-term simulations of the domain, a nudging method has been introduced to update the model-predicted bathymetries with observations. The model-predicted bathymetry is nudged towards true states in annual time steps. Sensitivity analysis of a user-defined correlation length scale, used to define the background error covariance matrix during the nudging procedure, suggests that the optimal error correlation length is similar to the grid cell size, here 80-90 m. Additionally, spatially heterogeneous correlation lengths produce more realistic channel depths than spatially homogeneous correlation lengths do. Consecutive application of the nudging method compensates for the (stand-alone) model prediction errors and corrects the channel migration pattern, with a Brier skill score of 0.78. The proposed nudging method serves as an analytical approach to update model predictions towards a predefined 'true' state for the spatiotemporal interpolation of incomplete morphological data in long-term simulations.
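A minimal 1D version of the nudging step can be sketched as follows: the predicted bathymetry is relaxed towards an observation with Gaussian weights whose width is the correlation length scale. The grid, depths, and single observation are invented; 85 m is used as a mid-range value of the 80-90 m found optimal:

```python
import numpy as np

x = np.arange(0.0, 1000.0, 80.0)          # along-channel grid, ~80 m cells
predicted = np.full(x.size, -5.0)         # model-predicted depth, m
obs_pos, obs_depth = 400.0, -9.0          # one observed channel deepening
corr_length = 85.0                        # background error correlation length

# Gaussian spatial weights: nearby cells feel the full correction, cells
# beyond a few correlation lengths are left essentially unchanged.
weights = np.exp(-0.5 * ((x - obs_pos) / corr_length) ** 2)
idx = int(np.argmin(np.abs(x - obs_pos)))  # grid cell holding the observation
innovation = obs_depth - predicted[idx]    # observation minus prediction
nudged = predicted + weights * innovation

print(round(nudged[idx], 2))               # -> -9.0 at the observation point
```

With a correlation length near the cell size, each observation corrects roughly its own cell and immediate neighbors, which matches the sensitivity result that 80-90 m (about one grid cell) is optimal for reproducing narrow channel features.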

Relevance: 30.00%

Abstract:

In the present work, a seismic retrofitting technique is proposed for masonry-infilled reinforced concrete frames, based on the replacement of infill panels by K-bracing with a vertical shear link. The performance of this technique is evaluated through experimental tests. A simplified numerical model for structural damage evaluation is also formulated according to the notions and principles of continuum damage mechanics. The proposed model is calibrated with the experimental results. The experimental results show an excellent energy dissipation capacity for the proposed technique. Likewise, the numerical predictions of the proposed model are in good agreement with the experimental results.
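A scalar damage index in the spirit of continuum damage mechanics can be sketched with a generic definition (this is not the paper's calibrated model; the stiffness readings are invented): damage grows from 0 (intact) to 1 (failed) as the secant stiffness degrades relative to the initial stiffness.

```python
# Generic continuum-damage-style index: d = 1 - K_secant / K_initial.
def damage_index(k_initial, k_secant):
    d = 1.0 - k_secant / k_initial
    return min(max(d, 0.0), 1.0)      # clip to the physical range [0, 1]

# Hypothetical secant stiffness readings over successive load cycles (kN/mm).
stiffness_history = [50.0, 42.0, 33.0, 21.0]
damage = [damage_index(stiffness_history[0], k) for k in stiffness_history]
print([round(d, 2) for d in damage])   # -> [0.0, 0.16, 0.34, 0.58]
```

Tracking such an index over the test history is one simple way a damage-mechanics model can be calibrated against experimental hysteresis data.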

Relevance: 30.00%

Abstract:

This paper describes a numerical study on the instability of a brace-type seismic damper based on the out-of-plane yielding of the web of wide-flange steel sections (Web Plastifying Damper, WPD). The damper is intended to be installed in a framed structure as a standard diagonal brace. Under lateral forces, the damper is subjected to high axial forces; therefore, its buckling instability is a matter of concern. Several finite element models representing WPDs with different axial stiffnesses and various geometries of their components were developed and analyzed, taking into account both material and geometrical nonlinearities. The influence of several parameters defining the WPD on the load-displacement curve was examined. Furthermore, a simplified model to predict the buckling load is proposed.
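As a rough order-of-magnitude check of the buckling concern (not the paper's simplified model, which accounts for the damper's actual geometry and nonlinearities), the classical Euler formula gives the critical axial load of an idealized elastic brace. The section properties and length below are invented:

```python
from math import pi

# Euler buckling load: P_cr = pi^2 * E * I / (K * L)^2
E = 210e9          # steel Young's modulus, Pa
I = 2.0e-5         # weak-axis second moment of area, m^4 (assumed)
L = 4.0            # brace length, m (assumed)
K = 1.0            # effective length factor, pinned-pinned idealization

p_cr_kn = pi**2 * E * I / (K * L) ** 2 / 1e3
print(round(p_cr_kn, 1))   # critical axial load in kN
```

A finite element model of the real damper will generally predict a lower capacity than this elastic hand estimate, since yielding of the web and geometric imperfections, both included in the paper's analyses, reduce the buckling load.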