916 results for modeling and prediction


Relevance:

100.00%

Publisher:

Abstract:

This thesis begins by presenting the main characteristics and application fields of AlGaN/GaN HEMT technology, focusing on reliability aspects due essentially to the presence of low-frequency dispersive phenomena, which limit the microwave performance of these devices in several ways. Based on an equivalent-voltage approach, a new low-frequency device model is presented in which the dynamic nonlinearity of the trapping effect is taken into account for the first time, allowing considerable improvements in the prediction of quantities that are very important for power amplifier design, such as power-added efficiency, dissipated power and internal device temperature. An innovative, low-cost measurement setup for characterizing the device under low-frequency, large-amplitude sinusoidal excitation is also presented. This setup allows the identification of the new low-frequency model through suitable procedures, explained in detail. The thesis also describes a new non-invasive empirical method for compact electrothermal modeling and thermal-resistance extraction. The novel contribution of the proposed approach concerns the nonlinear dependence of the channel temperature on the dissipated power. This is very important for GaN devices, since they are capable of operating at relatively high temperatures with high power densities, and the dependence of the thermal resistance on temperature is therefore quite relevant. Finally, a novel method for device thermal simulation is investigated: based on the analytical solution of the three-dimensional heat equation, a Visual Basic program has been developed to estimate, in real time, the temperature distribution on the hottest surface of planar multilayer structures. The developed solver is particularly useful for peak-temperature estimation at the design stage, when critical decisions about circuit design and packaging have to be made. It facilitates layout optimization and reliability improvement, allowing the correct choice of device geometry and configuration to achieve the best possible thermal performance.
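The nonlinear electrothermal coupling described above can be sketched as a fixed-point iteration: the channel temperature satisfies T_ch = T_base + R_th(T_ch)·P_diss, where R_th itself grows with temperature. The linear R_th(T) law, all constants, and all names below are illustrative assumptions, not values from the thesis.

```python
# Hedged sketch: channel-temperature estimation with a temperature-dependent
# thermal resistance. The linear R_th(T) law and the numbers are invented
# for illustration only.

def r_th(t_channel, r0=8.0, k=0.004, t_ref=25.0):
    """Thermal resistance (K/W) rising linearly with channel temperature."""
    return r0 * (1.0 + k * (t_channel - t_ref))

def channel_temperature(p_diss, t_base=25.0, tol=1e-6, max_iter=100):
    """Fixed-point iteration for T_ch = T_base + R_th(T_ch) * P_diss."""
    t = t_base
    for _ in range(max_iter):
        t_new = t_base + r_th(t) * p_diss
        if abs(t_new - t) < tol:
            return t_new
        t = t_new
    return t

# A constant R_th would give t_base + r0 * p_diss = 41 C at 2 W; the
# nonlinear model predicts a hotter channel because R_th grows with T.
print(channel_temperature(2.0))
```

The iteration converges quickly here because the temperature feedback is weak (the contraction factor is r0·k·P_diss, well below 1 for these values).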

Relevance:

100.00%

Publisher:

Abstract:

We use data from about 700 GPS stations in the Euro-Mediterranean region to investigate the present-day behavior of the Calabrian subduction zone within Mediterranean-scale plate kinematics and to perform local-scale studies of strain accumulation on active structures. We focus attention on the Messina Straits and Crati Valley faults, where GPS data show extensional velocity gradients of ∼3 mm/yr and ∼2 mm/yr, respectively. We use dislocation models and a non-linear constrained optimization algorithm to invert for fault geometric parameters and slip-rates, and we evaluate the associated uncertainties with a bootstrap approach. Our analysis suggests the presence of two partially locked normal faults. To investigate the impact of elastic strain contributions from other nearby active faults on the observed velocity gradient, we use a block-modeling approach. Our models show that the inferred slip-rates on the two analyzed structures are strongly affected by the assumed locking width of the Calabrian subduction thrust. In order to frame the observed local deformation features within present-day central Mediterranean kinematics, we carry out a statistical analysis testing the independent motion (with respect to the African and Eurasian plates) of the Adriatic, Calabrian and Sicilian blocks. Our preferred model confirms microplate-like behavior for all the investigated blocks. Within these kinematic boundary conditions we further investigate the Calabrian slab interface geometry using a combined approach of block modeling and the reduced chi-squared (χ²ν) statistic. Almost no information is obtained using only the horizontal GPS velocities, which prove to be an insufficient dataset for a multi-parametric inversion approach. To constrain the slab geometry more strongly, we estimate the predicted vertical velocities by performing suites of forward models of elastic dislocations with varying fault locking depth. Comparison with the observed field suggests a maximum resolved locking depth of 25 km.
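The bootstrap step in the inversion described above can be sketched as follows. To keep things simple, the classic 1-D screw-dislocation (strike-slip) velocity profile v(x) = (s/π)·arctan(x/D) stands in for the full dislocation models, and the data, noise level, grid bounds and station geometry are all invented for illustration.

```python
# Hedged sketch: bootstrap uncertainty estimation for a slip-rate inverted
# from GPS velocities. Residuals from a best-fit model are resampled and
# the inversion repeated to build an empirical confidence interval.
import numpy as np

rng = np.random.default_rng(0)

def model(x, s, d):
    """1-D interseismic velocity (mm/yr): slip-rate s, locking depth d (km)."""
    return s / np.pi * np.arctan(x / d)

def invert(x, v):
    """Brute-force grid-search inversion for (s, d)."""
    best, best_misfit = None, np.inf
    for s in np.linspace(0.5, 6.0, 56):
        for d in np.linspace(1.0, 30.0, 59):
            misfit = np.sum((v - model(x, s, d)) ** 2)
            if misfit < best_misfit:
                best, best_misfit = (s, d), misfit
    return best

# Synthetic "observed" velocities: true s = 3 mm/yr, d = 10 km, 0.3 mm/yr noise.
x = np.linspace(-50.0, 50.0, 40)
v_obs = model(x, 3.0, 10.0) + 0.3 * rng.standard_normal(x.size)

s0, d0 = invert(x, v_obs)
resid = v_obs - model(x, s0, d0)

# Bootstrap: resample residuals with replacement, re-invert, collect slip-rates.
boot = []
for _ in range(100):
    v_b = model(x, s0, d0) + rng.choice(resid, size=resid.size, replace=True)
    boot.append(invert(x, v_b)[0])
s_lo, s_hi = np.percentile(boot, [2.5, 97.5])
print(f"slip-rate = {s0:.1f} mm/yr, 95% bootstrap CI [{s_lo:.1f}, {s_hi:.1f}]")
```

The residual-resampling scheme shown here is one common bootstrap variant; resampling whole stations is another.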

Relevance:

100.00%

Publisher:

Abstract:

The determination of skeletal loading conditions in vivo, and their relationship to the health of bone tissues, remains an open question. Computational modeling of the musculoskeletal system is the only practicable method providing a valuable approach to muscle and joint loading analyses, although crucial shortcomings limit the translation of computational methods into orthopedic and neurological practice. Growing attention has focused on subject-specific modeling, particularly when pathological musculoskeletal conditions need to be studied. Nevertheless, subject-specific data cannot always be collected in research and clinical practice, and there is a lack of efficient methods and frameworks for building models and incorporating them into simulations of motion. The overall aim of the present PhD thesis was to introduce improvements to state-of-the-art musculoskeletal modeling for the prediction of physiological muscle and joint loads during motion. A threefold goal was articulated as follows: (i) develop state-of-the-art subject-specific models and analyze skeletal load predictions; (ii) analyze the sensitivity of model predictions to relevant musculotendon model parameters and kinematic uncertainties; (iii) design an efficient software framework simplifying the effort-intensive pre-processing phases of subject-specific modeling. The first goal underlined the relevance of subject-specific musculoskeletal modeling for determining physiological skeletal loads during gait, corroborating the choice of fully subject-specific modeling for the analysis of pathological conditions. The second goal characterized the sensitivity of skeletal load predictions to major musculotendon parameters and kinematic uncertainties, and robust probabilistic methods were applied for methodological and clinical purposes. The last goal produced an efficient software framework for subject-specific modeling and simulation that is practical, user-friendly and effort-effective. Future research aims at implementing more accurate models of lower-limb joint mechanics and musculotendon paths, and at assessing, through probabilistic modeling, the overall scenario of crucial model parameters affecting skeletal load predictions.
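A first-pass probabilistic sensitivity analysis of the kind described in goal (ii) can be sketched as follows. The "joint load" function below is a toy surrogate, not a musculoskeletal simulation, and the parameter names, nominal values and distributions are illustrative assumptions.

```python
# Hedged sketch: Monte Carlo sensitivity of a model output to musculotendon
# parameters. Uncertain inputs are sampled, propagated through a surrogate,
# and ranked by correlation with the output.
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Sample uncertain inputs: maximum isometric force (N) and tendon slack
# length (m), perturbed around assumed nominal values.
f_max = rng.normal(1000.0, 100.0, n)
l_slack = rng.normal(0.20, 0.01, n)

def joint_load(f_max, l_slack):
    # Toy surrogate: load scales with force and drops as slack length grows.
    return f_max * (1.0 - 5.0 * (l_slack - 0.20))

out = joint_load(f_max, l_slack)

# Correlation of each input with the output: a crude but common first-pass
# sensitivity measure (it misses strongly nonlinear effects).
for name, vals in [("f_max", f_max), ("l_slack", l_slack)]:
    r = np.corrcoef(vals, out)[0, 1]
    print(f"{name}: correlation with output = {r:+.2f}")
```

More rigorous variants (variance-based Sobol indices, probabilistic bounds on predicted loads) follow the same sample-propagate-summarize pattern.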

Relevance:

100.00%

Publisher:

Abstract:

This is a "research B" project for the University of Bologna, within the civil engineering Laurea Magistrale course at UNIBO. The main purpose of this research is to promote another way of explaining, analyzing and presenting some civil engineering topics to students worldwide, through theory, modeling and photographs. The basic idea is divided into three steps. The first is to present and analyze the theoretical part: a detailed analysis of the theory, combined with theorems, explanations, examples and exercises, covers this step. In the second, a model makes clear all the parts discussed in the theory by showing how structures work or fail. Modeling can reproduce, at scale, the behavior of many elements used in real structures. After these two steps, an exhibition of commented photographs from the real world gives engineers the chance to observe all this theoretical and laboratory material in many different cases. For example, many civil engineers know about air pressure on structures, but many of them have never seen the extraordinary behavior of the Tacoma Narrows Bridge 'dancing with the air'. At this point I would like to say that what I have produced is not a book, but a study of how this three-step presentation of some mechanical characteristics could be helpful. I believe my research is different and new, and in my opinion it is important because it helps students go deeper into the science and also offers new ideas and inspiration. This way of teaching can be used in all lessons, especially technical ones. I hope that one day textbooks will adopt this kind of presentation.

Relevance:

100.00%

Publisher:

Abstract:

The goals of the present study were to model the population kinetics of in vivo influx and efflux processes of grepafloxacin at the serum-cerebrospinal fluid (CSF) barrier and to propose a simulation-based approach to optimize the design of dose-finding trials in the meningitis rabbit model. Twenty-nine rabbits with pneumococcal meningitis receiving grepafloxacin at 15 mg/kg of body weight (intravenous administration at 0 h), 30 mg/kg (at 0 h), or 50 mg/kg twice (at 0 and 4 h) were studied. A three-compartment population pharmacokinetic model was fit to the data with the program NONMEM (Nonlinear Mixed Effects Modeling). Passive diffusion clearance (CL(diff)) and active efflux clearance (CL(active)) are transfer kinetic modeling parameters. Influx clearance is assumed to be equal to CL(diff), and efflux clearance is the sum of CL(diff), CL(active), and bulk flow clearance (CL(bulk)). The average influx clearance for the population was 0.0055 ml/min (interindividual variability, 17%). Passive diffusion clearance was greater in rabbits receiving grepafloxacin at 15 mg/kg than in those treated with higher doses (0.0088 versus 0.0034 ml/min). Assuming a CL(bulk) of 0.01 ml/min, CL(active) was estimated to be 0.017 ml/min (11%), and clearance by total efflux was estimated to be 0.032 ml/min. The population kinetic model allows us not only to quantify in vivo efflux and influx mechanisms at the serum-CSF barrier but also to analyze the effects of different dose regimens on transfer kinetic parameters in the rabbit meningitis model. The modeling-based approach also provides a tool for the simulation and prediction of various outcomes of interest to researchers, which is of great potential in designing dose-finding trials.
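The clearance bookkeeping stated above can be checked directly: influx is passive diffusion only, while efflux sums diffusion, active transport and bulk flow. The values are those reported in the abstract (the small discrepancy with the reported 0.032 ml/min is rounding).

```python
# Hedged sketch of the abstract's clearance arithmetic (all values in ml/min,
# taken from the text; variable names are ours).
CL_diff = 0.0055     # passive diffusion (population-average influx clearance)
CL_active = 0.017    # active efflux clearance, estimated
CL_bulk = 0.01       # bulk (CSF) flow clearance, assumed in the study

CL_influx = CL_diff
CL_efflux = CL_diff + CL_active + CL_bulk

print(f"influx = {CL_influx:.4f} ml/min")
print(f"efflux = {CL_efflux:.4f} ml/min")   # ~0.032 ml/min, as reported
```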

Relevance:

100.00%

Publisher:

Abstract:

When observers are presented with two visual targets appearing in the same position in close temporal proximity, a marked reduction in detection performance for the second target has often been reported, the so-called attentional blink (AB) phenomenon. Several studies found a decrement in P300 amplitudes during the attentional blink period similar to that observed in detection performance for the second target. However, it remained unclear whether the parallel courses of second-target performance and the corresponding P300 amplitudes resulted from the same underlying mechanisms. The aim of our study was therefore to investigate whether the mechanisms underlying the AB can be assessed by fixed-links modeling, and whether this kind of assessment would reveal the same, or at least related, processes in the behavioral and electrophysiological data. At both levels of observation, three highly similar processes could be identified: an increasing trend, a decreasing trend and a U-shaped trend. Corresponding processes from the behavioral and electrophysiological data were substantially correlated, with the two U-shaped trends showing the strongest association with each other. Our results provide evidence for the assumption that the same mechanisms underlie attentional blink task performance at the electrophysiological and behavioral levels, as assessed by fixed-links models.

Relevance:

100.00%

Publisher:

Abstract:

At first sight, experimenting and modeling form two distinct modes of scientific inquiry. This spurs philosophical debates about how the distinction should be drawn (e.g. Morgan 2005, Winsberg 2009, Parker 2009). But much scientific practice casts serious doubt on the idea that the distinction makes much sense. There are two worries. First, the practices of modeling and experimenting are often intertwined in intricate ways, because much modeling involves experimenting, and the interpretation of many experiments relies upon models. Second, there are borderline cases that seem to blur the distinction between experiment and model (if there is any). My talk tries to defend the philosophical project of distinguishing models from experiments and to advance the related philosophical debate. I begin by providing a minimalist framework for conceptualizing experimenting and modeling and their mutual relationships. The two methods are conceptualized as different types of activities, each characterized by a primary goal. This minimalist framework, which should be uncontroversial, suffices to accommodate the first worry. I address the second worry by suggesting several ways to conceptualize the distinction more flexibly, and I make a concrete suggestion for how the distinction may be drawn. I use examples from the history of science to argue my case. The talk concentrates on models and experiments, but I will comment on simulations too.

Relevance:

100.00%

Publisher:

Abstract:

State-of-the-art process-based models have been shown to be applicable to the simulation and prediction of coastal morphodynamics. On annual to decadal temporal scales, however, these models may show limitations in reproducing complex natural morphological evolution patterns, such as the movement of bars and tidal channels, e.g. the observed decadal migration of the Medem Channel in the Elbe Estuary, German Bight. Here a morphodynamic model is shown to simulate the hydrodynamics and sediment budgets of the domain to some extent, but it fails to adequately reproduce the pronounced channel migration, owing to the insufficient implementation of bank erosion processes. In order to allow long-term simulations of the domain, a nudging method has been introduced to update the model-predicted bathymetries with observations. The model-predicted bathymetry is nudged towards true states in annual time steps. Sensitivity analysis of a user-defined correlation length scale, used to define the background error covariance matrix during the nudging procedure, suggests that the optimal error correlation length is similar to the grid cell size, here 80-90 m. Additionally, spatially heterogeneous correlation lengths produce more realistic channel depths than spatially homogeneous ones. Consecutive application of the nudging method compensates for the (stand-alone) model prediction errors and corrects the channel migration pattern, with a Brier skill score of 0.78. The nudging method proposed in this study serves as an analytical approach to updating model predictions towards a predefined 'true' state, for the spatiotemporal interpolation of incomplete morphological data in long-term simulations.
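The nudging step described above can be sketched in one dimension: each observation's innovation is spread to neighbouring cells with a Gaussian correlation of user-defined length scale. The grid size, length scale, gain and the simple weighting below are illustrative assumptions, not the study's actual implementation.

```python
# Hedged sketch: nudge a model-predicted bathymetry toward sparse
# observations, with a Gaussian background-error correlation controlling
# how far each correction spreads.
import numpy as np

def nudge(bathy, obs_idx, obs_val, dx=80.0, corr_len=85.0, gain=1.0):
    """Relax a 1-D bathymetry profile (m, negative down) toward observations."""
    x = np.arange(bathy.size) * dx
    updated = bathy.copy()
    for i, z in zip(obs_idx, obs_val):
        innovation = z - bathy[i]
        # Gaussian correlation weights around the observation location;
        # corr_len ~ grid cell size, as the sensitivity analysis suggests.
        w = np.exp(-0.5 * ((x - x[i]) / corr_len) ** 2)
        updated += gain * w * innovation
    return updated

bathy = np.full(50, -5.0)              # flat 5 m deep model guess
obs_idx, obs_val = [25], [-12.0]       # one observed channel point, 12 m deep
nudged = nudge(bathy, obs_idx, obs_val)
print(f"depth at obs point: {nudged[25]:.1f} m")
```

With gain 1 the profile matches the observation exactly at the observed cell, while cells a few correlation lengths away are left essentially untouched.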

Relevance:

100.00%

Publisher:

Abstract:

Recently, vision-based advanced driver-assistance systems (ADAS) have received renewed interest as a means of enhancing driving safety. In particular, thanks to their high performance-cost ratio, monocular camera systems are becoming the main focus of work in this field. In this paper we present a novel on-board road modeling and vehicle detection system, developed as part of the European I-WAY project. The system relies on a robust estimation of the perspective of the scene, which adapts to the dynamics of the vehicle and generates a stabilized rectified image of the road plane. This rectified plane is used by a recursive Bayesian classifier, which assigns pixels to classes corresponding to the elements of interest in the scenario. This stage works as an intermediate layer that isolates subsequent modules, since it absorbs the inherent variability of the scene. The system has been tested on-road in different scenarios, including varied illumination and adverse weather conditions, and the results have proved remarkable even for such complex scenarios.
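The recursive Bayesian classification stage mentioned above can be sketched for a single pixel: each frame's observation likelihood multiplies the running posterior over classes. The class set, likelihood values and frames below are invented for illustration; the paper's actual features and model are not specified here.

```python
# Hedged sketch: per-pixel recursive Bayesian update over scene classes.
import numpy as np

classes = ["road", "marking", "other"]
prior = np.array([1 / 3, 1 / 3, 1 / 3])   # start uninformed for one pixel

def update(posterior, likelihood):
    """One recursive Bayes step: new posterior is proportional to old x likelihood."""
    p = posterior * likelihood
    return p / p.sum()

# Simulated per-frame likelihoods of the observed pixel appearance under
# each class; three consecutive frames that all favour "road".
frames = [
    np.array([0.6, 0.2, 0.2]),
    np.array([0.7, 0.2, 0.1]),
    np.array([0.5, 0.3, 0.2]),
]

belief = prior
for lk in frames:
    belief = update(belief, lk)
print(dict(zip(classes, belief.round(3))))
```

Accumulating evidence over frames is what lets such a stage absorb per-frame variability before handing a stable labeling to downstream modules.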

Relevance:

100.00%

Publisher:

Abstract:

The wake effect is one of the most important aspects to be analyzed in the engineering phase of every wind farm, since it entails a significant power deficit and an increase in turbulence levels, with a consequent decrease in turbine lifetime. It depends on the wind farm design, the wind turbine type and the atmospheric conditions prevailing at the site. Traditionally, industry has used analytical models, quick and robust, which allow wind farm engineering to be carried out flexibly at the preliminary stages. However, new models based on Computational Fluid Dynamics (CFD) are needed; these must increase the accuracy of the output variables while avoiding an increase in computational time. Among them, elliptic models based on the actuator disk technique have come into widespread use in recent years. These models present three important problems when used as-is for the solution of large wind farms: the estimation of the reference wind speed upstream of each rotor disk, turbulence modeling, and computational time. In order to minimize the consequences of these problems, this PhD thesis proposes solutions implemented in the open-source CFD solver OpenFOAM and adapted to each type of site: a correction of the reference wind speed for the general elliptic models, a semi-parabolic model for large offshore wind farms, and a hybrid model for wind farms in complex terrain. All the models are validated in terms of power ratios against experimental data from real operating wind farms.
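The quick analytical models the text contrasts with CFD can be illustrated with the classic Jensen (Park) top-hat wake model, where the axial velocity recovers as the wake expands linearly behind the rotor. The thrust coefficient, wake decay constant and geometry below are typical textbook values, not data from the thesis.

```python
# Hedged sketch: Jensen (Park) analytical wake model for the velocity
# deficit behind a single turbine.
import math

def jensen_wake(x, u0=8.0, r0=40.0, ct=0.8, k=0.05):
    """Wind speed (m/s) on the wake axis, a distance x (m) behind the rotor.

    u0: free-stream speed, r0: rotor radius, ct: thrust coefficient,
    k: wake decay constant (typical onshore value ~0.05).
    """
    a = 0.5 * (1.0 - math.sqrt(1.0 - ct))          # axial induction factor
    return u0 * (1.0 - 2.0 * a / (1.0 + k * x / r0) ** 2)

for x in (200.0, 400.0, 800.0):
    print(f"x = {x:4.0f} m: u = {jensen_wake(x):.2f} m/s")
```

The monotonic recovery with downstream distance is the behavior such models capture cheaply; the estimation of the upstream reference speed per rotor is exactly the part that becomes problematic inside a large farm.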

Relevance:

100.00%

Publisher:

Abstract:

Sandwich panels of laminated gypsum and rock wool have shown extensive cracking pathology caused by excessive slab deflection. Currently, the most widespread use of this material is in vertical division or partition elements with no structural function, which explains why there are no studies on its fracture mechanisms and the related mechanical properties. Therefore, and in order to reduce the cracking problem, it is necessary to make progress in the simulation and prediction of the behaviour of such panels under tensile and shear load, even though they have no structural responsibility in typical applications.

Relevance:

100.00%

Publisher:

Abstract:

Progress in homology modeling and protein design has generated considerable interest in methods for predicting side-chain packing in the hydrophobic cores of proteins. Present techniques are not practically useful, however, because they are unable to model protein main-chain flexibility. Parameterization of backbone motions may represent a general and efficient method to incorporate backbone relaxation into such fixed main-chain models. To test this notion, we introduce a method for treating explicitly the backbone motions of alpha-helical bundles based on an algebraic parameterization proposed by Francis Crick in 1953 [Crick, F. H. C. (1953) Acta Crystallogr. 6, 685-689]. Given only the core amino acid sequence, a simple calculation can rapidly reproduce the crystallographic main-chain and core side-chain structures of three coiled coils (one dimer, one trimer, and one tetramer) to within 0.6-Å root-mean-square deviations. The speed of the predictive method [approximately 3 min per rotamer choice on a Silicon Graphics (Mountain View, CA) 4D/35 computer] permits it to be used as a design tool.
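Crick's algebraic parameterization referenced above describes a coiled-coil backbone as a minor helix wound around a superhelical path, so a handful of parameters generate Cα coordinates directly. The sketch below uses one common form of these equations with typical textbook values for a left-handed dimeric coiled coil; the parameter values and sign conventions are illustrative assumptions, not fitted to any structure.

```python
# Hedged sketch: Ca backbone coordinates from a Crick-style coiled-coil
# parameterization (superhelix of radius r0 plus minor helix of radius r1).
import math

def crick_backbone(n_res, r0=4.9, r1=2.26, w0=-math.radians(4.0),
                   w1=math.radians(102.86), alpha=-math.radians(12.0),
                   phi1=0.0):
    """Ca coordinates (in angstroms) for n_res residues of one helix.

    r0/r1: superhelix/minor-helix radii; w0/w1: their per-residue phase
    advances; alpha: superhelical pitch angle; phi1: minor-helix phase.
    """
    coords = []
    for t in range(n_res):
        big, small = w0 * t, w1 * t + phi1   # superhelix / minor-helix phases
        x = (r0 * math.cos(big)
             + r1 * math.cos(big) * math.cos(small)
             - r1 * math.cos(alpha) * math.sin(big) * math.sin(small))
        y = (r0 * math.sin(big)
             + r1 * math.sin(big) * math.cos(small)
             + r1 * math.cos(alpha) * math.cos(big) * math.sin(small))
        z = (r0 * w0 * t / math.tan(alpha)
             - r1 * math.sin(alpha) * math.sin(small))
        coords.append((x, y, z))
    return coords

ca = crick_backbone(28)                      # four heptad repeats
print(f"first Ca: ({ca[0][0]:.2f}, {ca[0][1]:.2f}, {ca[0][2]:.2f})")
```

Because the whole backbone is a smooth function of a few parameters, backbone relaxation reduces to a low-dimensional search, which is what makes the method fast enough for design.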

Relevance:

100.00%

Publisher:

Abstract:

Thesis (Master's)--University of Washington, 2016-06

Relevance:

100.00%

Publisher:

Abstract:

Ecologists and economists both use models to help develop strategies for biodiversity management. The practical use of disciplinary models, however, can be limited because ecological models tend not to address the socioeconomic dimension of biodiversity management, whereas economic models tend to neglect the ecological dimension. Given these shortcomings of disciplinary models, it is necessary to combine ecological and economic knowledge into ecological-economic models. It is insufficient for scientists to work separately in their own disciplines and combine their knowledge only when it comes to formulating management recommendations. Such an approach does not capture feedback loops between the ecological and the socioeconomic systems. Furthermore, each discipline poses the management problem in its own way and comes up with its own most appropriate solution. These disciplinary solutions, however, are likely to be so different that a combined solution considering aspects of both disciplines cannot be found. Preconditions for a successful model-based integration of ecology and economics include (1) an in-depth knowledge of the two disciplines, (2) the adequate identification and framing of the problem to be investigated, and (3) a common understanding between economists and ecologists of modeling and scale. To further advance ecological-economic modeling, the development of common benchmarks, quality controls, and refereeing standards for ecological-economic models is desirable.

Relevance:

100.00%

Publisher:

Abstract:

Two types of prediction problem can be solved using a regression line: prediction of the 'population' regression line (the mean response) at a point x, and prediction of an 'individual' new member of the population, y1, for which x1 has been measured. The second problem is probably the more commonly encountered and the more relevant to calibration studies. A regression line is likely to be most useful for calibration if the range of values of the X variable is large, if the 'x,y' values are well represented across the range of X, and if several estimates of y are made at each x. It is poor statistical practice to use a regression line for calibration or prediction beyond the limits of the data.
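The two problems above differ only in their standard errors: the interval for the mean response at x* omits the extra unit of residual variance that a single new observation carries, so the prediction interval is always wider. The sketch below uses invented data, and the 95% t quantile for 10 degrees of freedom (2.228) is hardcoded to keep the example stdlib-only.

```python
# Hedged sketch: confidence interval for the mean response at x*, versus the
# wider prediction interval for a single new observation y1 at x1 = x*.
import math

x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
y = [2.1, 2.9, 4.2, 4.8, 6.1, 7.0, 8.2, 8.8, 10.1, 10.9, 12.2, 12.8]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
sxx = sum((xi - xbar) ** 2 for xi in x)
b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx  # slope
a = ybar - b * xbar                                               # intercept
s2 = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y)) / (n - 2)

t975 = 2.228                     # t quantile, 95%, df = n - 2 = 10
x_star = 6.5
fit = a + b * x_star
se_mean = math.sqrt(s2 * (1 / n + (x_star - xbar) ** 2 / sxx))   # population line
se_new = math.sqrt(s2 * (1 + 1 / n + (x_star - xbar) ** 2 / sxx))  # new individual

print(f"mean response CI:   {fit:.2f} +/- {t975 * se_mean:.2f}")
print(f"new observation PI: {fit:.2f} +/- {t975 * se_new:.2f}")
```

Both standard errors grow with (x* - x̄)², which is the quantitative reason extrapolating beyond the limits of the data is poor practice.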