973 results for Model-driven engineering
Abstract:
Wind energy has been one of the fastest-growing sectors of the nation’s renewable energy portfolio for the past decade, and the same tendency is projected for the upcoming years given the aggressive governmental policies for the reduction of fossil fuel dependency. So-called Horizontal Axis Wind Turbine (HAWT) technologies have shown great technological promise and outstanding commercial penetration. Given this broad acceptance, wind turbine size has grown exponentially over time. However, safety and economic concerns have emerged as a result of new design tendencies for massive-scale wind turbine structures presenting high slenderness ratios and complex shapes, typically located in remote areas (e.g. offshore wind farms). In this regard, safe operation requires not only first-hand information on actual structural dynamic conditions under aerodynamic action, but also a deep understanding of the environmental factors in which these multibody rotating structures operate. Given the cyclo-stochastic patterns of the wind loading exerting pressure on a HAWT, a probabilistic framework is appropriate to characterize the risk of failure in terms of resistance and serviceability conditions at any given time. Furthermore, sources of uncertainty such as material imperfections, buffeting and flutter, aeroelastic damping, gyroscopic effects, and turbulence, among others, have called for a more sophisticated mathematical framework that can properly handle all these sources of indeterminacy. The attainable modeling complexity that arises from these characterizations demands a data-driven experimental validation methodology to calibrate and corroborate the model. To this end, System Identification (SI) techniques offer a spectrum of well-established numerical methods appropriate for stationary, deterministic, and data-driven numerical schemes, capable of predicting actual dynamic states (eigenrealizations) of traditional time-invariant dynamic systems. Consequently, a modified data-driven SI metric is proposed, based on the so-called Subspace Realization Theory and adapted for stochastic, non-stationary, and time-varying systems, as is the case for a HAWT’s complex aerodynamics. Simultaneously, this investigation explores the characterization of the turbine loading and response envelopes for critical failure modes of the structural components the wind turbine is made of. In the long run, both the aerodynamic framework (theoretical model) and the system identification (experimental model) will be merged in a numerical engine formulated as a search algorithm for model updating, known as Adaptive Simulated Annealing (ASA). This iterative engine is based on a set of function minimizations computed by a metric called the Modal Assurance Criterion (MAC). In summary, the Thesis is composed of four major parts: (1) development of an analytical aerodynamic framework that predicts interacting wind-structure stochastic loads on wind turbine components; (2) development of a novel tapered-swept-curved Spinning Finite Element (SFE) that includes damped-gyroscopic effects and axial-flexural-torsional coupling; (3) a novel data-driven structural health monitoring (SHM) algorithm via stochastic subspace identification methods; and (4) a numerical search (optimization) engine based on ASA and MAC capable of updating the SFE aerodynamic model.
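The MAC metric named above has a compact closed form: MAC(phi_a, phi_b) = |phi_a' phi_b|^2 / ((phi_a' phi_a)(phi_b' phi_b)). A minimal NumPy sketch, not the thesis code, with hypothetical mode-shape vectors:

    import numpy as np

    def mac(phi_a, phi_b):
        """Modal Assurance Criterion between two mode-shape vectors.
        Returns a value in [0, 1]; values near 1 indicate consistent modes."""
        num = abs(np.vdot(phi_a, phi_b)) ** 2           # np.vdot conjugates phi_a
        den = np.vdot(phi_a, phi_a).real * np.vdot(phi_b, phi_b).real
        return float(num / den)

    # Hypothetical example: identified mode shape vs. model prediction.
    phi_model = np.array([0.00, 0.31, 0.59, 0.81, 0.95, 1.00])
    phi_ident = phi_model + 0.05 * np.random.default_rng(0).standard_normal(6)
    print(f"MAC = {mac(phi_model, phi_ident):.3f}")     # close to 1

In a model-updating loop such as ASA, 1 - MAC (summed over paired modes) would serve as one of the minimized objective functions.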
Abstract:
OBJECTIVES: To analyze computer-assisted diagnostics and virtual implant planning and to evaluate the indication for template-guided flapless surgery and immediate loading in the rehabilitation of the edentulous maxilla. MATERIALS AND METHODS: Forty patients with an edentulous maxilla were selected for this study. The three-dimensional analysis and virtual implant planning were performed with the NobelGuide software program (Nobel Biocare, Göteborg, Sweden). Prior to the computed tomography scan, aesthetic and functional aspects were checked clinically. Either a well-fitting denture or an optimized prosthetic setup was used and then converted into a radiographic template. This allowed for a computer-guided analysis of the jaw together with the prosthesis. Accordingly, the best implant position was determined in relation to the bone structure and the prospective tooth position. For all jaws, the hypothetical indications for (1) four implants with a bar overdenture and (2) six implants with a simple fixed prosthesis were planned. The planning of the optimized implant positions was then analyzed as follows: the number of implants that could be placed in a sufficient quantity of bone was calculated, and the additional surgical procedures (guided bone regeneration, sinus floor elevation) that would be necessary due to reduced bone quality and quantity were identified. The indication for template-guided, flapless surgery or an immediate loading protocol was evaluated. RESULTS: Model (a), bar overdentures: for 28 patients (70%), all four implants could be placed in sufficient bone (112 implants in total); thus, a fully flapless procedure could be suggested. For six patients (15%), sufficient bone was not available for any of their planned implants. The remaining six patients exhibited a combination of sufficient and insufficient bone. Model (b), simple fixed prosthesis: for 12 patients (30%), all six implants could be placed in sufficient bone (72 implants in total); thus, a fully flapless procedure could be suggested. For seven patients (17%), sufficient bone was not available for any of their planned implants. The remaining 21 patients exhibited a combination of sufficient and insufficient bone. DISCUSSION: In the maxilla, advanced atrophy is often observed, and implant placement becomes difficult or impossible. Thus, flapless surgery or an immediate loading protocol can be performed in only a select number of patients. Nevertheless, the use of a computer program for prosthetically driven implant planning is highly efficient and safe. The three-dimensional view of the maxilla allows the determination of the best implant position, the optimization of the implant axis, and the definition of the best surgical and prosthetic solution for the patient. Thus, a protocol that combines a computer-guided technique with conventional surgical procedures becomes a promising option, which needs to be further evaluated and improved.
Abstract:
The performance of the reanalysis-driven Canadian Regional Climate Model, version 5 (CRCM5) in reproducing the present climate over the North American COordinated Regional climate Downscaling EXperiment domain for the 1989–2008 period has been assessed against several observation-based datasets. The model reproduces the near-surface temperature and precipitation characteristics satisfactorily over most of North America. Coastal and mountainous zones remain problematic: a cold bias (2–6 °C) prevails over the Rocky Mountains in summer and over Mexico year-round, and winter precipitation in mountainous coastal regions is overestimated. The precipitation patterns related to the North American Monsoon are well reproduced, except at its northern limit. The spatial and temporal structure of the Great Plains Low-Level Jet is well reproduced by the model; however, the night-time precipitation maximum in the jet area is underestimated. The performance of CRCM5 was also assessed against earlier CRCM versions and other RCMs. CRCM5 is shown to be substantially improved compared with CRCM3 and CRCM4 in terms of seasonal mean statistics, and to be comparable to other modern RCMs.
Abstract:
In this paper, we propose a new method for fully automatic landmark detection and shape segmentation in X-ray images. To detect landmarks, we estimate the displacements from randomly sampled image patches to the (unknown) landmark positions and then integrate these predictions via a voting scheme. Our key contribution is a new algorithm for estimating these displacements. Unlike other methods, where each image patch independently predicts its displacement, we jointly estimate the displacements from all patches together in a data-driven way, considering not only the training data but also geometric constraints on the test image. The displacement estimation is formulated as a convex optimization problem that can be solved efficiently. Finally, we use the sparse shape composition model as a priori information to regularize the landmark positions and thus generate the segmented shape contour. We validate our method on X-ray image datasets of three different anatomical structures: the complete femur, the proximal femur, and the pelvis. Experiments show that our method is accurate and robust in landmark detection and, combined with the shape model, gives better or comparable performance in shape segmentation compared with state-of-the-art methods. Finally, a preliminary study using CT data shows the extensibility of our method to 3D data.
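To make the voting step concrete, here is a minimal sketch with synthetic data; note that it uses independent per-patch votes rather than the paper's joint convex estimation, and all names and values are hypothetical:

    import numpy as np

    def vote_landmark(patch_centers, displacements, image_shape):
        """Accumulate patch votes for one landmark on a pixel grid and
        return the pixel receiving the most votes."""
        votes = np.zeros(image_shape, dtype=int)
        targets = np.rint(patch_centers + displacements).astype(int)
        for r, c in targets:
            if 0 <= r < image_shape[0] and 0 <= c < image_shape[1]:
                votes[r, c] += 1
        return np.unravel_index(np.argmax(votes), image_shape)

    # Toy data: 200 patches whose predicted displacements point, with noise,
    # at a true landmark located at (60, 80).
    rng = np.random.default_rng(1)
    centers = rng.uniform(0, 100, size=(200, 2))
    disp = np.array([60.0, 80.0]) - centers + rng.normal(0, 1.0, size=(200, 2))
    print(vote_landmark(centers, disp, (128, 128)))     # near (60, 80)

The paper's joint formulation replaces these independent displacement predictions with a convex program that couples all patches through geometric constraints on the test image.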
Abstract:
Correct predictions of future blood glucose levels in individuals with Type 1 Diabetes (T1D) can be used to provide early warning of upcoming hypo-/hyperglycemic events and thus to improve the patient's safety. To increase prediction accuracy and efficiency, various approaches have been proposed that combine multiple predictors to produce results superior to those of single predictors. Three methods for model fusion are presented and comparatively assessed. Data from 23 T1D subjects under sensor-augmented pump (SAP) therapy were used in two adaptive data-driven models (an autoregressive model with output correction, cARX, and a recurrent neural network, RNN). Data fusion techniques based on (i) Dempster-Shafer Evidential Theory (DST), (ii) Genetic Algorithms (GA), and (iii) Genetic Programming (GP) were used to merge the complementary performances of the prediction models. The fused output is used in a warning algorithm to issue alarms of upcoming hypo-/hyperglycemic events. The fusion schemes showed improved performance, with lower root mean square errors, lower time lags, and higher correlation. In the warning algorithm, median daily false alarms (DFA) of 0.25% and 100% correct alarms (CA) were obtained for both event types. The detection times (DT) before the occurrence of events were 13.0 and 12.1 min for hypoglycemic and hyperglycemic events, respectively. Compared with the cARX and RNN models, and with a linear fusion of the two, the proposed fusion schemes represent a significant improvement.
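The DST, GA, and GP schemes are too involved for a short example, but the linear-fusion baseline the abstract compares against can be sketched. A minimal least-squares fusion of two predictors on synthetic, glucose-like data (all signals and noise levels hypothetical):

    import numpy as np

    def fit_linear_fusion(preds, target):
        """Least-squares weights for fusing predictor outputs.
        preds: (T, K) matrix, one column per predictor; target: (T,)."""
        w, *_ = np.linalg.lstsq(preds, target, rcond=None)
        return w

    def rmse(a, b):
        return float(np.sqrt(np.mean((a - b) ** 2)))

    # Two noisy stand-ins for the cARX and RNN outputs.
    rng = np.random.default_rng(2)
    t = np.linspace(0, 24, 288)                      # 24 h at 5-min sampling
    glucose = 120 + 30 * np.sin(2 * np.pi * t / 24)  # synthetic signal, mg/dL
    p1 = glucose + rng.normal(0, 8, t.size)
    p2 = glucose + rng.normal(0, 10, t.size)
    P = np.column_stack([p1, p2])
    fused = P @ fit_linear_fusion(P, glucose)
    print(rmse(p1, glucose), rmse(p2, glucose), rmse(fused, glucose))

Because the two predictors' errors are partly independent, the fused output's RMSE falls below that of either individual predictor, which is the effect the more elaborate DST/GA/GP schemes exploit further.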
Abstract:
In any physicochemical process in liquids, the dynamical response of the solvent to solutes out of equilibrium plays a crucial role in the rates and products: the solvent molecules react to the changes in volume and electron density of the solutes to minimize the free energy of the solution, thus modulating the activation barriers and stabilizing (or destabilizing) intermediate states. In charge transfer (CT) processes in polar solvents, the response of the solvent always assists the formation of charge-separated states by stabilizing the energy of the localized charges. A deep understanding of the solvation mechanisms and time scales is therefore essential for a correct description of any photochemical process in the condensed phase and for designing molecular devices based on photosensitizers with CT excited states. In the last two decades, with the advent of ultrafast time-resolved spectroscopies, microscopic models describing the relevant case of polar solvation (where both the solvent and the solute molecules have a permanent electric dipole and the mutual interaction is mainly dipole-dipole) have progressed dramatically. Regardless of the details of each model, they all assume that the effect of the electrostatic fields of the solvent molecules on the internal electronic dynamics of the solute is perturbative and that the solvent-solute coupling is mainly an electrostatic interaction between the constant permanent dipoles of the solute and the solvent molecules. This well-established picture has proven to quantitatively rationalize spectroscopic effects of environmental and electric dynamics (time-resolved Stokes shifts, inhomogeneous broadening, etc.). However, recent computational and experimental studies, including ours, have shown that further improvement is required. Indeed, in recent years we investigated several molecular complexes exhibiting photoexcited CT states, and we found that the current description of the formation and stabilization of CT states in an important group of molecules, such as transition metal complexes, is inaccurate. In particular, we proved that the solvent molecules are not just spectators of intramolecular electron density redistribution but significantly modulate it. Our results call for further development of quantum mechanical computational methods that treat the solute and (at least) the closest solvent molecules, including a nonperturbative treatment of the effects of local electrostatics and direct solvent-solute interactions, to describe the dynamical changes of the solute excited states during the solvent response.
Abstract:
“Import content of exports”, based on Leontief’s demand-driven input-output model, has been widely used as an indicator to measure a country’s degree of participation in vertical specialisation trade. At the sectoral level, this indicator represents the share of intermediates imported by all sectors that is embodied in a given sector’s exported output. However, this indicator only reflects one aspect of vertical specialisation: the demand side. This paper discusses the possibility of using the input-output model developed by Ghosh to measure vertical specialisation from the supply-side perspective. At the sector level, the Ghosh-type indicator measures the share of imported intermediates used in a sector’s production that is subsequently embodied in the exports of all sectors. We estimate these two indicators of vertical specialisation for 47 selected economies for 1995, 2000, and 2005 using the OECD’s harmonised input-output database. In addition, the potential biases of both indicators due to the treatment of net withdrawals from inventories are also discussed.
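For the demand-side indicator, the standard computation runs through the domestic Leontief inverse: with A_d the domestic and A_m the imported input coefficient matrices, the imports embodied in sector k's exports are u' A_m (I - A_d)^-1 e_k. A minimal sketch with hypothetical two-sector coefficients (Leontief side only; the paper's Ghosh-type indicator would use allocation coefficients instead):

    import numpy as np

    def import_content_of_exports(A_d, A_m, exports):
        """Leontief (demand-driven) import content of exports by sector.
        A_d, A_m: domestic / imported input coefficient matrices (n x n);
        exports: gross exports by sector (n,). Returns each sector's
        imported-intermediate share of its exports."""
        n = A_d.shape[0]
        L = np.linalg.inv(np.eye(n) - A_d)               # domestic Leontief inverse
        embodied = np.ones(n) @ A_m @ L @ np.diag(exports)
        return embodied / exports

    # Hypothetical two-sector economy.
    A_d = np.array([[0.20, 0.10],
                    [0.15, 0.25]])
    A_m = np.array([[0.05, 0.08],
                    [0.03, 0.06]])
    print(import_content_of_exports(A_d, A_m, np.array([100.0, 50.0])))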
Abstract:
The competence evaluation promoted by the European Higher Education Area entails a very important methodological change that requires guiding support to help teachers carry out this new and complex task. In this regard, the Technical University of Madrid (UPM, by its Spanish acronym) has financed a series of coordinated projects with a two-fold objective: (a) to develop a model for teaching and evaluating core competences that is useful and easily applicable to its different degrees, and (b) to provide support to teachers by creating an area within the Website for Educational Innovation where they can search for information on the model corresponding to each core competence approved by UPM. The information available on each competence includes its definition, the formulation of indicators providing evidence of the level of acquisition, the recommended teaching and evaluation methodology, examples of evaluation rules for the different levels of competence acquisition, and descriptions of best practices. These best practices correspond to pilot tests applied to several of the academic subjects taught at UPM in order to validate the model. This work describes the general procedure that was used and presents the model developed specifically for the problem-solving competence. Some of the pilot experiences are also summarised and their results analysed.
Abstract:
A dynamical model is proposed to describe the coupled decomposition and profile evolution of a free-surface film of a binary mixture. An example is a thin film of a polymer blend on a solid substrate undergoing simultaneous phase separation and dewetting. The model is based on model-H, which describes the coupled transport of the mass of one component (convective Cahn-Hilliard equation) and momentum (Navier-Stokes-Korteweg equations), supplemented by appropriate boundary conditions at the solid substrate and the free surface. General transport equations are derived using phenomenological nonequilibrium thermodynamics for a general nonisothermal setting, taking into account Soret and Dufour effects and interfacial viscosity for the internal diffuse interface between the two components. Focusing on an isothermal setting, the resulting model is compared with literature results, and its base states, corresponding to homogeneous or vertically stratified flat layers, are analyzed.
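The full model-H coupling of convective Cahn-Hilliard and Navier-Stokes-Korteweg dynamics is beyond a short example, but the decomposition part can be illustrated in isolation. A minimal sketch of 1D spinodal decomposition under the plain (non-convective) Cahn-Hilliard equation dc/dt = M d2/dx2 (c^3 - c - kappa d2c/dx2), with hypothetical parameters, periodic boundaries, and an explicit Euler step:

    import numpy as np

    def cahn_hilliard_1d(c, dx, dt, steps, M=1.0, kappa=1.0):
        """Explicit-Euler evolution of the 1D Cahn-Hilliard equation with
        periodic boundaries; dt must respect the stiff stability limit
        (roughly dt < dx**4 / (8 * M * kappa))."""
        def lap(f):
            return (np.roll(f, -1) - 2 * f + np.roll(f, 1)) / dx**2
        for _ in range(steps):
            mu = c**3 - c - kappa * lap(c)     # chemical potential
            c = c + dt * M * lap(mu)
        return c

    # Spinodal decomposition from a small random perturbation of c = 0.
    rng = np.random.default_rng(3)
    c0 = 0.01 * rng.standard_normal(256)
    c_final = cahn_hilliard_1d(c0, dx=1.0, dt=0.01, steps=20000)
    print(c_final.min(), c_final.max())        # approaches the binodal values near ±1

The convective variant in the model adds an advection term coupling the composition field to the velocity field obtained from the Navier-Stokes-Korteweg equations.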
Abstract:
Algebraic topology (homology) is used to analyze the state of spiral defect chaos in both laboratory experiments and numerical simulations of Rayleigh-Bénard convection. The analysis reveals topological asymmetries that arise when non-Boussinesq effects are present. The asymmetries are found in different flow fields in the simulations and are robust to substantial alterations to flow visualization conditions in the experiment. However, the asymmetries are not observable using conventional statistical measures. These results suggest homology may provide a new and general approach for connecting spatiotemporal observations of chaotic or turbulent patterns to theoretical models.
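As a rough illustration of the kind of topological measurement involved (not the authors' computational-homology pipeline), the sketch below approximates the Betti numbers of a thresholded 2D pattern by connected-component counting with SciPy; the test pattern is synthetic:

    import numpy as np
    from scipy import ndimage

    def betti_numbers_2d(field, threshold):
        """Approximate Betti numbers of the superlevel set {field > threshold}:
        beta0 = connected components (8-connectivity); beta1 = holes, counted
        as complement components (4-connectivity) not touching the border."""
        mask = field > threshold
        _, beta0 = ndimage.label(mask, structure=np.ones((3, 3), dtype=int))
        comp, n_comp = ndimage.label(~mask)    # default 4-connectivity
        border = np.unique(np.concatenate(
            [comp[0], comp[-1], comp[:, 0], comp[:, -1]]))
        beta1 = n_comp - np.count_nonzero(border)
        return beta0, beta1

    # Synthetic stripe pattern with a defect-like modulation.
    x, y = np.meshgrid(np.linspace(0, 10 * np.pi, 256),
                       np.linspace(0, 10 * np.pi, 256))
    pattern = np.cos(x + 0.3 * np.sin(y))
    print(betti_numbers_2d(pattern, 0.0))      # "hot" regions
    print(betti_numbers_2d(-pattern, 0.0))     # "cold" regions

Comparing such counts for the hot and cold regions of a convection pattern is one way topological asymmetries of the kind reported above can be quantified.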