964 results for Digital Modelling
Abstract:
Satellite-based Synthetic Aperture Radar (SAR) has proved useful for obtaining information on flood extent, which, when intersected with a Digital Elevation Model (DEM) of the floodplain, provides water level observations that can be assimilated into a hydrodynamic model to decrease forecast uncertainty. With an increasing number of operational satellites with SAR capability, information on the relationship between satellite first visit and revisit times and forecast performance is required to optimise the operational scheduling of satellite imagery. By using an Ensemble Transform Kalman Filter (ETKF) and a synthetic analysis with the 2D hydrodynamic model LISFLOOD-FP based on a real flooding case affecting an urban area (summer 2007, Tewkesbury, Southwest UK), we evaluate the sensitivity of the forecast performance to visit parameters. We emulate a generic hydrologic-hydrodynamic modelling cascade by imposing a bias and spatiotemporal correlations on the inflow error ensemble entering the hydrodynamic domain. First, in agreement with previous research, estimating and correcting for this bias leads to a clear improvement in keeping the forecast on track. Second, imagery obtained early in the flood is shown to have a large influence on forecast statistics. The revisit interval is most influential for early observations. The results are promising for the future of remote sensing-based water level observations for real-time flood forecasting in complex scenarios.
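For readers unfamiliar with the ETKF, the following minimal sketch shows a single analysis step in which an ensemble of model states is updated against observations. The dimensions, matrices and synthetic check are illustrative assumptions, not the LISFLOOD-FP setup used in the paper.

```python
import numpy as np

def etkf_analysis(X, y, H, R):
    """One ETKF analysis step (deterministic square-root filter).
    X: (n, m) ensemble of model states (n state variables, m members)
    y: (p,)   observations, e.g. SAR/DEM-derived water levels
    H: (p, n) linear observation operator
    R: (p, p) observation error covariance
    Returns the updated (analysis) ensemble, shape (n, m)."""
    m = X.shape[1]
    x_mean = X.mean(axis=1, keepdims=True)
    Xp = X - x_mean                       # state perturbations
    Yp = H @ Xp                           # perturbations in observation space
    Rinv = np.linalg.inv(R)
    # analysis covariance expressed in the m-dimensional ensemble space
    Pa = np.linalg.inv((m - 1) * np.eye(m) + Yp.T @ Rinv @ Yp)
    w = Pa @ Yp.T @ Rinv @ (y - (H @ x_mean).ravel())   # mean-update weights
    # symmetric square root of (m-1)*Pa transforms the perturbations
    evals, evecs = np.linalg.eigh((m - 1) * Pa)
    W = evecs @ np.diag(np.sqrt(np.maximum(evals, 0.0))) @ evecs.T
    return x_mean + Xp @ (w[:, None] + W)

# tiny synthetic check: 3 state variables, 8 members, 2 observations
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 8))
H = np.eye(2, 3)                          # observe the first two variables
Xa = etkf_analysis(X, np.array([0.5, -0.2]), H, 0.1 * np.eye(2))
```

In a setup like the paper's, the inflow bias would typically be appended to the state vector (state augmentation) so that the same update estimates and corrects it; that detail is omitted here.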
Abstract:
The UK government is mandating the use of building information modelling (BIM) in large public projects by 2016. As a result, engineering firms are faced with challenges related to embedding new technologies and associated working practices for the digital delivery of major infrastructure projects. Diffusion of innovations theory is used to investigate how digital innovations diffuse across complex firms. A contextualist approach is employed through an in-depth case study of a large, international engineering project-based firm. The analysis of the empirical data, which was collected over a four-year period of close interaction with the firm, reveals parallel paths of diffusion occurring across the firm, where both the innovation and the firm context were continually changing. The diffusion process is traced over three phases: centralization of technology management, standardization of digital working practices, and globalization of digital resources. The findings describe the diffusion of a digital innovation as multiple and partial within a complex social system during times of change and organizational uncertainty, thereby contributing to diffusion of innovations studies in construction by showing a range of activities and dynamics of a non-linear diffusion process.
Abstract:
Anaerobic threshold (AT) is usually estimated as a change-point problem by visual analysis of the cardiorespiratory response to incremental dynamic exercise. In this study, two-phase linear (TPL) models of the linear-linear and linear-quadratic type were used for the estimation of AT. The correlation coefficient between the classical and statistical approaches was 0.88, and 0.89 after outlier exclusion. The TPL models provide a simple method for estimating AT that can be easily implemented on a digital computer for automatic pattern recognition of AT.
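As a hedged illustration of the TPL idea, the sketch below fits a linear-linear change-point model by scanning candidate breakpoints and minimising the total residual sum of squares. Unlike the study's formulation it does not enforce continuity at the junction, and the data and variable names are invented.

```python
import numpy as np

def fit_two_phase_linear(x, y):
    """Scan candidate breakpoints of a linear-linear model and return the
    breakpoint minimising the summed SSE of the two segments. x must be
    sorted ascending (e.g. time or VO2 during incremental exercise); the
    breakpoint estimate plays the role of the AT."""
    best_bp, best_sse = None, np.inf
    for bp in x[2:-2]:                            # >= 2 points per segment
        left = x <= bp
        p1 = np.polyfit(x[left], y[left], 1)      # first linear phase
        p2 = np.polyfit(x[~left], y[~left], 1)    # second linear phase
        sse = (np.sum((np.polyval(p1, x[left]) - y[left]) ** 2)
               + np.sum((np.polyval(p2, x[~left]) - y[~left]) ** 2))
        if sse < best_sse:
            best_bp, best_sse = bp, sse
    return best_bp, best_sse

# synthetic gas-exchange-like data with a slope change at x = 6
x = np.linspace(1.0, 12.0, 60)
y = np.where(x < 6.0, 0.8 * x, 0.8 * 6.0 + 2.0 * (x - 6.0)) \
    + np.random.default_rng(1).normal(0.0, 0.1, x.size)
print(fit_two_phase_linear(x, y)[0])   # breakpoint estimate near 6
```

The linear-quadratic variant changes only the polynomial degree fitted to the second segment.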
Abstract:
Modal analysis is widely used in the classic theory of power system modelling. This technique is also applied to model multiconductor transmission lines and their self and mutual electrical parameters. However, the methodology has particularities and inaccuracies for specific applications which are not clearly described in the technical literature. This study provides a brief review of modal decoupling applied in transmission line digital models, and thereafter a novel, simplified computational routine is proposed to overcome the possible errors introduced by the modal decoupling in the simulation/modelling computational algorithm. © The Institution of Engineering and Technology 2013.
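A minimal sketch of the textbook modal decoupling step is given below, assuming symmetric per-unit-length Z and Y matrices; it illustrates the eigendecomposition the abstract refers to, not the authors' corrected routine, and the toy matrix values are invented.

```python
import numpy as np

def modal_decouple(Z, Y):
    """Decouple an n-conductor line into n independent modes.
    Z, Y: (n, n) per-unit-length series impedance and shunt admittance
    matrices (assumed symmetric). Returns modal propagation constants and
    the voltage/current transformation matrices."""
    evals, Tv = np.linalg.eig(Z @ Y)   # Tv diagonalises the ZY product
    gamma = np.sqrt(evals)             # modal propagation constants
    Ti = np.linalg.inv(Tv).T           # current transform (symmetric case)
    Zm = np.linalg.inv(Tv) @ Z @ Ti    # modal series impedance (~diagonal)
    return gamma, Tv, Ti, Zm

# toy 2-conductor example with symmetric Z and Y (values are illustrative)
Z = np.array([[0.2 + 1.0j, 0.1 + 0.4j], [0.1 + 0.4j, 0.2 + 1.0j]])
Y = 1e-6j * np.array([[3.0, -0.5], [-0.5, 3.0]])
gamma, Tv, Ti, Zm = modal_decouple(Z, Y)
```

For untransposed lines the eigenvectors are frequency dependent and generally complex, which is one source of the inaccuracies the study sets out to address.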
Abstract:
Teaching and learning about the daylit built environment requires from the Architecture and Urbanism undergraduate student the ability to abstract the effects of daylight distributed in the three-dimensional space being designed. Several tools and techniques can be used to facilitate the understanding of the phenomena involved, among which is computational simulation. This paper reports the digital inclusion of daylighting teaching in the Architecture and Urbanism undergraduate course at the School of Architecture, Arts and Social Communication of Bauru (FAAC) of UNESP – Sao Paulo State University, which began in 2010. The inclusion process involved the use of free software, specifically the programs DIALux and SketchUp+Radiance, both with graphical output for visualization of the illuminated scenes and for result analysis. The graphic model is converted from SketchUp to Radiance by a plugin, and a user-friendly Windows interface was developed to run the lighting simulations. The process of digital inclusion is consolidated, with wide acceptance by students, for whom computational simulation facilitates understanding of the relation between daylight and the built environment and helps the design of daylighting control elements.
Abstract:
This is the second part of a study investigating a model-based transient calibration process for diesel engines. The first part addressed the data requirements and data processing required for empirical transient emission and torque models. The current work focuses on modelling and optimization. The unexpected result of this investigation is that when trained on transient data, simple regression models perform better than more powerful methods such as neural networks or localized regression. This result has been attributed to extrapolation over data that have estimated rather than measured transient air-handling parameters. The challenges of detecting and preventing extrapolation using statistical methods that work well with steady-state data have been explained. The concept of constraining the distribution of statistical leverage relative to the distribution of the starting solution to prevent extrapolation during the optimization process has been proposed and demonstrated. Separate from the issue of extrapolation is preventing the search from being quasi-static. Second-order linear dynamic constraint models have been proposed to prevent the search from returning solutions that are feasible if each point were run at steady state, but which are unrealistic in a transient sense. Dynamic constraint models translate commanded parameters to actually achieved parameters that then feed into the transient emission and torque models. Combined model inaccuracies have been used to adjust the optimized solutions. To frame the optimization problem within reasonable dimensionality, the coefficients of commanded surfaces that approximate engine tables are adjusted during search iterations, each of which involves simulating the entire transient cycle. The resulting strategy, different from the corresponding manual calibration strategy and resulting in lower emissions and efficiency, is intended to improve rather than replace the manual calibration process.
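As a sketch of what a second-order linear dynamic constraint model can look like, the code below filters a commanded parameter trace (e.g. boost pressure) into an "achieved" trace. The natural frequency and damping values are placeholders that would be identified from measured actuator response, not values from the paper.

```python
import numpy as np

def dynamic_constraint(u, dt, wn=2.0, zeta=0.8):
    """Second-order linear constraint model
        y'' + 2*zeta*wn*y' + wn**2 * y = wn**2 * u
    integrated with explicit Euler. u is the commanded parameter over a
    transient cycle; the return value is the physically achievable trace
    that feeds the transient emission and torque models."""
    y = np.zeros(len(u))
    y[0], yd = u[0], 0.0
    for k in range(1, len(u)):
        ydd = wn**2 * (u[k-1] - y[k-1]) - 2.0 * zeta * wn * yd
        yd += dt * ydd
        y[k] = y[k-1] + dt * yd
    return y

# a commanded step passed through the constraint model
u = np.r_[np.zeros(100), np.ones(400)]   # step command
y = dynamic_constraint(u, dt=0.01)       # achieved trace lags the command
```

Candidate solutions whose commanded and achieved traces diverge strongly are exactly the quasi-statically feasible but transiently unrealistic points the constraint is meant to reject.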
Abstract:
In recent years, layered manufacturing (LM) processes have begun to progress from rapid prototyping techniques towards rapid manufacturing methods, where the objective is now to produce finished components for potential end use in a product (Caulfield et al., 2007). LM is especially promising for the fabrication of specific-need, low-volume products such as replacement parts for larger systems. This trend accentuates the need for a thorough understanding of the associated mechanical properties and the resulting behavior of parts produced by layered methods. Not only must the base material be durable, but the mechanical properties of the layered components must be sufficient to meet in-service loading and operational requirements, and be reasonably comparable to parts produced by more traditional manufacturing techniques. This chapter presents the details of a study completed to quantitatively analyze the potential of fused deposition modelling (FDM) to fully evolve into a rapid manufacturing tool. The project objective is to develop an understanding of the dependence of the mechanical properties of FDM parts on raster orientation and to assess whether these parts are capable of maintaining their integrity while under service loading. The study examines the effect of fiber orientation, i.e. the direction of the polymer beads relative to the loading direction of the part, on a variety of important mechanical properties of ABS components fabricated by FDM. Tensile, compressive, flexural, impact, and fatigue strength properties of FDM specimens are examined, evaluated, and placed in context in comparison with the properties of injection molded ABS parts.
Abstract:
Simulations of forest stand dynamics in modelling frameworks such as the Forest Vegetation Simulator (FVS) are diameter-driven, so the diameter or basal area increment model needs special attention. This dissertation critically evaluates diameter and basal area increment models and modelling approaches in the context of the Great Lakes region of the United States and Canada. A set of related studies is presented that critically evaluates the sub-model for change in individual tree basal diameter used in FVS, a dominant forestry model in the Great Lakes region. Various historical implementations of the STEMS (Stand and Tree Evaluation and Modeling System) family of diameter increment models, including the current public release of the Lake States variant of FVS (LS-FVS), were tested for the 30 most common tree species using data from the Michigan Forest Inventory and Analysis (FIA) program. The results showed that the current public release of the LS-FVS diameter increment model over-predicts 10-year diameter increment by 17% on average. The study also affirms that a simple adjustment factor as a function of a single predictor, dbh (diameter at breast height), as used in past versions, provides an inadequate correction of model prediction bias. In order to re-engineer the basal diameter increment model, the historical, conceptual and philosophical differences among the individual tree increment model families and their modelling approaches were analyzed and discussed. Two underlying conceptual approaches toward diameter or basal area increment modelling have often been used: the potential-modifier (POTMOD) and composite (COMP) approaches, exemplified by the STEMS/TWIGS and Prognosis models, respectively. It is argued that both approaches essentially use a similar base function and neither is conceptually different from a biological perspective, even though their model forms look different. No matter what modelling approach is used, the base function is the foundation of an increment model. Two base functions – gamma and Box-Lucas – were identified as candidate base functions for forestry applications. A comparative analysis of empirical fits showed that the quality of fit is essentially similar, and both are sufficiently detailed and flexible for forestry applications. The choice of base function for modelling diameter or basal area increment therefore comes down to personal preference; however, the gamma base function may be preferred over the Box-Lucas, as it fits periodic increment data in both linear and nonlinear composite model forms. Finally, the utility of site index as a predictor variable is criticized: it has been widely used in models for complex, mixed-species forest stands even though it is not well suited for that purpose. An alternative to site index in an increment model was explored, comparing site index against a combination of climate variables and Forest Ecosystem Classification (FEC) ecosites, using data from the Province of Ontario, Canada. The results showed that a combination of climate and FEC ecosite variables can replace site index in the diameter increment model.
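As an illustration of the gamma base function discussed above, the sketch below fits it to synthetic diameter increment data with scipy; the parameterisation, parameter values and starting guesses are assumptions, not estimates from the Michigan FIA data.

```python
import numpy as np
from scipy.optimize import curve_fit

def gamma_base(dbh, b1, b2, b3):
    """Gamma-type base function: increment rises with tree size, peaks,
    then declines, the unimodal shape expected biologically."""
    return b1 * dbh**b2 * np.exp(-b3 * dbh)

# synthetic illustration: 10-year diameter increment vs dbh (cm)
rng = np.random.default_rng(42)
dbh = rng.uniform(5.0, 60.0, 300)
incr = gamma_base(dbh, 0.8, 0.9, 0.05) + rng.normal(0.0, 0.1, 300)
params, _ = curve_fit(gamma_base, dbh, incr, p0=(1.0, 1.0, 0.05))
print(params)   # recovers roughly (0.8, 0.9, 0.05)
```

Because ln(increment) = ln b1 + b2 ln(dbh) - b3 dbh is linear in the parameters (for positive increments), the same function can also be fitted as a linear model, which is the flexibility credited to it above.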
Abstract:
The combustion strategy in a diesel engine has an impact on the emissions, fuel consumption and exhaust temperatures. The PM mass retained in the CPF is a function of NO2 and PM concentrations in addition to the exhaust temperatures and flow rates. Thus the engine combustion strategy affects exhaust characteristics, which in turn affect CPF operation and the PM mass retained and oxidized. In this report, a process has been developed to simulate the relationship between engine calibration, performance, and HC and PM oxidation in the DOC and CPF respectively. Fuel Rail Pressure (FRP) and Start of Injection (SOI) sweeps were carried out at five steady-state engine operating conditions. This data, along with data from a previously conducted surrogate HD-FTP cycle [1], was used to create a transfer function model which estimates the engine-out emissions, flow rates and temperatures for varied FRP and SOI over a transient cycle. Four different calibrations (test cases) were considered in this study and simulated through the transfer function model and the DOC model [1, 2]. The DOC outputs were then input into a model which simulates NO2-assisted and thermal PM oxidation inside a CPF. Finally, the results were analyzed to determine how engine calibration impacts engine fuel consumption, HC oxidation in the DOC and PM oxidation in the CPF. Active regeneration for the various test cases was also simulated and a comparative analysis of the fuel penalties involved was carried out.
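A heavily simplified sketch of the retained PM mass balance in the CPF is given below. The Arrhenius constants are placeholders, and the model omits the filtration efficiency, cake/wall partitioning and pressure-drop feedback that a full CPF model [2] would include.

```python
import numpy as np

GAS_R = 8.314  # J/(mol K)

def cpf_pm_mass(t, m_in, temp, c_no2, m0=0.0):
    """Euler integration of dM/dt = m_in - (k_th + k_no2) * M, i.e. PM
    accumulation minus thermal and NO2-assisted oxidation.
    t: time grid (s); m_in: PM mass flow into the filter (g/s);
    temp: filter temperature (K); c_no2: NO2 concentration (mol/m^3).
    All kinetic constants below are assumed, not calibrated values."""
    A_TH, E_TH = 1.0e7, 150e3     # thermal (O2) oxidation, placeholder
    A_NO2, E_NO2 = 1.0e3, 80e3    # NO2-assisted oxidation, placeholder
    M = np.empty_like(t, dtype=float)
    M[0] = m0
    for k in range(1, len(t)):
        dt = t[k] - t[k-1]
        k_th = A_TH * np.exp(-E_TH / (GAS_R * temp[k-1]))
        k_no2 = A_NO2 * c_no2[k-1] * np.exp(-E_NO2 / (GAS_R * temp[k-1]))
        M[k] = M[k-1] + dt * (m_in[k-1] - (k_th + k_no2) * M[k-1])
    return M

t = np.linspace(0.0, 1200.0, 1201)            # 20 min, 1 s steps
M = cpf_pm_mass(t, m_in=np.full_like(t, 2e-4),
                temp=np.full_like(t, 600.0),  # K
                c_no2=np.full_like(t, 5e-3))  # mol/m^3
print(M[-1])                                  # retained PM mass (g) at end
```

Running such a balance over each FRP/SOI test case is what links the calibration choice to the PM mass retained and to the fuel penalty of active regeneration.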
Abstract:
The combination of scaled analogue experiments, material mechanics, X-ray computed tomography (XRCT) and digital volume correlation (DVC) techniques is a powerful new tool not only to examine the three-dimensional structure and kinematic evolution of complex deformation structures in scaled analogue experiments, but also to fully quantify their spatial strain distribution and complete strain history. Digital image correlation (DIC) is an important advance in quantitative physical modelling and helps in understanding non-linear deformation processes. Optical, non-intrusive DIC techniques enable the quantification of localised and distributed deformation in analogue experiments based either on images taken through transparent sidewalls (2D DIC) or on surface views (3D DIC). XRCT analysis permits the non-destructive visualisation of the internal structure and kinematic evolution of scaled analogue experiments simulating the tectonic evolution of complex geological structures. Combining XRCT sectional image data of analogue experiments with 2D DIC only allows quantification of 2D displacement and strain components in the section direction, which completely omits the potential of CT experiments for full 3D strain analysis of complex, non-cylindrical deformation structures. In this study, we apply DVC techniques to XRCT scan data of “solid” analogue experiments to fully quantify the internal displacement and strain in three dimensions over time. Our first results indicate that the application of DVC techniques to XRCT volume data can successfully quantify the 3D spatial and temporal strain patterns inside analogue experiments. We demonstrate the potential of combining DVC techniques and XRCT volume imaging for 3D strain analysis of a contractional experiment simulating the development of a non-cylindrical pop-up structure. Furthermore, we discuss various options for optimisation of granular materials, pattern generation, and data acquisition for increased resolution and accuracy of the strain results. Three-dimensional strain analysis of analogue models is of particular interest for geological and seismic interpretations of complex, non-cylindrical geological structures. The volume strain data enable the analysis of the large-scale and small-scale strain history of geological structures.
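The core of a DVC pass can be sketched as FFT-based cross-correlation of corresponding subvolumes from two XRCT scans. The example below recovers an integer-voxel displacement and is only a toy version: real DVC adds local interrogation windows and subvoxel interpolation.

```python
import numpy as np

def dvc_shift(vol_ref, vol_def):
    """Integer-voxel 3D displacement of vol_def relative to vol_ref,
    estimated from the peak of their FFT cross-correlation."""
    F_ref = np.fft.fftn(vol_ref - vol_ref.mean())
    F_def = np.fft.fftn(vol_def - vol_def.mean())
    cc = np.fft.ifftn(np.conj(F_ref) * F_def).real
    peak = np.unravel_index(np.argmax(cc), cc.shape)
    # map FFT indices to signed shifts
    return tuple(p if p <= n // 2 else p - n for p, n in zip(peak, cc.shape))

# toy check: a known shift of a random "granular" volume is recovered
rng = np.random.default_rng(0)
ref = rng.random((32, 32, 32))
deformed = np.roll(ref, shift=(3, -2, 5), axis=(0, 1, 2))
print(dvc_shift(ref, deformed))   # -> (3, -2, 5)
```

Applying this window by window over the volume and differentiating the resulting displacement field yields the 3D strain tensors discussed above.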
Abstract:
This paper reviews the methods, benefits and challenges associated with the adoption and translation of computational fluid dynamics (CFD) modelling within cardiovascular medicine. CFD, a specialist area of mathematics and a branch of fluid mechanics, is used routinely in a diverse range of safety-critical engineering systems and is increasingly being applied to the cardiovascular system. By facilitating rapid, economical, low-risk prototyping, CFD modelling has already revolutionised research and development of devices such as stents, valve prostheses, and ventricular assist devices. Combined with cardiovascular imaging, CFD simulation enables detailed characterisation of complex physiological pressure and flow fields and the computation of metrics which cannot be directly measured, for example, wall shear stress. CFD models are now being translated into clinical tools for physicians to use across the spectrum of coronary, valvular, congenital, myocardial and peripheral vascular diseases. CFD modelling is apposite for minimally-invasive patient assessment. Patient-specific (incorporating data unique to the individual) and multi-scale (combining models of different length- and time-scales) modelling enables individualised risk prediction and virtual treatment planning. This represents a significant departure from traditional dependence upon registry-based, population-averaged data. Model integration is progressively moving towards 'digital patient' or 'virtual physiological human' representations. When combined with population-scale numerical models, these models have the potential to reduce the cost, time and risk associated with clinical trials. The adoption of CFD modelling signals a new era in cardiovascular medicine; while potentially highly beneficial, it raises methodological, regulatory, education- and service-related challenges that a number of academic and commercial groups are now addressing.
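As a concrete example of a metric that CFD computes but that also has a simple analytic limit, the sketch below evaluates wall shear stress for steady Poiseuille flow in a straight cylindrical vessel. Real vessels are neither straight nor steady, so this is only a sanity-check formula, and the blood viscosity is an assumed typical value.

```python
import math

def poiseuille_wss(flow_ml_per_s, radius_mm, mu=0.0035):
    """Wall shear stress (Pa) of steady laminar flow in a cylinder:
    tau_w = 4 * mu * Q / (pi * R**3).
    mu ~ 0.0035 Pa*s is a commonly assumed blood viscosity."""
    q = flow_ml_per_s * 1e-6   # ml/s -> m^3/s
    r = radius_mm * 1e-3       # mm  -> m
    return 4.0 * mu * q / (math.pi * r**3)

# e.g. ~1 ml/s through a vessel of 1.5 mm radius
print(round(poiseuille_wss(1.0, 1.5), 2))   # ~1.32 Pa, a physiological value
```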
Abstract:
These data are provided to allow users to reproduce the results of the open-source tool 'automated Accumulation Threshold computation and RIparian Corridor delineation (ATRIC)'.
Abstract:
Article on railway communications. Along with the increase in operating frequencies in advanced radio communication systems used inside tunnels, the location of the break point moves further and further away from the transmitter. This means that the near region lengthens considerably and may even occupy the whole propagation cell or the entire length of some short tunnels. This study first analyses the propagation loss resulting from the free-space mechanism and the multi-mode waveguide mechanism in the near region of circular tunnels. Then, by jointly employing propagation theory and three-dimensional solid geometry, a general analytical model of the dividing point between the two propagation mechanisms is presented for the first time. The model is validated by a wide range of measurement campaigns in different tunnels at different frequencies. Finally, simplified formulae for the dividing point in some application situations are discussed. The results of this study can help in grasping the essence of the propagation mechanism inside tunnels.
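A toy numerical illustration of the dividing-point idea: near the transmitter free-space spreading dominates, far away the cumulative attenuation of the dominant waveguide mode dominates, and the break point lies where the two loss curves cross. The waveguide attenuation rate is treated as a known input here; the paper derives it analytically from tunnel geometry, which this sketch does not attempt.

```python
import numpy as np

def dividing_point(freq_hz, alpha_wg_db_per_m, d_max=2000.0):
    """Distance (m) at which free-space path loss equals the cumulative
    attenuation of the dominant waveguide mode. alpha_wg_db_per_m is
    treated as a known input; a crossing within d_max is assumed."""
    lam = 3e8 / freq_hz                                # wavelength (m)
    d = np.linspace(1.0, d_max, 20000)
    loss_fs = 20.0 * np.log10(4.0 * np.pi * d / lam)   # free-space loss (dB)
    loss_wg = alpha_wg_db_per_m * d                    # waveguide loss (dB)
    return d[np.argmin(np.abs(loss_fs - loss_wg))]

# e.g. 2.4 GHz with a 0.1 dB/m dominant-mode attenuation rate
print(dividing_point(2.4e9, 0.1))   # ~1000 m
```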
Abstract:
A possible approach to the synchronization of chaotic circuits is reported. It is based on an Optically Programmable Logic Cell, so its output is digital; its applicability to cryptography in Optical Communications follows directly from this property. The model presented here is based on a computer simulation.
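The optical logic cell model itself is not reproduced here, so the sketch below instead shows the generic principle of chaotic synchronization (Pecora-Carroll drive-response on a Lorenz system) with the transmitted signal thresholded to a digital stream. All parameter values are the textbook ones, not the paper's.

```python
import numpy as np

# Classic Lorenz parameters; the optical cell model itself is not reproduced.
SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0
DT, STEPS = 1e-3, 40000

tx = np.array([1.0, 1.0, 1.0])   # transmitter state (x, y, z)
ry, rz = 5.0, -5.0               # receiver starts far from the transmitter
bits = []                        # digital stream, cf. the cell's binary output

for _ in range(STEPS):
    x = tx[0]                            # transmitted chaotic signal
    bits.append(1 if x > 0.0 else 0)     # threshold to a digital output
    # receiver: (y, z) subsystem driven by the received x (Pecora-Carroll)
    ry, rz = (ry + DT * (x * (RHO - rz) - ry),
              rz + DT * (x * ry - BETA * rz))
    # transmitter: full Lorenz system, explicit Euler
    dx = SIGMA * (tx[1] - tx[0])
    dy = tx[0] * (RHO - tx[2]) - tx[1]
    dz = tx[0] * tx[1] - BETA * tx[2]
    tx = tx + DT * np.array([dx, dy, dz])

# after the transient the receiver has locked onto the transmitter's orbit
print(abs(ry - tx[1]), abs(rz - tx[2]))  # both close to 0: synchronized
```

Because the receiver regenerates the transmitter's chaotic orbit from the drive signal alone, a message added on top of the carrier can be recovered by subtraction, which is the property that makes such schemes attractive for cryptography.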