69 results for Digital elevation model (DEM)
Abstract:
Objective: This paper presents a detailed study of fractal-based methods for texture characterization of mammographic mass lesions and architectural distortion. The purpose of this study is to explore the use of fractal and lacunarity analysis for the characterization and classification of both tumor lesions and normal breast parenchyma in mammography. Materials and methods: We conducted comparative evaluations of five popular fractal dimension estimation methods for characterizing the texture of mass lesions and architectural distortion, and applied the concept of lacunarity to describe the spatial distribution of pixel intensities in mammographic images. These methods were tested with a set of 57 breast masses and 60 normal breast parenchyma samples (dataset 1), and with another set of 19 architectural distortions and 41 normal breast parenchyma samples (dataset 2). Support vector machines (SVMs) were used as the pattern classification method for tumor classification. Results: Experimental results showed that the fractal dimension of regions of interest (ROIs) depicting mass lesions and architectural distortion was statistically significantly lower than that of normal breast parenchyma for all five methods. Receiver operating characteristic (ROC) analysis showed that, of the five methods, the fractional Brownian motion (FBM) method generated the highest area under the ROC curve for both datasets (Az = 0.839 for dataset 1 and 0.828 for dataset 2). Lacunarity analysis showed that ROIs depicting mass lesions and architectural distortion had higher lacunarities than ROIs depicting normal breast parenchyma. The combination of the FBM fractal dimension and lacunarity yielded a higher Az value (0.903 and 0.875, respectively) than either feature alone for both datasets. Applying the SVM further improved the performance of the fractal-based features in differentiating tumor lesions from normal breast parenchyma, generating higher Az values. Conclusion: The FBM texture model is the most appropriate model for characterizing mammographic images, because its self-affinity assumption is a better approximation of the data. Lacunarity is an effective counterpart to the fractal dimension for texture feature extraction in mammographic images. The classification results obtained in this work suggest that the SVM is an effective method with great potential for classification in mammographic image analysis.
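As a rough illustration of the two texture ideas in the abstract above, the sketch below computes a simple box-counting fractal dimension and a gliding-box lacunarity on a small binary image. This is a generic textbook-style sketch, not any of the paper's five estimators or its exact lacunarity definition; the image, box sizes, and function names are all assumptions made for the example.

```python
# Minimal sketch, assuming a square binary image given as a 2D list of 0/1.
import math

def box_counting_dimension(img, sizes=(1, 2, 4, 8)):
    """Estimate fractal dimension as the slope of log N(s) vs log(1/s)."""
    n = len(img)
    xs, ys = [], []
    for s in sizes:
        count = 0
        for i in range(0, n, s):
            for j in range(0, n, s):
                # count boxes of side s that contain at least one set pixel
                if any(img[a][b] for a in range(i, i + s)
                                 for b in range(j, j + s)):
                    count += 1
        xs.append(math.log(1.0 / s))
        ys.append(math.log(count))
    # least-squares slope of the log-log plot
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

def lacunarity(img, r):
    """Gliding-box lacunarity: second moment over squared first moment."""
    n = len(img)
    masses = []
    for i in range(n - r + 1):
        for j in range(n - r + 1):
            masses.append(sum(img[a][b] for a in range(i, i + r)
                                        for b in range(j, j + r)))
    m1 = sum(masses) / len(masses)
    m2 = sum(m * m for m in masses) / len(masses)
    return m2 / (m1 * m1)

# A filled 8x8 square has dimension ~2 and lacunarity 1 at any box size.
solid = [[1] * 8 for _ in range(8)]
print(round(box_counting_dimension(solid), 2))  # -> 2.0
print(lacunarity(solid, 2))                     # -> 1.0
```

On a solid region the dimension is 2 and the lacunarity is 1; sparser, more clustered patterns push the dimension down and the lacunarity up, which is the direction of the differences the abstract reports between lesions and normal parenchyma.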
Abstract:
People's interaction with the indoor environment plays a significant role in the energy consumption of buildings. Mismatched and delayed occupant feedback on the indoor environment to the building energy management system is a major barrier to efficient energy management of buildings. There is an increasing trend towards applying digital technology to support control systems in order to achieve energy efficiency in buildings. This article introduces a holistic, integrated building energy management model called 'smart sensor, optimum decision and intelligent control' (SMODIC). The model takes occupants' responses to the indoor environment into account in the control system. A model of optimal decision-making based on multiple criteria of the indoor environment has been integrated into the whole system. The SMODIC model combines information technology and people-centric concepts to achieve energy savings in buildings.
Abstract:
In this paper a new nonlinear digital baseband predistorter design is introduced based on direct learning, together with a new Wiener system modeling approach for high power amplifiers (HPAs) based on the B-spline neural network. The contribution is twofold. Firstly, by assuming that the nonlinearity in the HPA depends mainly on the input signal amplitude, the complex-valued nonlinear static function is represented by two real-valued B-spline neural networks, one for the amplitude distortion and another for the phase shift. The Gauss-Newton algorithm is applied for parameter estimation, in which the De Boor recursion is employed to calculate both the B-spline curve and its first-order derivatives. Secondly, we derive the predistorter algorithm, calculating the inverse of the complex-valued nonlinear static function of the B-spline neural network based Wiener model. The inverses of the amplitude and phase-shift distortions are then computed and compensated using the identified phase-shift model. Numerical examples are employed to demonstrate the efficacy of the proposed approaches.
Abstract:
In this paper we introduce a new Wiener system modeling approach for high power amplifiers with memory in communication systems, using observational input/output data. By assuming that the nonlinearity in the Wiener model depends mainly on the input signal amplitude, the complex-valued nonlinear static function is represented by two real-valued B-spline curves, one for the amplitude distortion and another for the phase shift. The Gauss-Newton algorithm is applied for parameter estimation, incorporating the De Boor algorithm to compute both the B-spline curve and its first-order derivatives recursively. An illustrative example demonstrates the efficacy of the proposed approach.
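The De Boor recursion mentioned in these B-spline Wiener abstracts evaluates a spline curve from its control points without forming the basis functions explicitly. Below is a minimal sketch of the standard textbook algorithm, not the authors' implementation; the knot vector and control points in the demo are assumptions chosen so the curve has a known closed form.

```python
def de_boor(k, x, t, c, p):
    """Evaluate a degree-p B-spline with knots t and control points c at x,
    where k is the knot span index satisfying t[k] <= x < t[k+1]."""
    # initialize with the p+1 control points that influence this span
    d = [c[j + k - p] for j in range(p + 1)]
    for r in range(1, p + 1):
        for j in range(p, r - 1, -1):
            # convex combination of neighbouring intermediate points
            alpha = (x - t[j + k - p]) / (t[j + 1 + k - r] - t[j + k - p])
            d[j] = (1.0 - alpha) * d[j - 1] + alpha * d[j]
    return d[p]

# Clamped quadratic example: these knots and control points make the curve
# the Bernstein polynomial 2*x*(1-x), which equals 0.5 at x = 0.5.
t = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
c = [0.0, 1.0, 0.0]
print(de_boor(2, 0.5, t, c, 2))  # -> 0.5
```

For the first-order derivative needed in a Gauss-Newton step, the standard result is that the derivative of a degree-p spline is a degree-(p-1) spline whose control points are scaled differences of the original ones, so the same recursion applies.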
Abstract:
What happens when digital coordination practices are introduced into the institutionalized setting of an engineering project? This question is addressed through an interpretive study that examines how a shared digital model comes to be used in the late design stages of a major station refurbishment project. The paper contributes by mobilizing the idea of ‘hybrid practices’ to understand the diverse patterns of activity that emerge to manage digital coordination of design. It articulates how engineering and architecture professions develop different relationships with the shared model; the design team negotiates paper-based practices across organizational boundaries; and diverse practitioners probe the potential and limitations of the digital infrastructure. While different software packages and tools have become linked together into an integrated digital infrastructure, these emerging hybrid practices contrast with the interactions anticipated in practice and policy guidance, and present new opportunities and challenges for managing project delivery. The study has implications for researchers working in the growing field of empirical work on engineering project organizations, as it shows the importance of considering, and suggests new ways to theorise, the introduction of digital coordination practices into these institutionalized settings.
Abstract:
This article is a case study of how English teachers in England have coped with the paradigm shift from print to digital literacy. It reviews a large-scale national initiative that was intended to upskill all teachers, considers its weak impact, and explores the author's involvement in evaluating the project's direct value to English teachers. This evaluation revealed that best practice in using ICT in English teaching was developing unevenly. The article then reports on a recent small-scale research project that investigated how very good teachers have successfully adapted ICT into their teaching. It focuses on how the English teachers studied in the project are developing a powerful new pedagogy situated in the life worlds of their students, and suggests that this model may benefit many teachers. The issues this article reports on have resonance in all English-speaking countries. The article is also a personal story of the author's close involvement with ICT and English over 20 years, and provides evidence for his conviction that digital technologies will eventually transform English teaching.
Abstract:
This contribution introduces a new digital predistorter to compensate for the serious distortions caused by memory high power amplifiers (HPAs) that exhibit output saturation characteristics. The proposed design is based on direct learning using a data-driven B-spline Wiener system modeling approach. The nonlinear HPA with memory is first identified based on the B-spline neural network model using the Gauss-Newton algorithm, which incorporates the efficient De Boor algorithm with both B-spline curve and first-derivative recursions. The estimated Wiener HPA model is then used to design the Hammerstein predistorter. In particular, the inverse of the amplitude distortion of the HPA's static nonlinearity can be calculated effectively using the Newton-Raphson formula based on the inverse of the De Boor algorithm. A major advantage of this approach is that both the Wiener HPA identification and the Hammerstein predistorter inverse can be achieved very efficiently and accurately. Simulation results are presented to demonstrate the effectiveness of this novel digital predistorter design.
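The Newton-Raphson inversion step mentioned in this abstract can be illustrated on a stand-in nonlinearity. The compressive AM/AM curve below is an assumed Saleh-style example, not the paper's identified B-spline model; the point is only how the inverse of a monotonic amplitude distortion is computed iteratively.

```python
# Hedged sketch: inverting a monotonic amplitude distortion by Newton-Raphson.
# am_am() is an illustrative stand-in, not the identified HPA nonlinearity.
def am_am(r, a=2.0, b=1.0):
    """Illustrative compressive, saturating AM/AM curve: a*r / (1 + b*r^2)."""
    return a * r / (1.0 + b * r * r)

def am_am_deriv(r, a=2.0, b=1.0):
    """Analytic derivative of am_am, used in the Newton step."""
    return a * (1.0 - b * r * r) / (1.0 + b * r * r) ** 2

def invert_am_am(target, r0=0.1, tol=1e-10, max_iter=50):
    """Solve am_am(r) = target for r by Newton-Raphson iteration."""
    r = r0
    for _ in range(max_iter):
        step = (am_am(r) - target) / am_am_deriv(r)
        r -= step
        if abs(step) < tol:
            break
    return r

r_pre = invert_am_am(0.8)      # predistorted amplitude to transmit
print(round(am_am(r_pre), 6))  # -> 0.8 (the distortion is compensated)
```

A Hammerstein predistorter places this inverse static map before the amplifier, so the cascade of inverse and AM/AM curve is approximately linear over the operating range.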
Abstract:
How can organizations use digital infrastructure to realise physical outcomes? The design and construction of London Heathrow Terminal 5 is analysed to build new theoretical understanding of visualisation and materialisation practices in the transition from digital design to physical realisation. In the project studied, an integrated software solution is introduced as an infrastructure for delivery. The analyses articulate the work done to maintain this digital infrastructure and also to move designs beyond the closed world of the computer to a physical reality. In changing medium, engineers use heterogeneous trials to interrogate and address the limitations of an integrated digital model. The paper explains why such trials, which involve the reconciliation of digital and physical data through parallel and iterative forms of work, provide a robust practice for realising goals that have physical outcomes. It argues that this practice is temporally different from, and at times in conflict with, building a comprehensive dataset within the digital medium. The paper concludes by discussing the implications for organizations that use digital infrastructures in seeking to accomplish goals in digital and physical media.
Abstract:
Microcontroller-based peak current mode control of a buck converter is investigated. The new solution uses a discrete-time controller with digital slope compensation, implemented on a single-chip microcontroller to achieve desirable cycle-by-cycle peak current limiting. The digital controller is implemented as a two-pole, two-zero linear difference equation designed using a continuous-time model of the buck converter and a discrete-time transform. Subharmonic oscillations are removed with digital slope compensation using a discrete staircase ramp. A 16 W hardware implementation directly compares analog and digital control. Frequency response measurements show that the crossover frequency and expected phase margin of the digital control system match those of its analog counterpart.
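The two digital pieces this abstract describes, a two-pole, two-zero difference equation and a discrete staircase ramp for slope compensation, can be sketched as follows. The coefficients and ramp parameters are illustrative placeholders, not the paper's 16 W design.

```python
# Hedged sketch with made-up coefficients.
class TwoPoleTwoZero:
    """u[n] = b0*e[n] + b1*e[n-1] + b2*e[n-2] - a1*u[n-1] - a2*u[n-2]"""
    def __init__(self, b0, b1, b2, a1, a2):
        self.b = (b0, b1, b2)
        self.a = (a1, a2)
        self.e1 = self.e2 = self.u1 = self.u2 = 0.0  # delayed samples

    def update(self, e):
        b0, b1, b2 = self.b
        a1, a2 = self.a
        u = b0 * e + b1 * self.e1 + b2 * self.e2 - a1 * self.u1 - a2 * self.u2
        self.e2, self.e1 = self.e1, e   # shift the error history
        self.u2, self.u1 = self.u1, u   # shift the output history
        return u

def staircase_ramp(i_peak_cmd, step, n_steps, k):
    """Slope-compensated peak-current threshold at sub-interval k:
    a discrete ramp is subtracted from the commanded peak to damp
    subharmonic oscillation."""
    return i_peak_cmd - step * (k % n_steps)

# One control-law iteration on a 0.5 A error (a1 = -1.0 embeds an integrator).
ctrl = TwoPoleTwoZero(b0=1.2, b1=-2.2, b2=1.0, a1=-1.0, a2=0.0)
print(round(ctrl.update(0.5), 3))  # -> 0.6
```

In a real converter the `update()` output would set the peak-current command each switching period, and `staircase_ramp()` would be evaluated by a DAC or comparator threshold sequence within the period.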
Abstract:
This work proposes a method to objectively determine the most suitable analogue redesign method for forward-type converters under digital voltage mode control. Particular emphasis is placed on determining the method that yields the highest phase margin at the switching and crossover frequencies chosen by the designer. It is shown that at high crossover frequencies relative to the switching frequency, controllers designed using backward integration have the largest phase margin, whereas at low crossover frequencies relative to the switching frequency, controllers designed using bilinear integration have the largest phase margin. An accurate model of the power stage is used for simulation, and experimental results from a buck converter are collected. The performance of the digital controllers is compared to that of the equivalent analogue controller in both simulation and experiment, with excellent correlation between the two. This work allows designers to confidently choose the analogue redesign method that yields the greater phase margin for their application.
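Part of the phase behaviour can be illustrated on the integrator term alone: backward-Euler discretization of 1/s contributes a phase lead of ωT/2, whereas the bilinear transform maps the integrator's -90° phase exactly. The sketch below checks this numerically; the frequencies are illustrative, and the paper's full comparison of course involves complete controllers and the converter model, not just this term.

```python
# Hedged sketch: phase of a discrete-time integrator under two mappings.
import cmath
import math

def integrator_phase(fc, fs, method):
    """Phase (degrees) of the discretized integrator at frequency fc,
    for sample rate fs, using backward-Euler or bilinear integration."""
    T = 1.0 / fs
    z = cmath.exp(1j * 2 * math.pi * fc * T)  # evaluate on the unit circle
    if method == "backward":
        h = T * z / (z - 1)                   # backward Euler: T z / (z - 1)
    else:
        h = (T / 2) * (z + 1) / (z - 1)       # bilinear: (T/2)(z + 1)/(z - 1)
    return math.degrees(cmath.phase(h))

fs = 100e3
for fc in (5e3, 20e3):  # low and high crossover relative to fs
    pb = integrator_phase(fc, fs, "backward")
    pt = integrator_phase(fc, fs, "bilinear")
    print(f"fc/fs={fc/fs:.2f}: backward {pb:.1f} deg, bilinear {pt:.1f} deg")
```

At fc/fs = 0.20 the backward integrator's phase is -54° against the bilinear's exact -90°, a 36° difference; at fc/fs = 0.05 the gap shrinks to 9°, so the choice of mapping matters most when the crossover is pushed close to the switching frequency.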
Abstract:
We propose a new sparse model construction method aimed at maximizing a model's generalisation capability for a large class of linear-in-the-parameters models. The coordinate descent optimization algorithm is employed with a modified l1-penalized least squares cost function in order to estimate a single parameter and its regularization parameter simultaneously, based on the leave-one-out mean square error (LOOMSE). Our original contribution is to derive a closed form of the optimal LOOMSE regularization parameter for a single-term model, for which we show that the LOOMSE can be computed analytically without actually splitting the data set, leading to a very simple parameter estimation method. We then integrate the new results within the coordinate descent optimization algorithm to update model parameters one at a time for linear-in-the-parameters models. Consequently a fully automated procedure is achieved without resorting to any other validation data set for iterative model evaluation. Illustrative examples are included to demonstrate the effectiveness of the new approaches.
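The enabling fact, that leave-one-out residuals of linear-in-the-parameters least squares have a closed form, can be checked directly. The sketch below uses the classical textbook identity e_loo_i = e_i / (1 - h_ii) for a single-term model and compares it against actually refitting with each point held out; the data are invented, and this is the generic identity rather than the paper's regularized derivation.

```python
# Hedged sketch: closed-form LOO residuals for y = theta * x (single term).
def loo_errors_closed_form(x, y):
    """Fit once, then get all LOO residuals from the leverages h_ii."""
    sxx = sum(v * v for v in x)
    theta = sum(a * b for a, b in zip(x, y)) / sxx
    loo = []
    for xi, yi in zip(x, y):
        e = yi - theta * xi          # ordinary residual
        h = xi * xi / sxx            # leverage of point i
        loo.append(e / (1.0 - h))    # closed-form LOO residual
    return loo

def loo_errors_brute_force(x, y):
    """Refit with each point actually held out (the slow way)."""
    loo = []
    for i in range(len(x)):
        xs = [v for j, v in enumerate(x) if j != i]
        ys = [v for j, v in enumerate(y) if j != i]
        theta = sum(a * b for a, b in zip(xs, ys)) / sum(v * v for v in xs)
        loo.append(y[i] - theta * x[i])
    return loo

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [1.1, 1.9, 3.2, 3.8, 5.1]
closed = loo_errors_closed_form(x, y)
brute = loo_errors_brute_force(x, y)
print(max(abs(a - b) for a, b in zip(closed, brute)) < 1e-9)  # -> True
loomse = sum(e * e for e in closed) / len(closed)  # no data split needed
```

Because the closed form needs only one fit, the LOOMSE can be evaluated inside every coordinate-descent update at negligible cost, which is what makes a fully automated procedure without a separate validation set feasible.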
Abstract:
As the calibration and evaluation of flood inundation models are a prerequisite for their successful application, there is a clear need to ensure that the performance measures that quantify how well models match the available observations are fit for purpose. This paper evaluates the binary pattern performance measures that are frequently used to compare flood inundation models with observations of flood extent. This evaluation considers whether these measures are able to calibrate and evaluate model predictions in a credible and consistent way, i.e. identifying the underlying model behaviour for a number of different purposes, such as comparing models of floods of different magnitudes or on different catchments. Through theoretical examples, it is shown that the binary pattern measures are not consistent for floods of different sizes, such that for the same vertical error in water level, a model of a flood of large magnitude appears to perform better than a model of a smaller magnitude flood. Further, the commonly used Critical Success Index (usually referred to as F⟨2⟩) is biased in favour of overprediction of the flood extent, and is also biased towards correctly predicting areas of the domain with smaller topographic gradients. Consequently, it is recommended that future studies consider carefully the implications of reporting conclusions using these performance measures. Additionally, future research should consider whether a more robust and consistent analysis could be achieved by using elevation comparison methods instead.
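The overprediction bias of the Critical Success Index is easy to see in a toy wet/dry comparison. The sketch below uses the standard definition F = A/(A + B + C), with A the hits, B the false alarms and C the misses; the grids are invented for illustration.

```python
# Hedged sketch: Critical Success Index on paired wet/dry cell flags.
def csi(obs, pred):
    """F = A / (A + B + C) over an observed and a predicted wet/dry map."""
    A = sum(o and p for o, p in zip(obs, pred))        # hit: wet in both
    B = sum((not o) and p for o, p in zip(obs, pred))  # false alarm
    C = sum(o and (not p) for o, p in zip(obs, pred))  # miss
    return A / (A + B + C)

# 120-cell domain with 100 observed wet cells.
obs = [True] * 100 + [False] * 20
over = [True] * 120                  # overpredicts extent by 20 cells
under = [True] * 80 + [False] * 40   # underpredicts extent by 20 cells
print(round(csi(obs, over), 3))   # -> 0.833
print(round(csi(obs, under), 3))  # -> 0.8
```

An overprediction of 20 cells scores 0.833 while an equal underprediction scores 0.800, even though both models are wrong in the same number of cells: a false alarm only inflates the denominator, whereas a miss simultaneously removes a hit and adds to the denominator.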
Abstract:
Analyses of simulations of the last glacial maximum (LGM) made with 17 atmospheric general circulation models (AGCMs) participating in the Paleoclimate Modelling Intercomparison Project, and a high-resolution (T106) version of one of the models (CCSR1), show that changes in the elevation of tropical snowlines (as estimated by the depression of the maximum altitude of the 0 °C isotherm) are primarily controlled by changes in sea-surface temperatures (SSTs). The correlation between the two variables, averaged for the tropics as a whole, is 95%, and remains >80% even at a regional scale. The reduction of tropical SSTs at the LGM results in a drier atmosphere and hence steeper lapse rates. Changes in atmospheric circulation patterns, particularly the weakening of the Asian monsoon system and related atmospheric humidity changes, amplify the reduction in snowline elevation in the northern tropics. Colder conditions over the tropical oceans combined with a weakened Asian monsoon could produce snowline lowering of up to 1000 m in certain regions, comparable to the changes shown by observations. Nevertheless, such large changes are not typical of all regions of the tropics. Analysis of the higher resolution CCSR1 simulation shows that differences between the free atmospheric and along-slope lapse rate can be large, and may provide an additional factor to explain regional variations in observed snowline changes.
Abstract:
Recent research into flood modelling has primarily concentrated on the simulation of inundation flow without considering the influence of channel morphology. River channels are often represented by a simplified geometry that is implicitly assumed to remain unchanged during flood simulations. However, field evidence demonstrates that floods can mobilise the boundary sediments and cause significant morphological change. Despite this, the effect of channel morphology on model results has been largely unexplored. To address this issue, the impact of channel cross-section geometry and channel long-profile variability on flood dynamics is examined using an ensemble of a 1D-2D hydraulic model (LISFLOOD-FP) of the 1:2102 year recurrence interval floods in Cockermouth, UK, within an uncertainty framework. A series of hypothetical scenarios of channel morphology were constructed based on a simple velocity-based model of critical entrainment. A Monte Carlo simulation framework was used to quantify the effects of channel morphology together with variations in the channel and floodplain roughness coefficients, grain size characteristics, and critical shear stress on measures of flood inundation. The results showed that the bed elevation modifications generated by the simplistic equations provided a good approximation of the observed spatial patterns of erosion, despite overestimating erosion depths. Uncertainty in channel long-profile variability affected only the local flood dynamics and did not significantly affect the friction sensitivity or the flood inundation mapping. The results imply that hydraulic models generally do not need to account for within-event morphodynamic changes of the type and magnitude modelled, as these have a negligible impact that is smaller than other uncertainties, e.g. boundary conditions. Instead, morphodynamic change needs to accumulate over a series of events before it becomes large enough to alter the hydrodynamics of floods in supply-limited gravel-bed rivers like the one used in this research.
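A threshold-of-motion rule of the kind such entrainment scenarios are built on can be sketched with the textbook Shields criterion. The parameter values below (Shields number, grain and fluid densities, depth, slope) are illustrative assumptions, not values taken from the study.

```python
# Hedged sketch: Shields-type critical shear stress vs depth-slope bed stress.
def critical_shear_stress(d50, theta_c=0.045, rho_s=2650.0, rho=1000.0, g=9.81):
    """tau_c = theta_c * (rho_s - rho) * g * d50  [Pa], with d50 in metres."""
    return theta_c * (rho_s - rho) * g * d50

def bed_shear_stress(h, S, rho=1000.0, g=9.81):
    """Depth-slope product tau = rho * g * h * S for uniform flow [Pa]."""
    return rho * g * h * S

# Is a 32 mm gravel grain mobilized by 2 m of flow on a 0.5% slope?
tau_c = critical_shear_stress(0.032)  # ~23 Pa threshold
tau = bed_shear_stress(2.0, 0.005)    # ~98 Pa applied
print(tau > tau_c)  # -> True
```

Comparing an applied stress of this kind against the grain-size-dependent threshold, cell by cell, is the simplest way to flag where a flood of a given depth and slope could rework the bed, which is the style of scenario construction the abstract describes.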