160 results for: Expectation-maximization algorithm


Relevance: 20.00%

Abstract:

We present an efficient strategy for mapping out the classical phase behavior of block copolymer systems using self-consistent field theory (SCFT). With our new algorithm, the complete solution of a classical block copolymer phase can typically be evaluated in a fraction of a second on a single-processor computer, even for highly segregated melts. This is accomplished by implementing the standard unit-cell approximation (UCA) for the cylindrical and spherical phases, and solving the resulting equations using a Bessel function expansion. Here the method is used to investigate blends of AB diblock copolymer and A homopolymer, concentrating on the situation where the two molecules are of similar size.
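
As a concrete illustration of the Bessel-expansion machinery mentioned above, the following Python sketch builds a cylindrical-cell J0 basis with a reflecting (Neumann) boundary at the cell edge and checks it by reconstructing a test profile. The radius, basis size and test profile are arbitrary choices; this is a minimal sketch, not the authors' SCFT solver.

```python
# Minimal sketch of a Bessel-function basis for the cylindrical unit-cell
# approximation (illustrative only; not the authors' SCFT code).
import numpy as np
from scipy.special import j0, jn_zeros
from scipy.integrate import trapezoid

R = 1.0            # unit-cell radius (hypothetical units)
N = 32             # number of basis functions
alpha = np.r_[0.0, jn_zeros(1, N - 1)]   # zeros of J1 -> Neumann BC at r = R

r = np.linspace(0.0, R, 400)
basis = j0(np.outer(alpha / R, r))       # basis[n, i] = J0(alpha_n r_i / R)

# Norms from the orthogonality relation for zeros of J1:
#   int_0^R r J0(a_n r/R) J0(a_m r/R) dr = (R^2/2) J0(a_n)^2 delta_nm
norms = 0.5 * R**2 * j0(alpha) ** 2

f = np.exp(-(r / 0.3) ** 2)              # test radial density profile
coeff = trapezoid(r * f * basis, r, axis=1) / norms
f_rec = coeff @ basis                    # reconstruction from coefficients

print("max reconstruction error:", np.abs(f - f_rec).max())
# The radial diffusion operator is diagonal in this basis, since each mode
# satisfies (1/r) d/dr (r dB_n/dr) = -(alpha_n / R)^2 B_n.
```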

Relevance: 20.00%

Abstract:

The aerosol component of the Oxford-Rutherford Aerosol and Cloud (ORAC) combined cloud and aerosol retrieval scheme is described and the theoretical performance of the algorithm is analysed. ORAC is an optimal estimation retrieval scheme for deriving cloud and aerosol properties from measurements made by imaging satellite radiometers and, when applied to cloud free radiances, provides estimates of aerosol optical depth at a wavelength of 550 nm, aerosol effective radius and surface reflectance at 550 nm. The aerosol retrieval component of ORAC has several incarnations – this paper addresses the version which operates in conjunction with the cloud retrieval component of ORAC (described by Watts et al., 1998), as applied in producing the Global Retrieval of ATSR Cloud Parameters and Evaluation (GRAPE) data-set.
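
To make the optimal-estimation framework concrete, here is a minimal Gauss-Newton retrieval iteration of the general Rodgers form that schemes like ORAC employ. The two-channel forward model, state vector and covariances below are toy stand-ins, not ORAC's radiative transfer code.

```python
# Illustrative optimal-estimation (Gauss-Newton MAP) iteration; the forward
# model here is a hypothetical stand-in, not the ORAC radiative code.
import numpy as np

def forward(x):
    # Toy 2-channel forward model F(x); state x = (log AOD, log r_eff).
    return np.array([np.exp(-x[0]) * (1 + 0.1 * x[1]),
                     np.exp(-0.5 * x[0]) * (1 + 0.3 * x[1])])

def jacobian(x, eps=1e-6):
    # Central finite-difference Jacobian K = dF/dx.
    K = np.empty((2, 2))
    for j in range(2):
        dx = np.zeros(2); dx[j] = eps
        K[:, j] = (forward(x + dx) - forward(x - dx)) / (2 * eps)
    return K

x_a = np.array([0.5, 0.0])          # prior state
S_a = np.diag([1.0, 1.0])           # prior covariance
S_e = np.diag([0.01, 0.01])         # measurement-error covariance
y = forward(np.array([0.8, 0.3])) \
    + 0.01 * np.random.default_rng(0).standard_normal(2)  # simulated radiances

x = x_a.copy()
for _ in range(10):                 # Gauss-Newton iterations
    K = jacobian(x)
    A = np.linalg.inv(S_a) + K.T @ np.linalg.inv(S_e) @ K
    b = K.T @ np.linalg.inv(S_e) @ (y - forward(x)) \
        - np.linalg.inv(S_a) @ (x - x_a)
    x = x + np.linalg.solve(A, b)
print("retrieved state:", x)
```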

Relevance: 20.00%

Abstract:

This article presents and assesses an algorithm that constructs 3D distributions of cloud from passive satellite imagery and collocated 2D nadir profiles of cloud properties inferred synergistically from lidar, cloud radar and imager data. It effectively widens the active–passive retrieved cross-section (RXS) of cloud properties, thereby enabling computation of radiative fluxes and radiances that can be compared with measured values in an attempt to perform radiative closure experiments that aim to assess the RXS. For this introductory study, A-train data were used to verify the scene-construction algorithm and only 1D radiative transfer calculations were performed. The construction algorithm fills off-RXS recipient pixels by computing sums of squared differences (a cost function F) between their spectral radiances and those of potential donor pixels/columns on the RXS. Of the RXS pixels with F lower than a certain value, the one with the smallest Euclidean distance to the recipient pixel is designated as the donor, and its retrieved cloud properties and other attributes such as 1D radiative heating rates are consigned to the recipient. It is shown that both the RXS itself and Moderate Resolution Imaging Spectroradiometer (MODIS) imagery can be reconstructed extremely well using just visible and thermal infrared channels. Suitable donors usually lie within 10 km of the recipient. RXSs and their associated radiative heating profiles are reconstructed best for extensive planar clouds and less reliably for broken convective clouds. Domain-average 1D broadband radiative fluxes at the top of the atmosphere (TOA) for (21 km)² domains constructed from MODIS, CloudSat and Cloud–Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) data agree well with coincidental values derived from Clouds and the Earth's Radiant Energy System (CERES) radiances: differences between modelled and measured reflected shortwave fluxes are within ±10 W m⁻² for ~35% of the several hundred domains constructed for eight orbits. Correspondingly, for outgoing longwave radiation, ~65% are within ±10 W m⁻².
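
The donor-selection step lends itself to a compact sketch. The Python fragment below implements the cost function F and the Euclidean-distance tie-break as described, on synthetic radiances; the candidate threshold (1.5 × the minimum F) and the array sizes are invented for illustration.

```python
# Sketch of the donor-selection step: each off-RXS recipient pixel is filled
# from the RXS pixel with the most similar spectral radiances, with Euclidean
# distance choosing among sufficiently similar candidates. Thresholds and
# shapes are illustrative, not the paper's values.
import numpy as np

rng = np.random.default_rng(1)
rxs_rad = rng.random((50, 2))        # radiances (e.g. VIS, TIR) along the RXS
rxs_xy = np.c_[np.zeros(50), np.arange(50.0)]   # RXS pixel positions (km)
pix_rad = rng.random(2)              # recipient pixel radiances
pix_xy = np.array([3.0, 20.0])       # recipient pixel position

F = ((rxs_rad - pix_rad) ** 2).sum(axis=1)      # cost function F
candidates = np.flatnonzero(F < 1.5 * F.min())  # "F lower than a certain value"
dist = np.linalg.norm(rxs_xy[candidates] - pix_xy, axis=1)
donor = candidates[np.argmin(dist)]  # closest acceptable donor
print("donor pixel:", donor, "F =", F[donor])
# The donor's retrieved cloud properties and 1D heating-rate profile would
# then be consigned to the recipient pixel.
```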

Relevance: 20.00%

Abstract:

Expectations of future market conditions are generally acknowledged to be crucial for the development decision and hence for shaping the built environment. This empirical study of the Central London office market from 1987 to 2009 tests for evidence of adaptive and naive expectations. Applying VAR models and a recursive OLS regression with one-step forecasts, we find evidence of adaptive and naive, rather than rational, expectations among developers. Although the magnitude of the errors and the length of the time lags vary over time and across development cycles, the results confirm that developers' decisions are explained to a large extent by contemporaneous and past conditions in both London submarkets. The corollary of this finding is that developers may be able to generate excess profits by exploiting market inefficiencies, but this may be hindered in practice by the long periods necessary for planning and construction of the asset. More generally, the results of this study suggest that real estate cycles are largely generated endogenously rather than being the result of unexpected exogenous shocks.
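
The recursive-OLS test can be illustrated with a toy expanding-window, one-step-ahead forecast exercise. The synthetic 'rents' and 'starts' series and the two-period lag below are placeholders, not the study's Central London data.

```python
# Minimal sketch of recursive OLS with one-step forecasts: re-estimate on an
# expanding window, forecast the next observation, record the error.
import numpy as np

rng = np.random.default_rng(2)
T = 120
rents = np.cumsum(rng.standard_normal(T))                   # stand-in market signal
starts = 0.6 * np.roll(rents, 2) + rng.standard_normal(T)   # lagged response

errors = []
for t0 in range(40, T):                          # expanding estimation window
    tt = np.arange(2, t0)
    X = np.c_[np.ones(len(tt)), rents[tt - 2]]   # regress starts_t on rents_{t-2}
    beta, *_ = np.linalg.lstsq(X, starts[tt], rcond=None)
    forecast = beta[0] + beta[1] * rents[t0 - 2]  # one-step-ahead forecast
    errors.append(starts[t0] - forecast)
print("mean one-step forecast error:", np.mean(errors))
# Systematic, persistent forecast errors of this kind are what distinguish
# adaptive/naive expectations from rational ones.
```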

Relevance: 20.00%

Abstract:

The relationship between valuations and subsequent sale prices continues to be a matter of both theoretical and practical interest. This paper reports the analysis of over 700 property sales made during the 1974–90 period. Initial results imply an average under-valuation of 7% and a standard error of 18% across the sample. A number of techniques are applied to the data set, using variables such as the region, the type of property and the return from the market, to explain the difference between the valuation and the subsequent sale price. The analysis reduces the unexplained error; the bias is fully accounted for and the standard error is reduced to 15.3%. The model finds that about 6% of valuations over-estimated the sale price by more than 20% and about 9% under-estimated it by more than 20%. The results suggest that valuations are marginally more accurate than might be expected, both on theoretical grounds and by comparison with equivalent valuations in equity markets.
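
A simplified version of the error-decomposition regression might look as follows; the data are synthetic, and the covariates (region, property type, market return) are schematic stand-ins for the study's actual variables.

```python
# Illustrative regression of valuation error on property characteristics
# (synthetic data; coefficients and variable names are invented).
import numpy as np

rng = np.random.default_rng(3)
n = 700
region = rng.integers(0, 4, n)                 # categorical covariates
ptype = rng.integers(0, 3, n)
market = rng.standard_normal(n)                # market return over sale period
err = (0.07 + 0.05 * market + 0.02 * (region == 2)
       + 0.15 * rng.standard_normal(n))        # log(price/valuation), say

X = np.c_[np.ones(n), market,
          np.eye(4)[region][:, 1:], np.eye(3)[ptype][:, 1:]]  # dummy coding
beta, *_ = np.linalg.lstsq(X, err, rcond=None)
resid = err - X @ beta
print("bias before/after:", err.mean().round(3), resid.mean().round(3))
print("std error before/after:", err.std().round(3), resid.std().round(3))
```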

Relevance: 20.00%

Abstract:

This paper discusses the implications of the shifting cultural significance of public open space in urban areas. In particular, it focuses on the increasing disjunction between people's expectations of that space and its actual provision and management. In doing so, the paper applies Lefebvre's ideas of spatiality to the evident paradigm shift from 'public' to 'private' culture, with its associated commodification of previously public space. While developing the construct of paradigm shift, the paper recognises that the former political notions inherent in the provision of public space remain in evidence. So whereas public parks were formerly seen as spaces of confrontation between the 'rationality' of public order and the 'irrationality' of individual leisure pursuits, they are now increasingly seen, particularly 'out of hours', as the domain of the dispossessed, to be defined and policed as 'dangerous'. Where once people were welcomed into public open spaces as a means of 'educating' them in good, acceptable leisure practices, therefore, they are now increasingly excluded, but for the same ostensible reasons. Building on survey work undertaken in Reading, Berkshire, the paper illustrates how communities can become separated from 'their' space, leaving them with the overriding impression that they have been 'short-changed' in terms of both the provision and the management of urban open space. Rather than the intimacy of local space for local people, therefore, the paper argues that parks have become externalised places, increasingly responding to commercial definitions of culture and of what is 'public'. Central urban open spaces are thus increasingly becoming sites of stratification, signification of a consumer-constructed citizenship, and valorisation of public life as a legitimate element of the market surface of town and city centres.

Relevance: 20.00%

Abstract:

Recent research has shown that Lighthill–Ford spontaneous gravity wave generation theory, when applied to numerical model data, can help predict areas of clear-air turbulence. It is hypothesized that this is the case because spontaneously generated atmospheric gravity waves may initiate turbulence by locally modifying the stability and wind shear. As an improvement on the original research, this paper describes the creation of an 'operational' algorithm (ULTURB) with three modifications to the original method: (1) extending the altitude range over which the method is effective downward to the top of the boundary layer, (2) adding turbulent kinetic energy production from the environment to the locally produced turbulent kinetic energy production, and (3) transforming turbulent kinetic energy dissipation to eddy dissipation rate, the turbulence metric that has become the worldwide 'standard'. In a comparison of ULTURB with the original method and with the Graphical Turbulence Guidance second version (GTG2) automated procedure for forecasting mid- and upper-level aircraft turbulence, ULTURB performed better for all turbulence intensities. Since ULTURB, unlike GTG2, is founded on a self-consistent dynamical theory, it may offer forecasters better insight into the causes of clear-air turbulence and may ultimately enhance its predictability.
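
The third modification (converting turbulent kinetic energy dissipation to eddy dissipation rate) is simple enough to show directly: EDR = ε^(1/3). The category thresholds in the sketch below are illustrative values, not those used by ULTURB.

```python
# Converting TKE dissipation rate (epsilon, m^2 s^-3) to eddy dissipation
# rate EDR = epsilon^(1/3), the standard aviation turbulence metric.
# Category cut-offs below are hypothetical, for illustration only.
import numpy as np

def edr(epsilon):
    """Eddy dissipation rate (m^(2/3) s^-1) from TKE dissipation."""
    return np.cbrt(epsilon)

def category(e, thresholds=(0.1, 0.2, 0.45)):   # hypothetical cut-offs
    labels = ("null/light", "light/moderate", "moderate/severe", "severe")
    return labels[np.searchsorted(thresholds, e)]

for eps in (1e-4, 1e-2, 0.1):
    e = edr(eps)
    print(f"epsilon={eps:g} -> EDR={e:.3f} ({category(e)})")
```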

Relevance: 20.00%

Abstract:

In this paper we propose an efficient two-level model identification method for a large class of linear-in-the-parameters models from observational data. A new elastic net orthogonal forward regression (ENOFR) algorithm is employed at the lower level to carry out simultaneous model selection and elastic net parameter estimation. The two regularization parameters of the elastic net are optimized at the upper level using a particle swarm optimization (PSO) algorithm, by minimizing the leave-one-out (LOO) mean square error (LOOMSE). Illustrative examples are included to demonstrate the effectiveness of the new approach.
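
A schematic two-level implementation is sketched below: a small global-best PSO tunes the two elastic-net regularization parameters by minimizing the LOO mean square error. The inner fit uses scikit-learn's coordinate-descent ElasticNet as a stand-in for the ENOFR procedure itself, and all swarm parameters are illustrative.

```python
# Two-level sketch: outer PSO over (alpha, l1_ratio), inner elastic-net fit
# scored by leave-one-out MSE. Not the paper's ENOFR algorithm.
import numpy as np
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(4)
X = rng.standard_normal((50, 8))
y = X[:, 0] - 2 * X[:, 3] + 0.1 * rng.standard_normal(50)

def loomse(params):
    # Map unconstrained particle coordinates to valid regularizers.
    alpha, l1 = np.exp(params[0]), 1 / (1 + np.exp(-params[1]))
    model = ElasticNet(alpha=alpha, l1_ratio=l1, max_iter=5000)
    scores = cross_val_score(model, X, y, cv=LeaveOneOut(),
                             scoring="neg_mean_squared_error")
    return -scores.mean()                       # LOO mean square error

# A deliberately tiny global-best PSO over the two parameters.
P, iters, w, c1, c2 = 6, 10, 0.7, 1.5, 1.5
pos = rng.uniform(-4, 1, (P, 2)); vel = np.zeros((P, 2))
pbest, pbest_f = pos.copy(), np.array([loomse(p) for p in pos])
gbest = pbest[pbest_f.argmin()]
for _ in range(iters):
    vel = (w * vel + c1 * rng.random((P, 1)) * (pbest - pos)
                   + c2 * rng.random((P, 1)) * (gbest - pos))
    pos += vel
    f = np.array([loomse(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()]
print("best (alpha, l1_ratio):",
      np.exp(gbest[0]), 1 / (1 + np.exp(-gbest[1])))
```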

Relevance: 20.00%

Abstract:

Advances in hardware and software in the past decade allow to capture, record and process fast data streams at a large scale. The research area of data stream mining has emerged as a consequence from these advances in order to cope with the real time analysis of potentially large and changing data streams. Examples of data streams include Google searches, credit card transactions, telemetric data and data of continuous chemical production processes. In some cases the data can be processed in batches by traditional data mining approaches. However, in some applications it is required to analyse the data in real time as soon as it is being captured. Such cases are for example if the data stream is infinite, fast changing, or simply too large in size to be stored. One of the most important data mining techniques on data streams is classification. This involves training the classifier on the data stream in real time and adapting it to concept drifts. Most data stream classifiers are based on decision trees. However, it is well known in the data mining community that there is no single optimal algorithm. An algorithm may work well on one or several datasets but badly on others. This paper introduces eRules, a new rule based adaptive classifier for data streams, based on an evolving set of Rules. eRules induces a set of rules that is constantly evaluated and adapted to changes in the data stream by adding new and removing old rules. It is different from the more popular decision tree based classifiers as it tends to leave data instances rather unclassified than forcing a classification that could be wrong. The ongoing development of eRules aims to improve its accuracy further through dynamic parameter setting which will also address the problem of changing feature domain values.
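
The 'abstain rather than guess' behaviour can be illustrated with a toy rule-based classifier. The class names and the fraud-detection rules below are invented for illustration and do not reproduce eRules' actual induction procedure.

```python
# Toy illustration of an abstaining rule-based classifier: it predicts only
# when some rule fires and otherwise leaves the instance unclassified.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Rule:
    condition: Callable[[dict], bool]   # predicate over a feature dict
    label: str

class AbstainingRuleClassifier:
    def __init__(self):
        self.rules: list[Rule] = []

    def add_rule(self, rule: Rule):                  # "adding new rules"
        self.rules.append(rule)

    def prune(self, keep: Callable[[Rule], bool]):   # "removing old rules"
        self.rules = [r for r in self.rules if keep(r)]

    def predict(self, x: dict) -> Optional[str]:
        for r in self.rules:                         # first matching rule wins
            if r.condition(x):
                return r.label
        return None                                  # abstain rather than guess

clf = AbstainingRuleClassifier()
clf.add_rule(Rule(lambda x: x["amount"] > 1000 and x["foreign"], "fraud"))
clf.add_rule(Rule(lambda x: x["amount"] <= 1000, "ok"))
print(clf.predict({"amount": 5000, "foreign": True}))   # -> fraud
print(clf.predict({"amount": 2000, "foreign": False}))  # -> None (abstain)
```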

Relevance: 20.00%

Abstract:

A neurofuzzy classifier identification algorithm is introduced for two-class problems. The initial fuzzy rule base is constructed by fuzzy clustering, utilizing a Gaussian mixture model (GMM) and the analysis of variance (ANOVA) decomposition. The expectation-maximization (EM) algorithm is applied to determine the parameters of the fuzzy membership functions. The neurofuzzy model is then identified via the supervised subspace orthogonal least squares (OLS) algorithm. Finally, a logistic regression model is applied to produce the class probability. The effectiveness of the proposed neurofuzzy classifier is demonstrated on a real data set.
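
A simplified sketch of this pipeline is given below, using EM-fitted Gaussian-mixture memberships as fuzzy features followed by logistic regression for the class probability. The ANOVA decomposition and the supervised OLS subspace step of the actual method are omitted, and the data are synthetic.

```python
# Simplified stand-in for the pipeline above: EM-fitted GMM memberships as
# fuzzy features, then logistic regression for the class probability.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
X0 = rng.normal(-1.5, 1.0, (100, 2))         # class 0 samples
X1 = rng.normal(+1.5, 1.0, (100, 2))         # class 1 samples
X = np.vstack([X0, X1]); y = np.r_[np.zeros(100), np.ones(100)]

gmm = GaussianMixture(n_components=4, random_state=0).fit(X)  # EM step
memberships = gmm.predict_proba(X)           # fuzzy membership degrees

logit = LogisticRegression().fit(memberships, y)  # class-probability model
print("training accuracy:", logit.score(memberships, y))
```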

Relevance: 20.00%

Abstract:

This contribution introduces a new digital predistorter to compensate for the serious distortions caused by memory high power amplifiers (HPAs) that exhibit output saturation characteristics. The proposed design is based on direct learning using a data-driven B-spline Wiener system modeling approach. The nonlinear HPA with memory is first identified based on the B-spline neural network model using the Gauss-Newton algorithm, which incorporates the efficient De Boor algorithm with both the B-spline curve and first-derivative recursions. The estimated Wiener HPA model is then used to design the Hammerstein predistorter. In particular, the inverse of the amplitude distortion of the HPA's static nonlinearity can be calculated effectively using the Newton-Raphson formula based on the inverse of the De Boor algorithm. A major advantage of this approach is that both the Wiener HPA identification and the Hammerstein predistorter inversion can be achieved very efficiently and accurately. Simulation results are presented to demonstrate the effectiveness of this novel digital predistorter design.
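
The Newton-Raphson inversion of the static amplitude nonlinearity can be sketched compactly. Below, scipy's B-spline machinery (a De Boor evaluator with an analytic derivative) stands in for the paper's own recursions, and the tanh AM/AM curve is a toy model, not an identified HPA.

```python
# Sketch of the inversion step: given a B-spline model of the HPA's static
# amplitude nonlinearity, find the input amplitude producing a desired
# output via Newton-Raphson. Knot placement and the toy curve are invented.
import numpy as np
from scipy.interpolate import make_interp_spline

r = np.linspace(0, 1, 50)                       # input amplitude grid
g = np.tanh(2.5 * r)                            # saturating AM/AM curve (toy)
spl = make_interp_spline(r, g, k=3)             # B-spline fit of nonlinearity
dspl = spl.derivative()                         # first-derivative spline

def invert(target, x0=0.5, iters=20):
    """Newton-Raphson: solve spl(x) = target for the predistorted input."""
    x = x0
    for _ in range(iters):
        x = x - (float(spl(x)) - target) / float(dspl(x))
        x = float(np.clip(x, 0.0, 1.0))         # stay inside the knot span
    return x

want = 0.7                                      # desired linearised output
x = invert(want)
print(f"predistorted input {x:.4f} -> output {float(spl(x)):.4f}")
```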

Relevance: 20.00%

Abstract:

Evolutionary meta-algorithms for pulse shaping of broadband femtosecond-duration laser pulses are proposed. The genetic algorithm searching the evolutionary landscape for desired pulse shapes operates on a population of waveforms (genes), each made from two concatenated vectors specifying phases and magnitudes, respectively, over a range of frequencies. Frequency-domain operators such as mutation, two-point crossover, average crossover, polynomial phase mutation, creep and three-point smoothing, as well as a time-domain crossover, are combined to produce fitter offspring at each iteration step. The algorithm applies roulette-wheel selection, elitism and linear fitness scaling to the gene population. A differential evolution (DE) operator that provides a source of directed mutation, and new wavelet operators, are proposed. Using properly tuned parameters for DE, the meta-algorithm is used to solve a waveform matching problem. Tuning allows either a greedy directed search near the best known solution or a robust search across the entire parameter space.
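
A compact sketch of the gene encoding and one generational loop (roulette-wheel selection, elitism, two-point crossover, Gaussian mutation) is given below; the population size, operator rates and the quadratic waveform-matching fitness are illustrative choices, not the paper's tuned parameters.

```python
# Minimal GA sketch: genes are concatenated [phases | magnitudes] vectors,
# evolved toward a waveform-matching target. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(6)
NF, POP = 16, 20
target = rng.random(2 * NF)                 # waveform-matching target gene

def fitness(g):                             # higher is better
    return -np.sum((g - target) ** 2)

def roulette(pop, f):
    w = f - f.min() + 1e-9                  # shift to positive weights
    return pop[rng.choice(len(pop), p=w / w.sum())]

pop = rng.random((POP, 2 * NF))             # gene = [phases | magnitudes]
for gen in range(100):
    f = np.array([fitness(g) for g in pop])
    elite = pop[f.argmax()].copy()          # elitism: keep the best gene
    children = [elite]
    while len(children) < POP:
        a, b = roulette(pop, f), roulette(pop, f)
        i, j = sorted(rng.choice(2 * NF, size=2, replace=False))
        child = np.r_[a[:i], b[i:j], a[j:]]             # two-point crossover
        mask = rng.random(2 * NF) < 0.05                # mutation sites
        child[mask] += 0.1 * rng.standard_normal(mask.sum())
        children.append(np.clip(child, 0, 1))
    pop = np.array(children)
print("best match error:", -max(fitness(g) for g in pop))
```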