967 results for structure based alignments
Abstract:
A very efficient learning algorithm for model subset selection is introduced, based on a new composite cost function that simultaneously optimizes the model's approximation ability and its adequacy. The model parameters are estimated via forward orthogonal least squares, but the subset-selection cost function includes an A-optimality design criterion that minimizes the variance of the parameter estimates, ensuring the adequacy and parsimony of the final model. An illustrative example is included to demonstrate the effectiveness of the new approach.
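As a rough illustration of the idea (not the paper's exact algorithm), greedy forward subset selection can trade off residual error against an A-optimality-style variance penalty, the trace of the inverse information matrix. The `lam` weighting and the naive refitting at each step are illustrative assumptions:

```python
import numpy as np

def forward_ols_select(X, y, n_terms, lam=0.1):
    """Greedy forward subset selection for a linear-in-the-parameters model.

    At each step, the candidate column that minimises a composite cost
    (residual sum of squares plus an A-optimality-style penalty, i.e. the
    trace of the inverse information matrix) is added.  The lam weighting
    is an illustrative choice, not the paper's tuning.
    """
    n, p = X.shape
    selected = []
    for _ in range(n_terms):
        best_cost, best_j = np.inf, None
        for j in range(p):
            if j in selected:
                continue
            A = X[:, selected + [j]]
            theta, *_ = np.linalg.lstsq(A, y, rcond=None)
            rss = np.sum((y - A @ theta) ** 2)
            # A-optimality term: sum of parameter-estimate variances (up to sigma^2)
            var_pen = np.trace(np.linalg.inv(A.T @ A))
            cost = rss + lam * var_pen
            if cost < best_cost:
                best_cost, best_j = cost, j
        selected.append(best_j)
    theta, *_ = np.linalg.lstsq(X[:, selected], y, rcond=None)
    return selected, theta
```

Columns that explain little variance, or whose inclusion makes the information matrix ill-conditioned (inflating the variance penalty), are passed over, which is how the criterion discourages redundant terms.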
Abstract:
A new parameter-estimation algorithm, which minimises the cross-validated prediction error for linear-in-the-parameter models, is proposed, based on stacked regression and an evolutionary algorithm. Using a criterion called the mean dispersion error (MDE), it is first shown that cross-validation is very important for prediction with linear-in-the-parameter models. Stacked regression, which can be regarded as a sophisticated form of cross-validation, is then combined with an evolutionary algorithm to produce a new parameter-estimation algorithm that preserves the parsimony of a concise model structure determined using the forward orthogonal least-squares (OLS) algorithm. The PRESS prediction errors are used for cross-validation, and the sunspot and Canadian lynx time series are used to demonstrate the new algorithms.
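What makes PRESS practical for linear-in-the-parameter models is that the leave-one-out errors have a closed form via the hat matrix, so no model ever needs to be refitted n times. A minimal sketch of that standard OLS identity (not the paper's full stacked-regression procedure):

```python
import numpy as np

def press(X, y):
    """Closed-form PRESS (predicted residual error sum of squares) for a
    linear-in-the-parameters model y ~ X @ theta.

    Uses the standard identity e_loo_i = e_i / (1 - h_ii), where h_ii is
    the i-th diagonal element of the hat matrix H = X (X'X)^+ X'.
    """
    theta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ theta
    h = np.diag(X @ np.linalg.pinv(X.T @ X) @ X.T)  # leverages h_ii
    return np.sum((resid / (1.0 - h)) ** 2)
```

Each term is the squared error the model would make on sample i had that sample been left out of the fit, so minimising PRESS directly targets cross-validated prediction performance.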
Abstract:
A fast backward elimination algorithm is introduced based on a QR decomposition and Givens transformations to prune radial-basis-function networks. Nodes are sequentially removed using an increment of error variance criterion. The procedure is terminated by using a prediction risk criterion so as to obtain a model structure with good generalisation properties. The algorithm can be used to postprocess radial basis centres selected using a k-means routine and, in this mode, it provides a hybrid supervised centre selection approach.
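A much-simplified sketch of the pruning loop follows. The paper performs each deletion efficiently via QR decomposition and Givens rotations; this version simply refits at every step for clarity, and the generalised cross-validation-style risk formula used as the stopping rule here is an illustrative stand-in for the paper's prediction-risk criterion:

```python
import numpy as np

def gauss_design(X, centres, width):
    """Design matrix of Gaussian radial basis functions."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def backward_prune(X, y, centres, width):
    """Backward elimination of RBF centres.

    Each step removes the centre whose deletion least increases the
    residual variance; pruning stops when an FPE-style prediction-risk
    estimate (illustrative choice) stops decreasing.
    """
    n = len(y)
    risk = lambda rss, k: (rss / n) * (n + k) / (n - k)
    def rss_of(idx):
        P = gauss_design(X, centres[idx], width)
        w, *_ = np.linalg.lstsq(P, y, rcond=None)
        return np.sum((y - P @ w) ** 2)
    keep = list(range(len(centres)))
    best_risk = risk(rss_of(keep), len(keep))
    while len(keep) > 1:
        # centre whose removal increases the error variance the least
        cand = min(keep, key=lambda j: rss_of([k for k in keep if k != j]))
        trial = [k for k in keep if k != cand]
        r = risk(rss_of(trial), len(trial))
        if r >= best_risk:
            break
        best_risk, keep = r, trial
    return keep
```

Used to post-process centres picked by k-means, redundant or overlapping centres are removed first, since dropping them barely raises the residual variance while reducing model complexity.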
Abstract:
This paper presents a new image data fusion scheme combining median filtering with self-organizing feature map (SOFM) neural networks. The scheme consists of three steps: (1) pre-processing of the images, where weighted median filtering removes part of the noise corrupting each image; (2) pixel clustering for each image using SOFM neural networks; and (3) fusion of the images obtained in step (2), which suppresses the residual noise and thus further improves image quality. Simulations involving three image sensors, each with a different noise structure, confirm that this three-step combination offers an impressive improvement in effectiveness and performance.
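Step (1) can be sketched as follows. In a weighted median filter, each neighbour is repeated according to an integer weight before the median is taken; the 3x3 window and the uniform weight mask in the test are illustrative assumptions, not the paper's specific choices:

```python
import numpy as np

def weighted_median_filter(img, weights):
    """Weighted median filtering (the pre-processing step).

    Each interior pixel is replaced by the median of its 3x3 neighbourhood,
    with every neighbour repeated according to an integer weight mask.
    Border pixels are left untouched for simplicity.
    """
    h, w = img.shape
    out = img.copy()
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2].ravel()
            rep = np.repeat(patch, weights.ravel())  # weight = repetition count
            out[i, j] = np.median(rep)
    return out
```

Impulse ("salt-and-pepper") noise is removed because an isolated outlier never reaches the median of its neighbourhood, which is why median filtering is a natural pre-processing stage before clustering.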
Abstract:
Two complex heterometallic salts with formulae Tl6[Fe(CN)6]1.33(NO3)(OH) (1) and [Co(bpy)2(CN)2]2{[Ag(CN)2]0.5[Fe(CN)6]0.5}·8H2O (2) have been synthesized and fully characterized. Single-crystal X-ray analyses reveal that compound 1 is comprised of discrete Tl+ cations and [Fe(CN)6]3- anions together with OH- and NO3- anions. Compound 2 contains [Co(bpy)2(CN)2]+ cations and {[Ag(CN)2][Fe(CN)6]}- anions together with eight molecules of water of crystallization. Both structures form unprecedented three-dimensional supramolecular networks via non-covalent interactions. Another important observation is that the stereochemically active inert (lone) pair present on Tl+ plays little role in controlling the structure of 1. The water molecules in 2 play an important role in providing stability, organizing a supramolecular network through hydrogen bonding. In the syntheses of 1 and 2, Fe(II) is oxidized to Fe(III) and Co(II) to Co(III), respectively, facilitating the formation of the salts that are obtained. Both compounds exhibit photoluminescence emission in solution near the visible region.
Abstract:
The tap-length, or the number of the taps, is an important structural parameter of the linear MMSE adaptive filter. Although the optimum tap-length that balances performance and complexity varies with scenarios, most current adaptive filters fix the tap-length at some compromise value, making them inefficient to implement especially in time-varying scenarios. A novel gradient search based variable tap-length algorithm is proposed, using the concept of the pseudo-fractional tap-length, and it is shown that the new algorithm can converge to the optimum tap-length in the mean. Results of computer simulations are also provided to verify the analysis.
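A simplified sketch of the pseudo-fractional tap-length idea: a fractional tap-length `lf` drifts according to the difference between the squared error of the full filter and that of the filter truncated by `Delta` taps, and the integer tap-length `L` changes only when `lf` has drifted far enough. All parameter values below (`mu`, `alpha`, `gamma`, `delta`, `Delta`) are illustrative, not the paper's settings:

```python
import numpy as np

def ft_lms(x, d, L0=4, Lmax=32, mu=0.01, alpha=0.01, gamma=2.0,
           delta=1.0, Delta=2):
    """Variable tap-length LMS with a pseudo-fractional tap-length.

    lf grows when the last Delta taps still reduce the error noticeably
    (e_S**2 > e_L**2) and leaks downward by alpha otherwise, so L settles
    near the optimum tap-length of the unknown system.
    """
    L, lf = L0, float(L0)
    w = np.zeros(Lmax)
    hist = []
    for n in range(Lmax, len(x)):
        u = x[n - np.arange(Lmax)]                 # newest sample first
        e_L = d[n] - w[:L] @ u[:L]                 # full-length error
        e_S = d[n] - w[:L - Delta] @ u[:L - Delta]  # truncated-filter error
        w[:L] += mu * e_L * u[:L]                  # standard LMS update
        lf = (lf - alpha) - gamma * (e_L ** 2 - e_S ** 2)
        lf = min(max(lf, Delta + 1.0), float(Lmax))
        if abs(L - lf) >= delta:
            L = int(round(lf))
        hist.append(L)
    return w, hist
```

In the test, the unknown system is an 8-tap FIR filter, so the converged tap-length should hover near (slightly above) 8, since the last `Delta` taps stop paying for themselves once all significant taps are covered.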
Abstract:
The vertical structure of the relationship between water vapor and precipitation is analyzed in 5 yr of radiosonde and precipitation gauge data from the Nauru Atmospheric Radiation Measurement (ARM) site. The first vertical principal component of specific humidity is very highly correlated with column water vapor (CWV) and has a maximum of both total and fractional variance captured in the lower free troposphere (around 800 hPa). Moisture profiles conditionally averaged on precipitation show a strong association between rainfall and moisture variability in the free troposphere and little boundary layer variability. A sharp pickup in precipitation occurs near a critical value of CWV, confirming satellite-based studies. A lag–lead analysis suggests it is unlikely that the increase in water vapor is just a result of the falling precipitation. To investigate mechanisms for the CWV–precipitation relationship, entraining plume buoyancy is examined in sonde data and simplified cases. For several different mixing schemes, higher CWV results in progressively greater plume buoyancies, particularly in the upper troposphere, indicating conditions favorable for deep convection. All other things being equal, higher values of lower-tropospheric humidity, via entrainment, play a major role in this buoyancy increase. A small but significant increase in subcloud layer moisture with increasing CWV also contributes to buoyancy. Entrainment coefficients inversely proportional to distance from the surface, associated with mass flux increase through a deep lower-tropospheric layer, appear promising. These yield a relatively even weighting through the lower troposphere for the contribution of environmental water vapor to midtropospheric buoyancy, explaining the association of CWV and buoyancy available for deep convection.
Abstract:
The combination of the synthetic minority oversampling technique (SMOTE) and the radial basis function (RBF) classifier is proposed to deal with classification for imbalanced two-class data. In order to enhance the significance of the small and specific region belonging to the positive class in the decision region, the SMOTE is applied to generate synthetic instances for the positive class to balance the training data set. Based on the over-sampled training data, the RBF classifier is constructed by applying the orthogonal forward selection procedure, in which the classifier structure and the parameters of RBF kernels are determined using a particle swarm optimization algorithm based on the criterion of minimizing the leave-one-out misclassification rate. The experimental results on both simulated and real imbalanced data sets are presented to demonstrate the effectiveness of our proposed algorithm.
Abstract:
This contribution proposes a powerful technique for two-class imbalanced classification problems by combining the synthetic minority over-sampling technique (SMOTE) and the particle swarm optimisation (PSO) aided radial basis function (RBF) classifier. In order to enhance the significance of the small and specific region belonging to the positive class in the decision region, the SMOTE is applied to generate synthetic instances for the positive class to balance the training data set. Based on the over-sampled training data, the RBF classifier is constructed by applying the orthogonal forward selection procedure, in which the classifier's structure and the parameters of RBF kernels are determined using a PSO algorithm based on the criterion of minimising the leave-one-out misclassification rate. The experimental results obtained on a simulated imbalanced data set and three real imbalanced data sets are presented to demonstrate the effectiveness of our proposed algorithm.
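The SMOTE step common to both abstracts above can be sketched in a few lines: each synthetic minority instance is placed at a random point on the line segment between a minority sample and one of its k nearest minority-class neighbours. This is a minimal version of the standard technique, not the papers' exact implementation:

```python
import numpy as np

def smote(X_pos, n_synth, k=5, rng=None):
    """Minimal SMOTE: generate n_synth synthetic minority-class instances.

    Each synthetic point is a random convex combination of a minority
    sample and one of its k nearest neighbours within the minority class.
    """
    rng = rng or np.random.default_rng()
    n = len(X_pos)
    # pairwise distances within the minority class
    d = np.linalg.norm(X_pos[:, None] - X_pos[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    nn = np.argsort(d, axis=1)[:, :k]              # k nearest neighbours
    out = []
    for _ in range(n_synth):
        i = rng.integers(n)
        j = nn[i, rng.integers(min(k, n - 1))]
        lam = rng.random()                         # position along the segment
        out.append(X_pos[i] + lam * (X_pos[j] - X_pos[i]))
    return np.array(out)
```

Because every synthetic point lies between two real minority samples, the oversampled set enlarges the positive-class region of the feature space without simply duplicating points, which is what lets the subsequent RBF classifier give that region more weight.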
Abstract:
The IntFOLD-TS method was developed according to the guiding principle that model quality assessment would be the most critical stage of our template-based modelling pipeline. Thus, the IntFOLD-TS method first generates numerous alternative models, using in-house versions of several different sequence-structure alignment methods, which are then ranked in terms of global quality using our top-performing quality assessment method, ModFOLDclust2. In addition to the predicted global quality scores, predictions of local errors are also provided in the resulting coordinate files, using scores that represent the predicted deviation of each residue in the model from the equivalent residue in the native structure. The IntFOLD-TS method was found to generate high-quality 3D models for many of the CASP9 targets, whilst also providing highly accurate predictions of their per-residue errors. This important information may help to make the 3D models produced by the IntFOLD-TS method more useful for guiding future experimental work.
Abstract:
The structure and evolution of the Arctic stratospheric polar vortex is assessed during opposing phases of, primarily, the El Niño–Southern Oscillation (ENSO) and the Quasi-Biennial Oscillation (QBO), but the 11 year solar cycle and winters following large volcanic eruptions are also examined. The analysis is performed by taking 2-D moments of vortex potential vorticity (PV) fields which allow the area and centroid of the vortex to be calculated throughout the ERA-40 reanalysis data set (1958–2002). Composites of these diagnostics for the different phases of the natural forcings are then considered. Statistically significant results are found regarding the structure and evolution of the vortex during, in particular, the ENSO and QBO phases. When compared with the more traditional zonal mean zonal wind diagnostic at 60°N, the moment-based diagnostics are far more robust and contain more information regarding the state of the vortex. The study details, for the first time, a comprehensive sequence of events which map the evolution of the vortex during each of the forcings throughout an extended winter period.
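The moment diagnostics described above reduce to simple operations on the PV field: the vortex is the region where PV exceeds a threshold, and its area and centroid are the zeroth and first moments of that region. The sketch below assumes an equal-area grid for simplicity (a real calculation on a latitude-longitude grid such as ERA-40's would weight cells by the cosine of latitude):

```python
import numpy as np

def vortex_moments(pv, lat, lon, threshold):
    """Zeroth and first 2-D moments of a thresholded PV field.

    Returns the vortex area (in grid cells, equal-area assumption) and
    the latitude/longitude of its centroid.
    """
    Lat, Lon = np.meshgrid(lat, lon, indexing="ij")
    mask = pv >= threshold          # vortex = region of PV above threshold
    area = int(mask.sum())
    return area, Lat[mask].mean(), Lon[mask].mean()
```

Tracking these two numbers through a winter gives the vortex-evolution time series that the study composites over ENSO and QBO phases, and unlike the zonal-mean wind at 60°N, the centroid also records displacements of the vortex off the pole.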
Abstract:
The Eyjafjallajökull volcano in Iceland erupted explosively on 14 April 2010, emitting a plume of ash into the atmosphere. The ash was transported from Iceland toward Europe where mostly cloud-free skies allowed ground-based lidars at Chilbolton in England and Leipzig in Germany to estimate the mass concentration in the ash cloud as it passed overhead. The UK Met Office's Numerical Atmospheric-dispersion Modeling Environment (NAME) has been used to simulate the evolution of the ash cloud from the Eyjafjallajökull volcano during the initial phase of the ash emissions, 14–16 April 2010. NAME captures the timing and sloped structure of the ash layer observed over Leipzig, close to the central axis of the ash cloud. Relatively small errors in the ash cloud position, probably caused by the cumulative effect of errors in the driving meteorology en route, result in a timing error at distances far from the central axis of the ash cloud. Taking the timing error into account, NAME is able to capture the sloped ash layer over the UK. Comparison of the lidar observations and NAME simulations has allowed an estimation of the plume height time series to be made. It is necessary to include in the model input the large variations in plume height in order to accurately predict the ash cloud structure at long range. Quantitative comparison with the mass concentrations at Leipzig and Chilbolton suggest that around 3% of the total emitted mass is transported as far as these sites by small (<100 μm diameter) ash particles.
Abstract:
CloudSat is a satellite experiment designed to measure the vertical structure of clouds from space. The expected launch of CloudSat is planned for 2004, and once launched, CloudSat will orbit in formation as part of a constellation of satellites (the A-Train) that includes NASA's Aqua and Aura satellites, a NASA-CNES lidar satellite (CALIPSO), and a CNES satellite carrying a polarimeter (PARASOL). A unique feature that CloudSat brings to this constellation is the ability to fly a precise orbit enabling the fields of view of the CloudSat radar to be overlapped with the CALIPSO lidar footprint and the other measurements of the constellation. The precision and near simultaneity of this overlap create a unique multisatellite observing system for studying the atmospheric processes essential to the hydrological cycle. The vertical profiles of cloud properties provided by CloudSat on the global scale fill a critical gap in the investigation of feedback mechanisms linking clouds to climate. Measuring these profiles requires a combination of active and passive instruments, and this will be achieved by combining the radar data of CloudSat with data from other active and passive sensors of the constellation. This paper describes the underpinning science and gives a general overview of the mission, provides some idea of the expected products and their anticipated applications, and outlines the potential capability of the A-Train for cloud observations. Notably, the CloudSat mission is expected to stimulate new areas of research on clouds. The mission also provides an important opportunity to demonstrate active sensor technology for future scientific and tactical applications. The CloudSat mission is a partnership between NASA's JPL, the Canadian Space Agency, Colorado State University, the U.S. Air Force, and the U.S. Department of Energy.
Abstract:
A particulate microemulsion is generated in a simple two-component system comprising an amphiphilic copolymer (Pluronic P123) in mixtures with tannic acid. This is correlated with complexation between the poly(ethylene oxide) in the Pluronic copolymer and the multiple hydrogen-bonding units in tannic acid, which leads to the breakup of the ordered structure formed in gels of Pluronic copolymers and the formation of dispersed nanospheres containing a bicontinuous internal structure. These novel nanoparticles, termed "emulsomes", are self-stabilized by a coating layer of Pluronic copolymer. The microemulsion exhibits a pearlescent appearance due to selective light scattering from the emulsion droplets. This simple formulation, based on a commercial copolymer and a biofunctional, biodegradable additive, is expected to find applications in the fast-moving consumer goods sector.
Abstract:
Several methods for assessing the sustainability of agricultural systems have been developed. These methods do not fully: (i) take into account the multi-functionality of agriculture; (ii) include multidimensionality; (iii) utilize and implement the assessment knowledge; and (iv) identify conflicting goals and trade-offs. This paper reviews seven recently developed multidisciplinary indicator-based assessment methods with respect to their contribution to addressing these shortcomings. All approaches include (1) normative aspects such as goal setting, (2) systemic aspects such as a specification of the scale of analysis, and (3) a reproducible structure of the approach. The approaches can be categorized into three typologies. The top-down farm assessments focus on field or farm assessment. They have a clear procedure for measuring the indicators and assessing the sustainability of the system, which allows for benchmarking across farms. The degree of participation is low, potentially affecting the implementation of the results negatively. The top-down regional assessments assess both on-farm and regional effects. They include some participation to increase acceptance of the results; however, they miss the analysis of potential trade-offs. The bottom-up, integrated participatory or transdisciplinary approaches focus on a regional scale. Stakeholders are included throughout the whole process, ensuring the acceptance of the results and increasing the probability of implementation of the developed measures. As they include the interaction between the indicators in their system representation, they allow for performing a trade-off analysis. The bottom-up, integrated participatory or transdisciplinary approaches thus seem best placed to overcome the four shortcomings mentioned above.