84 results for Search-based technique
Abstract:
In civil applications, much research on MIMO techniques has achieved great progress. Here, however, we consider military applications. Unlike civil systems, a military MIMO system may face many kinds of interference, and the interference source may not even be equipped with multiple antennas, so the system may receive strong interference from a particular direction. A military MIMO system must therefore be able to suppress directional interference. This paper presents a beamforming-based scheme to suppress directional interference in an STBC MIMO system. Simulation results show that the scheme effectively suppresses strong directional interference, albeit with some performance loss compared with the ideal interference-free case.
Abstract:
The combination of the synthetic minority oversampling technique (SMOTE) and the radial basis function (RBF) classifier is proposed to deal with classification for imbalanced two-class data. In order to enhance the significance of the small and specific region belonging to the positive class in the decision region, the SMOTE is applied to generate synthetic instances for the positive class to balance the training data set. Based on the over-sampled training data, the RBF classifier is constructed by applying the orthogonal forward selection procedure, in which the classifier structure and the parameters of RBF kernels are determined using a particle swarm optimization algorithm based on the criterion of minimizing the leave-one-out misclassification rate. The experimental results on both simulated and real imbalanced data sets are presented to demonstrate the effectiveness of our proposed algorithm.
Abstract:
This contribution proposes a powerful technique for two-class imbalanced classification problems by combining the synthetic minority over-sampling technique (SMOTE) and the particle swarm optimisation (PSO) aided radial basis function (RBF) classifier. In order to enhance the significance of the small and specific region belonging to the positive class in the decision region, the SMOTE is applied to generate synthetic instances for the positive class to balance the training data set. Based on the over-sampled training data, the RBF classifier is constructed by applying the orthogonal forward selection procedure, in which the classifier's structure and the parameters of RBF kernels are determined using a PSO algorithm based on the criterion of minimising the leave-one-out misclassification rate. The experimental results obtained on a simulated imbalanced data set and three real imbalanced data sets are presented to demonstrate the effectiveness of our proposed algorithm.
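The core of the SMOTE step described in the two abstracts above is interpolation between a minority-class sample and one of its k nearest minority-class neighbours. The sketch below is an illustrative, self-contained version of that idea in pure Python (function and variable names are my own, not from the papers; the reference SMOTE implementation differs in detail):

```python
import random

def smote(minority, k=3, n_new=100, seed=0):
    """Generate synthetic minority-class samples by interpolating
    between a random sample and one of its k nearest minority
    neighbours -- the core idea of SMOTE (illustrative sketch only)."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # k nearest minority neighbours of x (by squared Euclidean distance)
        neighbours = sorted(
            (p for p in minority if p is not x),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p)),
        )[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(a + gap * (b - a) for a, b in zip(x, nb)))
    return synthetic

# Hypothetical 2-D minority class; real use would balance a training set.
minority = [(1.0, 1.0), (1.2, 0.9), (0.8, 1.1), (1.1, 1.3)]
new_points = smote(minority, k=2, n_new=10)
```

Because each synthetic point is a convex combination of two minority samples, the new points always lie within the minority class's convex hull, which is what "enhances the significance of the small and specific region belonging to the positive class".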
Abstract:
Despite the increasing use of groupware technologies in education, there is little evidence of their impact, especially within an enquiry-based learning (EBL) context. In this paper, we examine the use of a commercial standard Group Intelligence software called GroupSystems®ThinkTank. To date, ThinkTank has been adopted mainly in the USA and supports teams in generating ideas, categorising, prioritising, voting and multi-criteria decision-making and automatically generates a report at the end of each session. The software was used by students carrying out an EBL project, set by employers, for a full academic year. The criteria for assessing the impact of ThinkTank on student learning were those of creativity, participation, productivity, engagement and understanding. Data was collected throughout the year using a combination of interviews and questionnaires, and written feedback from employers. The overall findings show an increase in levels of productivity and creativity, evidence of a deeper understanding of their work but some variation in attitudes towards participation in the early stages of the project.
Abstract:
The aim of using GPS for Alzheimer's Patients is to give carers and families of those affected by Alzheimer's Disease, as well as all the other dementia-related conditions, a service that can notify them, via SMS text message, should their loved one leave their home. Through a custom website, it enables the carer to remotely manage a contour boundary specifically assigned to the patient, as well as the carers' telephone numbers. The technique makes liberal use of services such as Google Maps.
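The boundary check at the heart of such a service can be sketched as a great-circle distance test. This simplifies the abstract's arbitrary contour boundary to a circular one, and all names and coordinates below are hypothetical:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS-84 points."""
    R = 6371000.0  # mean Earth radius in metres
    p1, p2 = radians(lat1), radians(lat2)
    dp = radians(lat2 - lat1)
    dl = radians(lon2 - lon1)
    a = sin(dp / 2) ** 2 + cos(p1) * cos(p2) * sin(dl / 2) ** 2
    return 2 * R * asin(sqrt(a))

def outside_boundary(patient_pos, home, radius_m):
    """True when the patient's GPS fix lies outside the circular
    boundary around the home -- the trigger for an SMS alert."""
    return haversine_m(*patient_pos, *home) > radius_m

home = (51.4545, -2.5879)       # hypothetical home coordinates
fix_near = (51.4546, -2.5880)   # a few metres from home
fix_far = (51.4700, -2.5879)    # roughly 1.7 km north of home
```

A production system would test the fix against the carer-drawn polygon instead of a radius, but the notify-on-exit logic is the same.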
Abstract:
We present a new methodology that couples neutron diffraction experiments over a wide Q range with single-chain modelling in order to explore, in a quantitative manner, the intrachain organization of non-crystalline polymers. The technique is based on the assignment of parameters describing the chemical, geometric and conformational characteristics of the polymeric chain, and on the variation of these parameters to minimize the difference between the predicted and experimental diffraction patterns. The method is successfully applied to the study of molten poly(tetrafluoroethylene) at two different temperatures, and provides unambiguous information on the configuration of the chain and its degree of flexibility. From analysis of the experimental data a model is derived with C–C and C–F bond lengths of 1.58 and 1.36 Å, respectively, a backbone valence angle of 110° and a torsional angle distribution which is characterized by four isomeric states, namely a split trans state at ±18°, giving rise to a helical chain conformation, and two gauche states at ±112°. The probability of trans conformers is 0.86 at T = 350°C, which decreases slightly to 0.84 at T = 400°C. Correspondingly, the chain segments are characterized by long all-trans sequences with random changes in sign, rather anisotropic in nature, which give rise to a rather stiff chain. We compare the results of this quantitative analysis of the experimental scattering data with the theoretical predictions of both force-field and molecular-orbital conformation energy calculations.
Abstract:
Active learning plays a strong role in mathematics and statistics, and formative problems are vital for developing key problem-solving skills. To keep students engaged and help them master the fundamentals before challenging themselves further, we have developed a system for delivering problems tailored to a student's current level of understanding. Specifically, by adapting simple methodology from clinical trials, a framework for delivering existing problems and other illustrative material has been developed, making use of macros in Excel. The problems are assigned a level of difficulty (a 'dose'), and problems are presented to the student in an order depending on their ability, i.e. based on their performance so far on other problems. We demonstrate and discuss the application of the approach with formative examples developed for a first-year course on plane coordinate geometry, and also for problems centred on the topic of chi-square tests.
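The abstract does not specify which clinical-trial methodology is borrowed; one common dose-finding rule that fits the description is the up-and-down design, where a correct answer raises the difficulty ("dose") by one level and an incorrect answer lowers it. A minimal sketch under that assumption:

```python
def next_level(current, correct, n_levels=5):
    """Up-and-down rule borrowed from dose-escalation designs:
    move one difficulty level up after a correct answer, one level
    down after an incorrect one, staying within [1, n_levels]."""
    step = 1 if correct else -1
    return min(max(current + step, 1), n_levels)

# A hypothetical student's sequence of results on successive problems:
level = 3
history = []
for correct in [True, True, False, True, False, False]:
    level = next_level(level, correct)
    history.append(level)
```

Starting from level 3, this run walks the student up to the hardest problems and back down as performance dips, so each student spends most time near the difficulty they can just handle.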
Abstract:
Flow along rivers, an integral part of many cities, might provide a key mechanism for ventilation, which is important for air quality and heat stress. Since the flow varies in space and time around rivers, there is limited utility in point measurements. Ground-based remote sensing offers the opportunity to study 3D flow in locations which are hard to observe. For three months in the winter and spring of 2011, the atmospheric flow above the River Thames in central London was observed using a scanning Doppler lidar, a dual-beam scintillometer and sonic anemometry. First, an inter-comparison showed that lidar-derived mean wind-speed estimates compare almost as well to sonic anemometers (root-mean-square error (rmse) 0.65–0.68 m s–1) as comparisons between sonic anemometers (0.35–0.73 m s–1). Second, the lidar dual-beam scanning strategy provided horizontal transects of wind vectors (comparison with scintillometer rmse 1.12–1.63 m s–1) which revealed mean and turbulent flow across the river and its surrounds; in particular, channelling of flow along the river and turbulence changes consistent with the roughness changes between built and river environments. The results have important consequences for air quality and dispersion around urban rivers, especially given that many cities have high traffic rates on bankside roads.
Abstract:
Subfossil pollen and plant macrofossil data derived from 14C-dated sediment profiles can provide quantitative information on glacial and interglacial climates. The data allow climate variables related to growing season warmth, winter cold, and plant-available moisture to be reconstructed. Continental-scale reconstructions have been made for the mid-Holocene (MH, around 6 ka) and Last Glacial Maximum (LGM, around 21 ka), allowing comparison with palaeoclimate simulations currently being carried out as part of the Fifth Assessment Report (AR5) of the Intergovernmental Panel on Climate Change. The synthesis of the available MH and LGM climate reconstructions and their uncertainties, obtained using modern analogue, regression and model-inversion techniques, is presented for four temperature variables and two moisture variables. Reconstructions of the same variables based on surface-pollen assemblages are shown to be accurate and unbiased. Reconstructed LGM and MH climate anomaly patterns are coherent, consistent between variables, and robust with respect to the choice of technique. They support a conceptual model of the controls of Late Quaternary climate change whereby the first-order effects of orbital variations and greenhouse forcing on the seasonal cycle of temperature are predictably modified by responses of the atmospheric circulation and surface energy balance.
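Of the three reconstruction approaches the abstract names, the modern-analogue technique is the simplest to sketch: match a fossil pollen assemblage to the most similar modern assemblage (commonly by squared-chord distance) and read off the climate at that modern site. The toy data and names below are my own, for illustration only:

```python
from math import sqrt

def sq_chord(p, q):
    """Squared-chord distance between two assemblages of taxon
    proportions -- the dissimilarity commonly used for pollen data."""
    return sum((sqrt(a) - sqrt(b)) ** 2 for a, b in zip(p, q))

def analogue_reconstruction(fossil, modern_assemblages, modern_climate):
    """Modern-analogue technique in miniature: assign the fossil sample
    the climate of its single closest modern analogue."""
    best = min(range(len(modern_assemblages)),
               key=lambda i: sq_chord(fossil, modern_assemblages[i]))
    return modern_climate[best]

# Hypothetical modern training set: 3 taxa proportions + July temperature
modern = [(0.7, 0.2, 0.1), (0.1, 0.6, 0.3), (0.3, 0.3, 0.4)]
july_temp = [15.0, 8.0, 11.0]
fossil = (0.65, 0.25, 0.10)
reconstructed = analogue_reconstruction(fossil, modern, july_temp)
```

Real applications average over several close analogues and propagate the distance to the nearest analogue into the reconstruction uncertainty.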
Abstract:
We present a new technique for correcting errors in radar estimates of rainfall due to attenuation which is based on the fact that any attenuating target will itself emit, and that this emission can be detected by the increased noise level in the radar receiver. The technique is being installed on the UK operational network, and for the first time, allows radome attenuation to be monitored using the increased noise at the higher beam elevations. This attenuation has a large azimuthal dependence but for an old radome can be up to 4 dB for rainfall rates of just 2–4 mm/h. This effect has been neglected in the past, but may be responsible for significant errors in rainfall estimates and in radar calibrations using gauges. The extra noise at low radar elevations provides an estimate of the total path integrated attenuation of nearby storms; this total attenuation can then be used as a constraint for gate-by-gate or polarimetric correction algorithms.
Abstract:
We describe a one-port de-embedding technique suitable for the quasi-optical characterization of terahertz integrated components at frequencies beyond the operational range of most vector network analyzers. This technique is also suitable when the manufacturing of precision terminations to sufficiently fine tolerances for the application of a TRL de-embedding technique is not possible. The technique is based on vector reflection measurements of a series of easily realizable test pieces. A theoretical analysis is presented for the precision of the technique when implemented using a quasi-optical null-balanced bridge reflectometer. The analysis takes into account quantization effects in the linear and angular encoders associated with the balancing procedure, as well as source power and detector noise equivalent power. The precision in measuring waveguide characteristic impedance and attenuation using this de-embedding technique is further analyzed after taking into account changes in the power coupled due to axial, rotational, and lateral alignment errors between the device under test and the instruments' test port. The analysis is based on the propagation of errors after assuming imperfect coupling of two fundamental Gaussian beams. The required precision in repositioning the samples at the instruments' test-port is discussed. Quasi-optical measurements using the de-embedding process for a WR-8 adjustable precision short at 125 GHz are presented. The de-embedding methodology may be extended to allow the determination of S-parameters of arbitrary two-port junctions. The measurement technique proposed should prove most useful above 325 GHz where there is a lack of measurement standards.
Abstract:
A deeper understanding of random markers is important if they are to be employed for a range of objectives. The sequence-specific amplified polymorphism (S-SAP) technique is a powerful genetic analysis tool which exploits the high copy number of retrotransposon long terminal repeats (LTRs) in the plant genome. The distribution and inheritance of S-SAP bands in the barley genome was studied using the Steptoe × Morex (S × M) doubled haploid (DH) population. Six S-SAP primer combinations generated 98 polymorphic bands, and map positions were assigned to all but one band. Eight putative co-dominant loci were detected, representing 16 of the mapped markers. Thus at least 81 of the mapped S-SAP loci were dominant. The markers were distributed along all of the seven chromosomes and a tendency to cluster was observed. The distribution of S-SAP markers over the barley genome concurred with the knowledge of the high copy number of retrotransposons in plants. This experiment has demonstrated the potential for the S-SAP technique to be applied in a range of analyses such as genetic fingerprinting, marker-assisted breeding, biodiversity assessment and phylogenetic analyses.
Abstract:
With the fast development of the Internet, wireless communications and semiconductor devices, home networking has received significant attention. Consumer products can collect and transmit various types of data in the home environment. Typical consumer sensors are often equipped with tiny, irreplaceable batteries, so it is of the utmost importance to design energy-efficient algorithms that prolong the home network lifetime and reduce the number of devices going to landfill. Sink mobility is an important technique for improving home network performance, including energy consumption, lifetime and end-to-end delay; it can also largely mitigate the hot spots near the sink node. The selection of an optimal moving trajectory for the sink node(s) is an NP-hard problem, and jointly optimizing routing algorithms with the mobile sink moving strategy is a significant and challenging research issue. The influence of multiple static sink nodes on energy consumption under different scale networks is first studied and an Energy-efficient Multi-sink Clustering Algorithm (EMCA) is proposed and tested. Then, the influence of mobile sink velocity, position and number on network performance is studied and a Mobile-sink based Energy-efficient Clustering Algorithm (MECA) is proposed. Simulation results validate the performance of the two proposed algorithms, which can be deployed in a consumer home network environment.
Abstract:
A detailed spectrally-resolved extraterrestrial solar spectrum (ESS) is important for line-by-line radiative transfer modeling in the near-infrared (near-IR). Very few observationally-based high-resolution ESS are available in this spectral region. Consequently the theoretically-calculated ESS by Kurucz has been widely adopted. We present the CAVIAR (Continuum Absorption at Visible and Infrared Wavelengths and its Atmospheric Relevance) ESS which is derived using the Langley technique applied to calibrated observations using a ground-based high-resolution Fourier transform spectrometer (FTS) in atmospheric windows from 2000–10000 cm-1 (1–5 μm). There is good agreement between the strengths and positions of solar lines between the CAVIAR and the satellite-based ACE-FTS (Atmospheric Chemistry Experiment-FTS) ESS, in the spectral region where they overlap, and good agreement with other ground-based FTS measurements in two near-IR windows. However there are significant differences in the structure between the CAVIAR ESS and spectra from semi-empirical models. In addition, we found a difference of up to 8 % in the absolute (and hence the wavelength-integrated) irradiance between the CAVIAR ESS and that of Thuillier et al., which was based on measurements from the Atmospheric Laboratory for Applications and Science satellite and other sources. In many spectral regions, this difference is significant, as the coverage factor k = 2 (or 95 % confidence limit) uncertainties in the two sets of observations do not overlap. Since the total solar irradiance is relatively well constrained, if the CAVIAR ESS is correct, then this would indicate an integrated “loss” of solar irradiance of about 30 W m-2 in the near-IR that would have to be compensated by an increase at other wavelengths.
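The Langley technique mentioned above rests on the Beer-Lambert law: the logarithm of the measured signal falls linearly with airmass, ln V = ln V0 − τ·m, so extrapolating a clear-sky series of measurements to zero airmass recovers the extraterrestrial signal V0. A minimal sketch with synthetic data (the calibration of a real FTS spectrum is far more involved):

```python
from math import log, exp

def langley_intercept(airmass, signal):
    """Ordinary least-squares fit of ln(signal) against airmass; the
    zero-airmass intercept gives the extraterrestrial signal and the
    negative slope gives the optical depth (Beer-Lambert law)."""
    y = [log(v) for v in signal]
    n = len(airmass)
    mx = sum(airmass) / n
    my = sum(y) / n
    slope = (sum((m - mx) * (yy - my) for m, yy in zip(airmass, y))
             / sum((m - mx) ** 2 for m in airmass))
    intercept = my - slope * mx
    return exp(intercept), -slope  # (V0, optical depth tau)

# Synthetic clear-sky morning with true V0 = 2.0 and tau = 0.3:
m_vals = [1.0, 1.5, 2.0, 3.0, 4.0]
v_vals = [2.0 * exp(-0.3 * m) for m in m_vals]
v0, tau = langley_intercept(m_vals, v_vals)
```

The key assumption, as in the CAVIAR work, is that the atmosphere stays stable while the airmass range is swept; violations of this are a major error source in Langley calibrations.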
Abstract:
Ensemble-based data assimilation is rapidly proving itself as a computationally-efficient and skilful assimilation method for numerical weather prediction, which can provide a viable alternative to more established variational assimilation techniques. However, a fundamental shortcoming of ensemble techniques is that the resulting analysis increments can only span a limited subspace of the state space, whose dimension is less than the ensemble size. This limits the amount of observational information that can effectively constrain the analysis. In this paper, a data selection strategy that aims to assimilate only the observational components that matter most and that can be used with both stochastic and deterministic ensemble filters is presented. This avoids unnecessary computations, reduces round-off errors and minimizes the risk of importing observation bias in the analysis. When an ensemble-based assimilation technique is used to assimilate high-density observations, the data-selection procedure allows the use of larger localization domains that may lead to a more balanced analysis. Results from the use of this data selection technique with a two-dimensional linear and a nonlinear advection model using both in situ and remote sounding observations are discussed.
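One crude way to picture "assimilating only the observational components that matter most" is to keep only those components where the ensemble spread is large relative to the observation error, i.e. where the observation can actually constrain the ensemble. This is a stand-in criterion for illustration; the paper's actual selection strategy may differ, and all names below are hypothetical:

```python
def select_observations(ens_obs, obs_err_var, threshold=1.0):
    """Keep observation components whose ensemble variance exceeds
    threshold times the observation error variance -- a simple
    signal-to-noise screen (illustrative, not the paper's algorithm)."""
    n_ens = len(ens_obs)
    n_obs = len(ens_obs[0])
    selected = []
    for j in range(n_obs):
        col = [member[j] for member in ens_obs]
        mean = sum(col) / n_ens
        var = sum((v - mean) ** 2 for v in col) / (n_ens - 1)
        if var / obs_err_var[j] > threshold:
            selected.append(j)
    return selected

# 4 ensemble members mapped into a 3-component observation space;
# only component 0 has spread exceeding the unit observation error.
ens_obs = [[1.0, 5.0, 2.0],
           [3.0, 5.1, 2.2],
           [2.0, 4.9, 1.8],
           [4.0, 5.0, 2.0]]
obs_err = [1.0, 1.0, 1.0]
keep = select_observations(ens_obs, obs_err)
```

Discarding the low-spread components saves computation and avoids forcing the analysis in directions the ensemble cannot represent, which is the motivation the abstract gives.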