973 results for reciprocal space mapping
Abstract:
Over the last few decades, there has been significant land cover (LC) change across the globe, driven by the demands of a burgeoning population and urban sprawl. To account for this change, accurate and up-to-date LC maps are needed. Mapping and monitoring of LC in India is carried out at the national level using multi-temporal IRS AWiFS data. Multispectral data such as IKONOS, Landsat-TM/ETM+, IRS-1C/1D LISS-III/IV, AWiFS and SPOT-5 have adequate spatial resolution (~1 m to 56 m) for LC mapping at the 1:50,000 scale. However, for developing countries and those with large geographical extent, seasonal LC mapping is prohibitive with data from commercial sensors of limited spatial coverage. Superspectral data from the MODIS sensor are freely available and offer better temporal resolution (8-day composites) and spectral information. MODIS pixels typically contain a mixture of various LC types (owing to the coarse spatial resolution of 250, 500 and 1000 m), especially in more fragmented landscapes. In this context, linear spectral unmixing is useful for mapping patchy land covers, such as those that characterise much of the Indian subcontinent. This work evaluates the existing unmixing technique for LC mapping using MODIS data, with end-members extracted through the Pixel Purity Index (PPI), scatter plots and N-dimensional visualisation. Abundance maps were generated for agriculture, built-up land, forest, plantations, waste land/others and water bodies. Assessment of the results using ground truth and a LISS-III classified map shows 86% overall accuracy, suggesting the potential for broad-scale applicability of the technique with superspectral data for natural resource planning and inventory applications.
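To make the unmixing step concrete, below is a minimal sketch of linear spectral unmixing with non-negativity enforced by NNLS and the sum-to-one constraint softly enforced by row augmentation; the endmember matrix and pixel spectrum are hypothetical stand-ins, not values from the study, and the abstract's PPI-based endmember extraction is not shown.

```python
import numpy as np
from scipy.optimize import nnls

def unmix_pixel(E, x, delta=1e3):
    """Estimate abundance fractions for one pixel.

    E : (bands, endmembers) endmember signature matrix
    x : (bands,) observed pixel spectrum
    Non-negativity comes from NNLS; the sum-to-one constraint is
    softly enforced by appending a heavily weighted row of ones.
    """
    E_aug = np.vstack([E, delta * np.ones(E.shape[1])])
    x_aug = np.append(x, delta)
    a, _ = nnls(E_aug, x_aug)
    return a

# Hypothetical example: 4 bands, 3 endmembers (e.g. agriculture, forest, water)
E = np.array([[0.30, 0.05, 0.02],
              [0.45, 0.08, 0.03],
              [0.50, 0.40, 0.02],
              [0.60, 0.45, 0.01]])
x = 0.6 * E[:, 0] + 0.3 * E[:, 1] + 0.1 * E[:, 2]  # 60/30/10 mixture
print(unmix_pixel(E, x))  # ~[0.6, 0.3, 0.1]
```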
Abstract:
Governments and intergovernmental organisations have long recognised that space communities – the ultimate ‘settlements at the edge’ – will exist one day and have based their first plans for these on another region ‘at the edge’, the Antarctic. United States President Eisenhower proposed to the United Nations in 1960 that the principles of the Antarctic Treaty be applied to outer space and celestial bodies (State Department, n.d.). Three years later the UN adopted the Declaration of Legal Principles Governing the Activities of States in the Exploration and Use of Outer Space, and in 1967 this became the Outer Space Treaty. According to the UN Office for Outer Space Affairs, ‘the Treaty was opened for signature by the three depository Governments (the Russian Federation, the United Kingdom and the United States of America) in January 1967, and it entered into force in October 1967’ (Office for Outer Space Affairs, n.d.). The status of the treaty (at the time of writing) was 89 signatories and 102 parties (Office for Disarmament Affairs, n.d.). Other related instruments include the Rescue Agreement, the Liability Convention, the Registration Convention and the Moon Agreement (Office for Outer Space Affairs, n.d.-a). Jumping to the present, a news agency reported in July 2014 (Reuters, 2014) that the British Government had shortlisted eight aerodromes in its search for a potential base for the UK’s first spaceplane flights, which Ministers want to happen by 2018 (UK Space Agency, 2014). The United States already has a spaceport, in New Mexico (Cokley, Rankin, Heinrich, & McAuliffe, 2013)...
Abstract:
This work grew out of an attempt to understand a conjectural remark made by Professor Kyoji Saito to the author about a possible link between the Fox-calculus description of the symplectic structure on the moduli space of representations of the fundamental group of surfaces into a Lie group and pairs of mutually dual sets of generators of the fundamental group. In fact, in his paper [3], Prof. Kyoji Saito gives an explicit description of the system of dual generators of the fundamental group.
Abstract:
For point-to-point multiple-input multiple-output systems, Dayal, Brehler and Varanasi have proved that training codes achieve the same diversity order as that of the underlying coherent space-time block code (STBC) if a simple minimum mean squared error (MMSE) estimate of the channel, formed using the training part, is employed for coherent detection of the underlying STBC. In this letter, a similar strategy involving a combination of training, channel estimation and detection in conjunction with existing coherent distributed STBCs is proposed for noncoherent communication in Amplify-and-Forward (AF) relay networks. Simulation results show that the proposed simple strategy outperforms distributed differential space-time coding for AF relay networks. Finally, the proposed strategy is extended to asynchronous relay networks using orthogonal frequency division multiplexing.
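As a rough illustration of the training-based strategy (a hedged sketch for a single scalar link, not the letter's distributed-STBC construction), the snippet below forms an MMSE channel estimate from a unit pilot and then uses it for coherent detection of a BPSK symbol; the power and noise values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
P, sigma2 = 10.0, 1.0                                  # assumed pilot power, noise variance
h = (rng.normal() + 1j * rng.normal()) / np.sqrt(2)    # Rayleigh-fading channel, h ~ CN(0, 1)

# Training phase: y_t = sqrt(P) * h + n_t  (unit pilot symbol)
y_t = np.sqrt(P) * h + (rng.normal() + 1j * rng.normal()) * np.sqrt(sigma2 / 2)

# MMSE channel estimate given the prior h ~ CN(0, 1)
h_hat = np.sqrt(P) * y_t / (P + sigma2)

# Data phase: coherent ML detection of a BPSK symbol using h_hat
s = 1.0
y = h * s + (rng.normal() + 1j * rng.normal()) * np.sqrt(sigma2 / 2)
s_hat = 1.0 if np.real(np.conj(h_hat) * y) >= 0 else -1.0
print(h, h_hat, s_hat)
```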
Abstract:
We design rapidly folding sequences by assigning the strongest couplings to the contacts present in a target native state in a two-dimensional model of heteropolymers. The pathways to folding and their dependence on temperature are illustrated via a mapping of the dynamics into motion within the space of the maximally compact cells.
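A hedged sketch of the design rule, assigning the strongest (most negative) couplings to native contacts in a Gō-like lattice model, is given below; the chain length, contact list and coupling values are hypothetical.

```python
import numpy as np

def coupling_matrix(n, native_contacts, strong=-2.0, weak=-0.5):
    """Assign the strongest (most negative) couplings to native contacts."""
    J = np.full((n, n), weak)
    for i, j in native_contacts:
        J[i, j] = J[j, i] = strong
    return J

def energy(conformation, J):
    """Sum couplings over non-bonded monomer pairs at unit lattice distance."""
    E = 0.0
    for i in range(len(conformation)):
        for j in range(i + 2, len(conformation)):
            if np.sum(np.abs(np.asarray(conformation[i]) -
                             np.asarray(conformation[j]))) == 1:
                E += J[i, j]
    return E

# Hypothetical 4-monomer chain folded into a 2x2 square: one native contact (0, 3)
J = coupling_matrix(4, [(0, 3)])
print(energy([(0, 0), (1, 0), (1, 1), (0, 1)], J))  # -> -2.0 (the strong coupling)
```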
Abstract:
A single-step solid-phase RIA (SS-SPRIA) developed in our laboratory using hybridoma culture supernatants has been utilised for the quantitation of epitope-paratope interactions. Using SS-SPRIA as a quantitative tool for the assessment of epitope stability, it was found that several assembled epitopes of human chorionic gonadotropin (hCG) are differentially stable to proteolysis and chemical modification. Based on these observations, an approach has been developed for identifying the amino acid residues constituting an epitopic region. This approach has now been used to map an assembled epitope at or near the receptor binding region of the hormone. The mapped site forms a part of the seat belt region and the cystine knot region (C34-C38-C88-C90-H106). The carboxy-terminal region of the alpha-subunit forms a part of the epitope, indicating its proximity to the receptor binding region. These results are in agreement with the reported receptor binding region identified through other approaches and with the X-ray crystal structure of hCG.
Abstract:
The Taylor coefficients c and d of the EM form factor of the pion are constrained using analyticity, knowledge of the phase of the form factor in the time-like region, 4m_π² ≤ t ≤ t_in, and its value at one space-like point, using as input the (g − 2) of the muon. This is achieved using the technique of Lagrange multipliers, which gives a transparent expression for the corresponding bounds. We present a detailed study of the sensitivity of the bounds to the choice of time-like phase and the errors present in the space-like data, taken from recent experiments. We find that our results constrain c stringently. We compare our results with those in the literature and find agreement with the chiral perturbation theory results for c. We obtain d ∼ O(10) GeV⁻⁶ when c is set to the chiral perturbation theory values.
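For context, these coefficients are conventionally defined through the low-energy Taylor expansion of the form factor; the normalisation below is the standard one in this literature and is an assumption here, since the abstract does not spell it out:

```latex
F_\pi(t) = 1 + \frac{1}{6}\langle r_\pi^2 \rangle\, t + c\, t^2 + d\, t^3 + \cdots
```

With t in GeV², c then carries units of GeV⁻⁴ and d of GeV⁻⁶, consistent with the quoted bound d ∼ O(10) GeV⁻⁶.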
Abstract:
A defect-selective photothermal imaging system for the diagnostics of optical coatings is demonstrated. The instrument has been optimized for pump and probe parameters, detector performance, and the signal processing algorithm. The imager is capable of efficiently mapping purely optical or thermal defects in coatings of low damage threshold and low absorbance. Detailed mapping of minor inhomogeneities at low pump power has been achieved through the simultaneous action of a low-noise fiber optic photothermal beam deflection sensor and a common-mode-rejection demodulation (CMRD) technique. The linearity and sensitivity of the sensor have been examined theoretically and experimentally, and the signal-to-noise ratio improvement factor is found to be about 110 compared to a conventional bicell photodiode. The scanner is designed so that mapping of static or shock-sensitive samples is possible. In the case of a sample with an absolute absorptance of 3.8 × 10⁻⁴, a change in absorptance of about 0.005 × 10⁻⁴ has been detected without ambiguity, ensuring a contrast parameter of 760. This is an improvement of about 1085% over the conventional approach using a bicell photodiode, at the same pump power. The merits of the system have been demonstrated by mapping two intentionally created damage sites in a MgF2 coating on fused silica at different excitation powers. Amplitude and phase maps were recorded for thermally thin and thick cases, and the results are compared to demonstrate a case which, in conventional imaging, would lead to a deceptive conclusion regarding the type and location of the damage. Also, a residual damage profile created by long-term irradiation with high pump power density has been depicted.
Abstract:
Recent research suggests that aggressive driving may be influenced by driver perceptions of their interactions with other drivers in terms of ‘right’ or ‘wrong’ behaviour. Drivers appear to take a moral standpoint on ‘right’ or ‘wrong’ driving behaviour. However, ‘right’ or ‘wrong’ in the context of road use is not defined solely by legislation, but includes informal rules that are sometimes termed ‘driving etiquette’. Driving etiquette has implications for road safety and public safety, since breaches of both formal and informal rules may result in moral judgement of others and subsequent behaviours designed to punish the ‘offender’ or ‘teach them a lesson’. This paper outlines qualitative research undertaken with drivers to explore their understanding of driving etiquette and how they reacted to other drivers’ observance or violation of that understanding. The aim was to develop an explanatory framework within which the relationships between driving etiquette and aggressive driving could be understood, specifically moral judgement of other drivers and punishment of their transgressions of driving etiquette. Thematic analysis of focus groups (n=10) generated three main themes: (1) courtesy and reciprocity, and the notion of two-way responsibility, with examples of how expectations of courteous behaviour vary according to the traffic interaction; (2) acknowledgement and shared social experience: ‘giving the wave’; and (3) responses to breaches of the expectations/informal rules. The themes are discussed in terms of their roles in an explanatory framework of the informal rules of etiquette and how interactions between drivers can reinforce or weaken a driver’s understanding of driving etiquette and potentially lead to driving aggression.
Abstract:
Remote sensing provides a lucid and effective means for crop coverage identification. Crop coverage identification is a very important technique, as it provides vital information on the type and extent of crop cultivated in a particular area. This information has immense potential in planning for further cultivation activities and for optimal usage of the available fertile land. As the frontiers of space technology advance, the knowledge derived from satellite data has also grown in sophistication. Further, image classification forms the core of the solution to the crop coverage identification problem. No single classifier can satisfactorily solve all the basic crop cover mapping problems of a cultivated region. We present in this paper the experimental results of multiple classification techniques for the problem of crop cover mapping of a cultivated region, with a detailed comparison of algorithms inspired by the social behaviour of insects and a conventional statistical method: the Maximum Likelihood Classifier (MLC), Particle Swarm Optimisation (PSO) and Ant Colony Optimisation (ACO) techniques. A high-resolution satellite image has been used for the experiments.
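As a small illustration of the statistical baseline, below is a minimal sketch of a Gaussian Maximum Likelihood Classifier of the kind commonly meant by MLC; the two-band training data are synthetic stand-ins, and the PSO/ACO classifiers are not shown.

```python
import numpy as np

class GaussianMLC:
    """Per-class multivariate Gaussian maximum likelihood classifier."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.params_ = {}
        for c in self.classes_:
            Xc = X[y == c]
            self.params_[c] = (Xc.mean(axis=0), np.cov(Xc, rowvar=False))
        return self

    def predict(self, X):
        scores = []
        for c in self.classes_:
            mu, S = self.params_[c]
            diff = X - mu
            Sinv = np.linalg.inv(S)
            # Gaussian log-likelihood up to an additive constant
            ll = -0.5 * (np.log(np.linalg.det(S)) +
                         np.einsum('ij,jk,ik->i', diff, Sinv, diff))
            scores.append(ll)
        return self.classes_[np.argmax(scores, axis=0)]

# Hypothetical 2-band pixel samples for two crop classes
rng = np.random.default_rng(1)
X = np.vstack([rng.normal([0.2, 0.5], 0.05, (50, 2)),
               rng.normal([0.6, 0.3], 0.05, (50, 2))])
y = np.repeat([0, 1], 50)
print(GaussianMLC().fit(X, y).predict(X[:3]))  # -> [0 0 0]
```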
Abstract:
Rapid and unplanned growth of Kathmandu Valley towns over the past decades has resulted in the haphazard development of new neighbourhoods, with significant consequences for their public space. This paper examines the development of public space in the valley’s new neighbourhoods in the context of the current urban growth. A case study approach covering three new neighbourhoods was developed to examine the provision of public space, with data collected from site observations, interviews with neighbourhood residents and other secondary sources. The case studies consist of both planned and unplanned new neighbourhoods. Findings reveal a severe loss of public space in the unplanned new neighbourhoods. In planned new neighbourhoods, the provision of public space remains poor in terms of physical features and thus does not support community activities and needs. Several factors, which are an outcome of the lack of proper urban growth initiatives and control measures, such as overall shortcomings in the formation of new neighbourhoods, the poor capacity of local community-based organisations and the encroachment of public land, are responsible for the present state of neighbourhood public space. The problems with ongoing management of public spaces are a significant issue in both unplanned and planned new neighbourhoods.
Abstract:
Neural data are inevitably contaminated by noise. When such noisy data are subjected to statistical analysis, misleading conclusions can be reached. Here we attempt to address this problem by applying a state-space smoothing method, based on the combined use of the Kalman filter theory and the Expectation–Maximization algorithm, to denoise two datasets of local field potentials recorded from monkeys performing a visuomotor task. For the first dataset, it was found that the analysis of the high gamma band (60–90 Hz) neural activity in the prefrontal cortex is highly susceptible to the effect of noise, and denoising leads to markedly improved results that were physiologically interpretable. For the second dataset, Granger causality between primary motor and primary somatosensory cortices was not consistent across two monkeys and the effect of noise was suspected. After denoising, the discrepancy between the two subjects was significantly reduced.
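A minimal sketch of the denoising idea is given below: a scalar local-level state-space model smoothed with a forward Kalman filter and a backward Rauch-Tung-Striebel pass. The model, its parameters and the test signal are simplified assumptions, and the Expectation-Maximization step the authors use to estimate parameters is omitted.

```python
import numpy as np

def kalman_rts_smoother(y, a=0.98, q=0.1, r=1.0):
    """Denoise y under x_t = a*x_{t-1} + w_t, y_t = x_t + v_t,
    with w ~ N(0, q) and v ~ N(0, r). Returns the smoothed states."""
    n = len(y)
    xf, Pf, xp, Pp = np.zeros(n), np.zeros(n), np.zeros(n), np.zeros(n)
    x, P = 0.0, 1.0
    for t in range(n):                     # forward Kalman filter
        xp[t], Pp[t] = a * x, a * a * P + q
        K = Pp[t] / (Pp[t] + r)
        x = xp[t] + K * (y[t] - xp[t])
        P = (1 - K) * Pp[t]
        xf[t], Pf[t] = x, P
    xs = xf.copy()
    for t in range(n - 2, -1, -1):         # backward RTS smoothing pass
        G = Pf[t] * a / Pp[t + 1]
        xs[t] = xf[t] + G * (xs[t + 1] - xp[t + 1])
    return xs

# Hypothetical noisy trace standing in for a recorded signal
rng = np.random.default_rng(2)
truth = np.sin(np.linspace(0, 8, 400))
print(kalman_rts_smoother(truth + rng.normal(0, 1.0, 400))[:5])
```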
Abstract:
Distributed space-time coding for wireless relay networks in which the source, the destination and the relays have multiple antennas has been studied by Jing and Hassibi. In this set-up, the transmit and receive signals at different antennas of the same relay are processed and designed independently, even though the antennas are colocated. In this paper, a wireless relay network with a single antenna at the source and the destination and two antennas at each of the R relays is considered. A new class of distributed space-time block codes called Co-ordinate Interleaved Distributed Space-Time Codes (CIDSTC) is introduced where, in the first phase, the source transmits a T-length complex vector to all the relays; in the second phase, at each relay, the in-phase and quadrature component vectors of the received complex vectors at the two antennas are interleaved and processed before being forwarded to the destination. Compared to the scheme proposed by Jing-Hassibi, for T >= 4R, while providing the same asymptotic diversity order of 2R, the CIDSTC scheme is shown to provide an asymptotic coding gain at the cost of a negligible increase in processing complexity at the relays. However, for moderate and large values of P, the CIDSTC scheme is shown to provide more diversity than the scheme proposed by Jing-Hassibi. CIDSTCs are shown to be fully diverse provided the information symbols take values from an appropriate multidimensional signal set.
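The coordinate-interleaving step at a relay can be pictured as below: a hypothetical minimal rendering (not the paper's full CIDSTC relay processing) in which the quadrature components of the vectors received at the two colocated antennas are swapped before further processing.

```python
import numpy as np

def coordinate_interleave(r1, r2):
    """Swap quadrature components between the vectors received at the
    two colocated antennas of a relay: the first output keeps the
    in-phase part of r1 with the quadrature part of r2, and vice versa."""
    u1 = np.real(r1) + 1j * np.imag(r2)
    u2 = np.real(r2) + 1j * np.imag(r1)
    return u1, u2

r1 = np.array([1 + 2j, 3 + 4j])
r2 = np.array([5 + 6j, 7 + 8j])
print(coordinate_interleave(r1, r2))  # ([1+6j, 3+8j], [5+2j, 7+4j])
```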
Abstract:
A forest of quadtrees is a refinement of the quadtree data structure used to represent planar regions. A forest of quadtrees provides space savings over regular quadtrees by concentrating vital information. The paper presents some of the properties of a forest of quadtrees and studies the storage requirements for the case in which a single 2^m × 2^m region is equally likely to occur in any position within a 2^n × 2^n image. Space and time efficiency are investigated for the forest-of-quadtrees representation as compared with the quadtree representation for various cases.
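For context, below is a minimal sketch of the underlying region quadtree over a binary image; the forest-of-quadtrees refinement, which stores selected subtrees as separate roots to concentrate the vital information, is not shown, and all names are illustrative.

```python
import numpy as np

def build_quadtree(img):
    """Recursively build a region quadtree for a square binary image
    whose side is a power of two. A node is either a uniform leaf
    ('B' or 'W') or a grey node holding its four quadrant subtrees
    in NW, NE, SW, SE order."""
    if img.min() == img.max():
        return 'B' if img[0, 0] else 'W'
    h = img.shape[0] // 2
    return [build_quadtree(img[:h, :h]), build_quadtree(img[:h, h:]),
            build_quadtree(img[h:, :h]), build_quadtree(img[h:, h:])]

img = np.zeros((4, 4), dtype=int)
img[0:2, 0:2] = 1            # a 2x2 black block in the NW corner
print(build_quadtree(img))   # -> ['B', 'W', 'W', 'W']
```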
Abstract:
Fluid bed granulation is a key pharmaceutical process which improves many of the powder properties required for tablet compression. The fluid bed granulation process comprises dry mixing, wetting and drying phases. Granules of high quality can be obtained by understanding and controlling the critical process parameters through timely measurements. Process analytical technologies (PAT) include physical process measurements and particle size data of a fluid bed granulator, analysed in an integrated manner. Recent regulatory guidelines strongly encourage the pharmaceutical industry to apply scientific and risk management approaches to the development of a product and its manufacturing process.

The aim of this study was to utilise PAT tools to increase process understanding of fluid bed granulation and drying. Inlet air humidity levels and granulation liquid feed affect powder moisture during fluid bed granulation, and moisture influences many process, granule and tablet qualities. The approach in this thesis was to identify sources of variation that are mainly related to moisture, to determine correlations and relationships, and to utilise the PAT and design space concepts for fluid bed granulation and drying.

Monitoring the material behaviour in a fluidised bed has traditionally relied on the observational ability and experience of an operator. There has been a lack of good criteria for characterising material behaviour during the spraying and drying phases, even though the entire performance of the process and the end product quality depend on it. The granules were produced in an instrumented bench-scale Glatt WSG5 fluid bed granulator. The effect of inlet air humidity and granulation liquid feed on temperature measurements at different locations of the fluid bed granulator system was determined. This revealed dynamic changes in the measurements and enabled identification of the optimal sites for process control. The moisture originating from the granulation liquid and inlet air affected the temperature of the mass and the pressure difference over the granules. Moreover, the effects of inlet air humidity and granulation liquid feed rate on granule size were evaluated, and compensatory techniques were used to optimise particle size.

Various end-point indication techniques for drying were compared. The ∆T method, which is based on thermodynamic principles, eliminated the effects of humidity variations and gave the most precise estimation of the drying end-point. The influence of fluidisation behaviour on drying end-point detection was determined: the feasibility of the ∆T method, and thus the similarity of end-point moisture contents, was found to depend on the variation in fluidisation between manufacturing batches.

A novel parameter that describes the behaviour of material in a fluid bed was developed. It was calculated from the flow rate of the process air and the turbine fan speed, and was compared with the fluidisation behaviour and the particle size results. Design space process trajectories for smooth fluidisation based on the fluidisation parameters were determined; with this design space it is possible to avoid excessive fluidisation as well as improper fluidisation and bed collapse. Furthermore, various process phenomena and failure modes were observed with the in-line particle size analyser: both rapid increases and decreases in granule size could be monitored in a timely manner. The fluidisation parameter and the pressure difference over the filters were also found to reflect particle size once the granules had formed. The various physical parameters evaluated in this thesis give valuable information on fluid bed process performance and increase process understanding.
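The abstract does not spell out how the ∆T end-point criterion is computed. One plausible minimal reading (an assumption, not the thesis's published method) is the difference between inlet air and product mass temperatures, with drying declared complete once ∆T falls below a threshold:

```python
import numpy as np

def drying_endpoint(t_inlet, t_mass, threshold=2.0):
    """Flag the drying end-point as the first sample where the
    inlet-air/product temperature difference drops below a threshold.
    The 2 degC threshold is an illustrative assumption."""
    dT = np.asarray(t_inlet) - np.asarray(t_mass)
    below = np.flatnonzero(dT < threshold)
    return int(below[0]) if below.size else None

# Hypothetical traces: the mass temperature approaches the inlet
# temperature as evaporative cooling decays towards the end of drying
t = np.arange(60.0)
t_inlet = np.full_like(t, 50.0)
t_mass = 50.0 - 30.0 * np.exp(-t / 15.0)
print(drying_endpoint(t_inlet, t_mass))   # index where dT first < 2 degC
```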