34 results for Flail space model
Abstract:
Abundant evidence for the occurrence of modulated envelope plasma wave packets is provided by recent satellite missions. These excitations are characterized by a slowly varying localized envelope structure, embedding the fast carrier wave, which appears to be the result of strong modulation of the wave amplitude. This modulation may be due to parametric interactions between different modes or simply to the nonlinear (self-)interaction of the carrier wave. A generic exact theory is presented in this study for the nonlinear self-modulation of known electrostatic plasma modes, employing a collisionless fluid model. Both cold (zero-temperature) and warm fluid descriptions are discussed and the results are compared. The (moderately) nonlinear oscillation regime is investigated by applying a multiple-scale technique. The calculation leads to a nonlinear Schrödinger-type equation (NLSE), which describes the evolution of the slowly varying wave amplitude in time and space. The NLSE admits localized envelope (solitary wave) solutions of bright (pulse) or dark (hole, void) type, whose characteristics (maximum amplitude, width) depend on intrinsic plasma parameters. Effects such as the obliqueness of the amplitude perturbation (with respect to the propagation direction), finite temperature and defect (dust) concentration are explicitly considered. The relevance to similar highly localized modulated wave structures observed during recent satellite missions is discussed.
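For orientation, a minimal sketch of the equation type involved (generic form; the coefficients P and Q stand in for the dispersion and nonlinearity coefficients that the paper derives from the plasma parameters):

\[
i\,\frac{\partial \psi}{\partial \tau} + P\,\frac{\partial^2 \psi}{\partial \xi^2} + Q\,|\psi|^2\,\psi = 0,
\]

where ψ is the slowly varying envelope amplitude and (ξ, τ) are the stretched space and time coordinates. Bright (pulse-type) envelope solitons exist for PQ > 0, with width inversely proportional to the maximum amplitude, while dark (hole-type) solitons exist for PQ < 0.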
Abstract:
Universities aim for good “Space Management” so as to use the teaching space efficiently. Part of this task is to assign rooms and time-slots to teaching activities with limited numbers and capacities of lecture theaters, seminar rooms, etc. It is also common that some teaching activities require splitting into multiple events. For example, lectures can be too large to fit in one room or good teaching practice requires that seminars/tutorials are taught in small groups. Then, space management involves decisions on splitting as well as the assignments to rooms and time-slots. These decisions must be made whilst satisfying the pedagogic requirements of the institution and constraints on space resources. The efficiency of such management can be measured by the “utilisation”: the percentage of available seat-hours actually used. In many institutions, the observed utilisation is unacceptably low, and this provides our underlying motivation: to study the factors that affect teaching space utilisation, with the goal of improving it. We give a brief introduction to our work in this area, and then introduce a specific model for splitting. We present experimental results that show threshold phenomena and associated easy-hard-easy patterns of computational difficulty. We discuss why such behaviour is of importance for space management.
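For concreteness, a minimal sketch of the utilisation metric (illustrative field names and numbers, not taken from the paper):

```python
# Utilisation = used seat-hours / available seat-hours.
# Field names and figures below are illustrative only.

def utilisation(events, rooms, hours_open):
    """events: (attendance, duration_hours, room_id) tuples;
    rooms: room_id -> seat capacity; hours_open: teaching hours per room."""
    used = sum(att * dur for att, dur, _ in events)
    available = sum(cap * hours_open for cap in rooms.values())
    return used / available

rooms = {"LT1": 200, "SEM1": 30}
events = [(150, 2, "LT1"), (25, 1, "SEM1")]     # one lecture, one seminar
print(f"{utilisation(events, rooms, 40):.1%}")  # 40 h/week -> 3.5%
```

Splitting interacts directly with this metric: dividing a 150-student lecture into several small-room events changes both the seat-hours used and which rooms can host each event.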
Abstract:
The motivation for this paper is to present an approach for rating the quality of the parameters in a computer-aided design (CAD) model for use as optimization variables. Parametric effectiveness is computed as the ratio of the change in performance achieved by perturbing the parameters in the optimal way to the change in performance that would be achieved by allowing the boundary of the model to move without the constraint on shape change enforced by the CAD parameterization. The approach is applied in this paper to optimization based on adjoint shape sensitivity analysis. The derivation of parametric effectiveness is presented for optimization both with and without the constraint of constant volume. In both cases, the movement of the boundary is normalized with respect to a small root-mean-squared movement of the boundary. The approach can be used to select an initial search direction in parameter space, or to select the sets of model parameters with the greatest ability to improve model performance. The approach is applied to a number of example 2D and 3D FEA and CFD problems.
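Schematically (our notation, not the authors'), the rating can be written as

\[
\eta \;=\; \frac{\delta J_{\text{CAD}}}{\delta J_{\text{free}}},
\]

where δJ_CAD is the performance change achieved by the optimal perturbation of the CAD parameters and δJ_free is the change achieved by an unconstrained movement of the boundary, both evaluated at the same small root-mean-squared boundary displacement. A value of η near 1 indicates the parameterization captures almost all of the design freedom indicated by the adjoint sensitivities; a value near 0 indicates the parameters cannot reproduce the beneficial shape change.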
Abstract:
Here a self-consistent one-dimensional continuum model is presented for a narrow-gap plane-parallel dc glow discharge. The governing equations consist of continuity and momentum equations for positive and negative ions and electrons, coupled with Poisson's equation. A singular perturbation method is developed for the analysis of the high-pressure dc glow discharge. The kinetic processes of ionization, electron attachment, and ion-ion recombination are included in the model. Explicit results are obtained in the asymptotic limits δ = (r_D/L)² → 0 and ω = (r_S/L)² → 0, where r_D is the Debye radius, r_S is the recombination length, and L is the gap length. The discharge gap divides naturally into four regions with multiple space scales (the anode fall region, positive column, transitional region, and cathode fall region), plus a diffusion layer adjacent to the cathode surface, whose formation is discussed. The effects of the gas pressure, gap spacing and dc voltage on the electrical properties of the layers and their dimensions are investigated.
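Schematically, the governing system is of drift-diffusion type (a generic form; the paper's exact source terms and boundary conditions differ in detail):

\[
\frac{\partial n_k}{\partial t} + \nabla\!\cdot\!\Gamma_k = S_k,
\qquad
\Gamma_k = \operatorname{sgn}(q_k)\,\mu_k n_k \mathbf{E} - D_k \nabla n_k,
\qquad
\nabla^2\phi = -\frac{e}{\varepsilon_0}\,(n_+ - n_- - n_e),
\]

where k ∈ {+, −, e} labels positive ions, negative ions and electrons, S_k collects the ionization, attachment and ion-ion recombination terms, and E = −∇φ.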
Abstract:
This paper proposes a two-level 3D human pose tracking method for a specific action captured by several cameras. The generation of pose estimates relies on fitting a 3D articulated model to a Visual Hull generated from the input images. First, an initial pose estimate is constrained by a low-dimensional manifold learnt by Temporal Laplacian Eigenmaps. Then, an improved global pose is calculated by refining individual limb poses. The validation of our method uses a public standard dataset and demonstrates its accuracy and computational efficiency.
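A minimal sketch of the manifold-learning step, using plain Laplacian Eigenmaps via scikit-learn's SpectralEmbedding on toy data (the paper's Temporal Laplacian Eigenmaps additionally encodes temporal neighbourhood structure, which this sketch omits):

```python
import numpy as np
from sklearn.manifold import SpectralEmbedding

# Toy pose data: 500 frames, 15 joints x 3 coordinates = 45-D pose vectors.
rng = np.random.default_rng(0)
poses = rng.random((500, 45))

# Laplacian Eigenmaps: embed poses onto a low-dimensional manifold that
# preserves local neighbourhood structure; the pose search is then
# constrained to (the neighbourhood of) this manifold.
embedding = SpectralEmbedding(n_components=3, n_neighbors=10)
latent = embedding.fit_transform(poses)
print(latent.shape)  # (500, 3)
```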
Abstract:
We propose the inverse Gaussian distribution, as a less complex alternative to the classical log-normal model, to describe turbulence-induced fading in free-space optical (FSO) systems operating in weak turbulence conditions and/or in the presence of aperture-averaging effects. By conducting goodness-of-fit tests, we define the range of values of the scintillation index for various multiple-input multiple-output (MIMO) FSO configurations where the two distributions approximate each other at a given significance level. Furthermore, the bit-error-rate performance of two typical MIMO FSO systems is investigated over the new turbulence model: an intensity-modulation/direct-detection MIMO FSO system with Q-ary pulse position modulation that employs repetition coding at the transmitter and equal-gain combining at the receiver, and a heterodyne MIMO FSO system with differential phase-shift keying and maximal-ratio combining at the receiver. Finally, numerical results are presented that validate the theoretical analysis and provide useful insights into the implications of the model parameters for overall system performance.
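For reference, the inverse Gaussian density with mean μ and shape parameter λ is

\[
f(I) = \sqrt{\frac{\lambda}{2\pi I^{3}}}\;
\exp\!\left(-\frac{\lambda\,(I-\mu)^{2}}{2\mu^{2} I}\right),\qquad I>0,
\]

so that, for irradiance normalized to unit mean (μ = 1), the scintillation index is σ_I² = Var[I]/E[I]² = 1/λ (our parameterization; the paper's notation may differ).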
Abstract:
Prediction of biotic responses to future climate change in tropical Africa tends to be based on two modelling approaches: bioclimatic species envelope models and dynamic vegetation models. Another complementary but underused approach is to examine biotic responses to similar climatic changes in the past as evidenced in fossil and historical records. This paper reviews these records and highlights the information that they provide in terms of understanding the local- and regional-scale responses of African vegetation to future climate change. A key point that emerges is that a move to warmer and wetter conditions in the past resulted in a large increase in biomass and an expansion of the range of woody plants up to 400–500 km north of its present location, the so-called greening of the Sahara. By contrast, a transition to warmer and drier conditions resulted in a reduction in woody vegetation in many regions and an increase in grass/savanna-dominated landscapes. The rapid rate of climate warming coming into the current interglacial resulted in a dramatic increase in community turnover, but there is little evidence for widespread extinctions. However, huge variation in biotic response in both space and time is apparent with, in some cases, totally different responses to the same climatic driver. This highlights the importance of local features such as soils, topography and also internal biotic factors in determining responses and resilience of the African biota to climate change, information that is difficult to obtain from modelling but is abundant in palaeoecological records.
Abstract:
1. Quantitative reconstruction of past vegetation distribution and abundance from sedimentary pollen records provides an important baseline for understanding long term ecosystem dynamics and for the calibration of earth system process models such as regional-scale climate models, widely used to predict future environmental change. Most current approaches assume that the amount of pollen produced by each vegetation type, usually expressed as a relative pollen productivity term, is constant in space and time.
2. Estimates of relative pollen productivity can be extracted from extended R-value analysis (Parsons and Prentice, 1981) using comparisons between pollen assemblages deposited in sedimentary contexts, such as moss polsters, and measurements of the present-day vegetation cover around the sampled location. The vegetation survey method has been shown to have a profound effect on estimates of model parameters (Bunting and Hjelle, 2010); a standard method is therefore an essential prerequisite for testing some of the key assumptions of pollen-based reconstruction of past vegetation, such as the assumption that relative pollen productivity is effectively constant in space and time within a region or biome (the underlying model is sketched after this list).
3. This paper systematically reviews the assumptions and methodology underlying current models of pollen dispersal and deposition, and thereby identifies the key characteristics of an effective vegetation survey method for estimating relative pollen productivity in a range of landscape contexts.
4. It then presents the methodology used in a current research project, developed during a practitioner workshop. The method selected is pragmatic: it is designed to be replicable by different research groups, usable in a wide range of habitats, and to require the minimum effort needed to collect adequate data for model calibration, rather than representing some ideal or required approach. Using this common methodology will allow project members to collect multiple measurements of relative pollen productivity for major plant taxa from several northern European locations, in order to test the assumption that these values are uniform within the climatic range of the main taxa recorded in pollen records from the region.
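For point 2, the core of extended R-value analysis is, schematically, the linear model (notation ours)

\[
p_{ik} = \hat{\alpha}_i\,\psi_{ik} + \omega_i,
\]

where p_ik is the pollen loading of taxon i at site k, ψ_ik is the distance-weighted abundance of that taxon in the surrounding vegetation, α̂_i is its relative pollen productivity, and ω_i is a background term. The vegetation survey method determines how ψ_ik is measured, which is why it so strongly affects the estimated α̂_i.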
Abstract:
Context. Comet 67P/Churyumov-Gerasimenko is the target of the European Space Agency's Rosetta spacecraft rendezvous mission. Detailed physical characterisation of the comet before arrival is important for mission planning, as well as providing a test bed for ground-based observing and data-analysis methods. Aims: To conduct a long-term observational programme to characterise the physical properties of the nucleus of the comet via ground-based optical photometry, and to combine our new data with all available nucleus data from the literature. Methods: We applied aperture photometry techniques to our imaging data and combined the extracted rotational lightcurves with data from the literature. Optical lightcurve inversion techniques were applied to constrain the spin state of the nucleus and its broad shape. We performed a detailed surface thermal analysis with the shape model and optical photometry by incorporating both into the new Advanced Thermophysical Model (ATPM), along with all available Spitzer 8-24 μm thermal-IR flux measurements from the literature. Results: A convex triangular-facet shape model was determined with axial ratios b/a = 1.239 and c/a = 0.819. These values can vary by as much as 7% in each axis and still result in a statistically significant fit to the observational data. Our best spin state solution has P_sid = 12.76137 ± 0.00006 h and a rotational pole orientated at ecliptic coordinates λ = 78° (±10°), β = +58° (±10°). The nucleus phase-darkening behaviour was measured and best characterised using the IAU HG system. Best-fit parameters are G = 0.11 ± 0.12 and HR(1,1,0) = 15.31 ± 0.07. Our shape model combined with the ATPM can satisfactorily reconcile all optical and thermal-IR data, with the fit to the Spitzer 24 μm data taken in February 2004 being exceptionally good. We derive a range of mutually consistent physical parameters for each thermal-IR data set, including effective radius, geometric albedo, surface thermal inertia and roughness fraction. Conclusions: The overall nucleus dimensions are well constrained and strongly imply a broad nucleus shape more akin to that of comet 9P/Tempel 1 than the highly elongated or "bi-lobed" nuclei seen for comets 103P/Hartley 2 or 8P/Tuttle. The derived low thermal inertia of
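For reference, the IAU HG phase-darkening system referred to above models the reduced magnitude as

\[
H(\alpha) = H - 2.5\,\log_{10}\!\big[(1-G)\,\Phi_1(\alpha) + G\,\Phi_2(\alpha)\big],
\]

where α is the solar phase angle, Φ₁ and Φ₂ are fixed empirical phase functions, H is the absolute magnitude and G the slope parameter (the quantities fitted above as HR(1,1,0) and G).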
Abstract:
Hardware designers and engineers typically need to explore a multi-parametric design space in order to find the best configuration for their designs, using simulations that can take weeks to months to complete. For example, designers of special-purpose chips need to explore parameters such as the optimal bitwidth and data representation. This is the case for the development of complex algorithms such as the Low-Density Parity-Check (LDPC) decoders used in modern communication systems. Currently, high-performance computing offers a wide set of acceleration options, ranging from multicore CPUs to graphics processing units (GPUs) and FPGAs. Depending on the simulation requirements, the ideal architecture to use can vary. In this paper we propose a new design flow based on OpenCL, a unified multiplatform programming model, which accelerates LDPC decoding simulations, thereby significantly reducing architectural exploration and design time. OpenCL-based parallel kernels are used without modifications or code tuning on multicore CPUs, GPUs and FPGAs. We use SOpenCL (Silicon to OpenCL), a tool that automatically converts OpenCL kernels to RTL, for mapping the simulations onto FPGAs. To the best of our knowledge, this is the first time that a single, unmodified OpenCL code has been used to target these three different platforms. We show that, depending on the design parameters to be explored in the simulation and on the dimension and phase of the design, the GPU or the FPGA may suit different purposes more conveniently, providing different acceleration factors. For example, although simulations can typically execute more than 3x faster on FPGAs than on GPUs, the overhead of circuit synthesis often outweighs the benefits of FPGA-accelerated execution.
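To make the simulated workload concrete, here is a toy NumPy sketch of the min-sum check-node update that LDPC decoding kernels typically parallelize (our illustration, not the paper's OpenCL code):

```python
import numpy as np

def check_node_update(llrs):
    """Min-sum update for one check node: each outgoing message is the
    product of the signs and the minimum magnitude of the *other* inputs."""
    signs = np.sign(llrs)
    mags = np.abs(llrs)
    order = np.argsort(mags)
    min1, min2 = mags[order[0]], mags[order[1]]
    # Use the second-smallest magnitude on the edge holding the minimum.
    others_min = np.where(np.arange(len(llrs)) == order[0], min2, min1)
    return np.prod(signs) * signs * others_min

print(check_node_update(np.array([1.5, -0.3, 2.0])))  # [-0.3  1.5 -0.3]
```

In a decoder simulation this update runs over millions of check nodes and iterations, which is what makes GPU/FPGA acceleration attractive.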
Abstract:
Reduced Order Models (ROMs) have proven to be a valid and efficient approach to model the thermal behaviour of building zones. The main issues associated with the use of zonal/lumped models are how to (1) divide the domain (lumps) and (2) evaluate the parameters which characterise the lump-to-lump exchange of energy and momentum. The object of this research is to develop a methodology for the generation of ROMs from CFD models. The lumps of the ROM and their average property values are automatically extracted from the CFD models through user-defined constraints. This methodology has been applied to validated CFD models of a zone of the Environmental Research Institute (ERI) Building in University College Cork (UCC). The ROM predicts the temperature distribution in the domain with an average error lower than 2%. It is computationally efficient, with an execution time of 3.45 seconds. Future steps in this research will be the development of a procedure to automatically extract the parameters which define lump-to-lump energy and momentum exchange; at the moment these parameters are evaluated through the minimisation of a cost function. The ROMs will also be utilised to predict the transient thermal behaviour of the building zone.
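As an illustration of the kind of ROM produced, a minimal lumped (RC-network) sketch with made-up parameter values; in the paper the lumps and their exchange parameters are extracted from CFD, whereas here they are hard-coded:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Three lumps: C[i] = thermal capacitance (J/K),
# R[i, j] = lump-to-lump thermal resistance (K/W); values are illustrative.
C = np.array([5e4, 8e4, 6e4])
R = np.array([[np.inf, 0.02, 0.05],
              [0.02, np.inf, 0.03],
              [0.05, 0.03, np.inf]])

def dTdt(t, T):
    # Heat flow into lump i from each lump j: (T_j - T_i) / R_ij.
    q = (T[None, :] - T[:, None]) / R
    return q.sum(axis=1) / C

sol = solve_ivp(dTdt, (0.0, 3600.0), [293.0, 297.0, 295.0])
print(sol.y[:, -1])  # lump temperatures (K) after one hour
```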
Abstract:
Vector space models (VSMs) represent word meanings as points in a high-dimensional space. VSMs are typically created from large text corpora, and so represent word semantics as observed in text. We present a new algorithm (JNNSE) that can incorporate a measure of semantics not previously used to create VSMs: brain activation data recorded while people read words. The resulting model takes advantage of the complementary strengths and weaknesses of corpus and brain activation data to give a more complete representation of semantics. Evaluations show that the model (1) matches a behavioral measure of semantics more closely, (2) can be used to predict corpus data for unseen words, and (3) has predictive power that generalizes across brain imaging technologies and across subjects. We believe that the model is thus a more faithful representation of mental vocabularies.
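Schematically, the joint idea can be sketched as two factorizations sharing one latent word embedding (a toy alternating-least-squares version; JNNSE itself additionally imposes non-negativity and sparsity constraints, and the names below are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
n_words, d_corpus, d_brain, k = 100, 50, 20, 10
X_corpus = rng.random((n_words, d_corpus))  # corpus-derived word vectors (toy)
X_brain = rng.random((n_words, d_brain))    # brain activation per word (toy)

# Shared latent embedding A: X_corpus ~ A @ D_c and X_brain ~ A @ D_b.
X = np.hstack([X_corpus, X_brain])
A = rng.random((n_words, k))
for _ in range(50):
    D = np.linalg.lstsq(A, X, rcond=None)[0]        # update both dictionaries
    A = np.linalg.lstsq(D.T, X.T, rcond=None)[0].T  # update shared embedding

# Rows of A are joint word representations informed by both data sources;
# corpus vectors for new words could be predicted as A_new @ D[:, :d_corpus].
```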