85 results for physically based modeling


Relevance:

30.00%

Publisher:

Abstract:

A PMU-based WAMS is to be placed on a weakly coupled section of the distribution grid with high levels of distributed generation. In anticipation of PMU data, a Siemens PSS/E model of the electrical environment has been used to return data similar to that expected from the WAMS. These data are then used to create a metric that reflects optimization, control and protection in the region. System states are iterated through, with the most desirable one returning the lowest optimization metric; this state is assessed against the one returned by PSS/E under normal circumstances. This paper investigates the circumstances that trigger the SPS in the region by varying generation between 0 and 110% and compromising the network through line loss under summer-minimum and winter-maximum conditions. It is found that the optimized state can generally tolerate an additional 2 MW of generation (3% of total) before encroaching on the same thresholds, and in one instance it moves the triggering point to 100% of generation output.
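The state-selection step described above can be sketched as follows. This is a minimal illustration only: the metric, its weights (`w_loss`, `w_volt`) and the candidate states are invented assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch: iterate candidate network states and select the one
# with the lowest optimisation metric. Weights and state data are illustrative.

def optimisation_metric(state, w_loss=1.0, w_volt=10.0):
    """Lower is better: combines line losses (MW) and voltage deviation (pu)."""
    return w_loss * state["losses_mw"] + w_volt * abs(state["voltage_pu"] - 1.0)

def best_state(states):
    """Return the candidate state with the smallest metric."""
    return min(states, key=optimisation_metric)

candidates = [
    {"name": "base",       "losses_mw": 4.2, "voltage_pu": 0.97},
    {"name": "reconfig A", "losses_mw": 3.8, "voltage_pu": 1.02},
    {"name": "reconfig B", "losses_mw": 3.9, "voltage_pu": 1.00},
]
print(best_state(candidates)["name"])
```

The chosen state could then be compared against the PSS/E base case, as the abstract describes.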

Relevance:

30.00%

Publisher:

Abstract:

This paper presents generalized Laplacian eigenmaps, a novel dimensionality reduction approach designed to address stylistic variations in time series. It generates compact and coherent continuous spaces whose geometry is data-driven. This paper also introduces the graph-based particle filter, a novel methodology conceived for efficient tracking in the low-dimensional space derived from a spectral dimensionality reduction method. Its strengths are a propagation scheme that facilitates prediction in time and style, and a noise model coherent with the manifold that prevents divergence and increases robustness. Experiments show that a combination of both techniques achieves state-of-the-art performance for human pose tracking in underconstrained scenarios.
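As a rough illustration of the spectral machinery underlying this abstract, here is a minimal sketch of standard Laplacian eigenmaps in NumPy. The generalized variant and the style handling described in the paper are not reproduced, and the neighbourhood size and heat-kernel bandwidth are assumptions.

```python
import numpy as np

def laplacian_eigenmaps(X, n_neighbors=5, n_components=2):
    """Embed rows of X into a low-dimensional space via the graph Laplacian."""
    # Pairwise squared distances
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    # Symmetric k-nearest-neighbour adjacency with heat-kernel weights
    idx = np.argsort(d2, axis=1)[:, 1:n_neighbors + 1]   # skip self (col 0)
    W = np.zeros_like(d2)
    for i, nbrs in enumerate(idx):
        W[i, nbrs] = np.exp(-d2[i, nbrs] / d2.mean())
    W = np.maximum(W, W.T)
    L = np.diag(W.sum(1)) - W          # unnormalised graph Laplacian
    vals, vecs = np.linalg.eigh(L)     # eigenvalues in ascending order
    # Skip the trivial constant eigenvector (eigenvalue ~ 0)
    return vecs[:, 1:n_components + 1]

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 6))
Y = laplacian_eigenmaps(X)
print(Y.shape)
```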

Relevance:

30.00%

Publisher:

Abstract:

In finite difference time domain simulation of room acoustics, source functions are subject to various constraints. These depend on the way sources are injected into the grid and on the chosen parameters of the numerical scheme being used. This paper addresses the issue of selecting and designing sources for finite difference simulation, by first reviewing associated aims and constraints, and evaluating existing source models against these criteria. The process of exciting a model is generalized by introducing a system of three cascaded filters, respectively, characterizing the driving pulse, the source mechanics, and the injection of the resulting source function into the grid. It is shown that hard, soft, and transparent sources can be seen as special cases within this unified approach. Starting from the mechanics of a small pulsating sphere, a parametric source model is formulated by specifying suitable filters. This physically constrained source model is numerically consistent, does not scatter incoming waves, and is free from zero- and low-frequency artifacts. Simulation results are employed for comparison with existing source formulations in terms of meeting the spectral and temporal requirements on the outward propagating wave.
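The hard-versus-soft source distinction reviewed above can be illustrated with a toy 1-D scheme: a hard source overwrites the grid node (and therefore scatters incoming waves), while a soft source adds the source function to the update. The grid size, Gaussian driving pulse and Courant number below are illustrative assumptions, not the paper's model.

```python
import numpy as np

N, steps, src = 200, 150, 100
lam = 1.0                      # Courant number (1-D stability limit)

def run(hard):
    """Leapfrog update of the 1-D wave equation with a driven source node."""
    p0, p1 = np.zeros(N), np.zeros(N)
    for n in range(steps):
        p2 = np.zeros(N)       # Dirichlet boundaries stay at zero
        p2[1:-1] = (2 * p1[1:-1] - p0[1:-1]
                    + lam**2 * (p1[2:] - 2 * p1[1:-1] + p1[:-2]))
        pulse = np.exp(-0.5 * ((n - 30) / 8.0) ** 2)   # Gaussian driving pulse
        if hard:
            p2[src] = pulse                             # hard source: overwrite
        else:
            p2[src] += pulse                            # soft source: add
        p0, p1 = p1, p2
    return p1

wave = run(hard=False)
print(float(np.abs(wave).max()) > 0)
```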

Relevance:

30.00%

Publisher:

Abstract:

The air-sea exchange of two legacy persistent organic pollutants (POPs), γ-HCH and PCB 153, in the North Sea is presented and discussed using results of regional fate-and-transport and shelf-sea hydrodynamic ocean models for the period 1996–2005. Air-sea exchange occurs through gas exchange (deposition and volatilization), wet deposition and dry deposition. Atmospheric concentrations are interpolated into the model domain from results of the EMEP MSC-East multi-compartmental model (Gusev et al., 2009). The North Sea is net depositional for γ-HCH, dominated by gas deposition with notable seasonal variability and a downward trend over the 10-year period. Volatilization rates of γ-HCH are generally a factor of 2–3 less than gas deposition in winter, spring and summer, but greater in autumn, when the North Sea is net volatilizational. A downward trend in fugacity ratios is found, since gas deposition is decreasing faster than volatilization. The North Sea is net volatilizational for PCB 153, with the highest ratios of volatilization to deposition found in the areas surrounding polluted British and continental river sources. Large quantities of PCB 153 entering through rivers lead to very high local rates of volatilization.
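The direction of net gas exchange discussed above is commonly evaluated with a two-film flux expression; the sketch below shows that form, but the formula choice and all numbers are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of a two-film gas-exchange flux, F = k_ol * (C_w - C_a / H'),
# where H' is the dimensionless Henry's law constant. Positive F means net
# volatilisation (water -> air); negative F means net deposition.

def net_gas_flux(k_ol, c_water, c_air, H_dimensionless):
    """Net water-to-air flux in the same concentration units times k_ol."""
    return k_ol * (c_water - c_air / H_dimensionless)

# Illustrative values for a gamma-HCH-like compound (not from the paper)
F = net_gas_flux(k_ol=0.1, c_water=2e-12, c_air=5e-11, H_dimensionless=5e-4)
print("net deposition" if F < 0 else "net volatilisation")
```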

Relevance:

30.00%

Publisher:

Abstract:

SN 2004et is one of the nearest and best-observed Type IIP supernovae, with a progenitor detection as well as good photometric and spectroscopic observational coverage well into the nebular phase. Based on nucleosynthesis from stellar evolution/explosion models, we apply spectral modeling to analyze its 140–700 day evolution from the ultraviolet to the mid-infrared. We find that a M_ZAMS = 15 Msun progenitor star (with an oxygen mass of 0.8 Msun) satisfactorily reproduces [O I] 6300, 6364 Å and other emission lines of carbon, sodium, magnesium, and silicon, while 12 Msun and 19 Msun models under- and overproduce most of these lines, respectively. This result is in fair agreement with the mass derived from the progenitor detection, but in disagreement with hydrodynamical modeling of the early-time light curve. From modeling of the mid-infrared iron-group emission lines, we determine the density of the "Ni-bubble" to be rho(t) = 7e-14 (t/100 d)^-3 g cm^-3, corresponding to a filling factor of f = 0.15 in the metal core region (V = 1800 km/s). We also confirm that silicate dust, CO, and SiO emission are all present in the spectra.
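The quoted density law can be evaluated directly; this is simply the abstract's expression rho(t) = 7e-14 (t/100 d)^-3 g cm^-3, with the epochs chosen for illustration.

```python
def ni_bubble_density(t_days):
    """Ni-bubble density in g cm^-3 under homologous expansion, per the abstract."""
    return 7e-14 * (t_days / 100.0) ** -3

rho_300 = ni_bubble_density(300.0)   # density at day 300
print(rho_300)
```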

Relevance:

30.00%

Publisher:

Abstract:

The community-wide GPCR Dock assessment is conducted to evaluate the status of molecular modeling and ligand docking for human G protein-coupled receptors. The present round of the assessment was based on the recent structures of the dopamine D3 and CXCR4 chemokine receptors bound to small-molecule antagonists, and of CXCR4 with a synthetic cyclopeptide. Thirty-five groups submitted their receptor-ligand complex structure predictions prior to the release of the crystallographic coordinates. With closely related homology modeling templates, as for the dopamine D3 receptor, and with incorporation of biochemical and QSAR data, modern computational techniques predicted complex details with accuracy approaching that of experiment. In contrast, the CXCR4 complexes, which had less-characterized interactions and only distant homology to the known GPCR structures, remained very challenging. The assessment results provide guidance for the modeling and crystallographic communities in method development and target selection for further expansion of the structural coverage of the GPCR universe.

Relevance:

30.00%

Publisher:

Abstract:

As a comparatively new PKM with over-constraints in its kinematic chains, the Exechon has attracted extensive attention from the research community. In contrast to its well-studied kinematics, analysis of the stiffness characteristics of the Exechon remains a challenge due to its structural complexity. In order to achieve a thorough understanding of the stiffness characteristics of the Exechon PKM, this paper proposes an analytical kinetostatic model based on the substructure synthesis technique. The whole PKM system is decomposed into a moving platform subsystem, three limb subsystems and a fixed base subsystem, connected to each other sequentially through the corresponding joints. Each limb body is modeled as a spatial beam with a uniform cross-section constrained by two sets of lumped springs. The equilibrium equation of each individual limb assemblage is derived through a finite element formulation and combined with that of the moving platform, derived with the Newtonian method, to construct the governing kinetostatic equations of the system after introducing the deformation compatibility conditions between the moving platform and the limbs. By extracting the 6 × 6 block matrix from the inverse of the governing compliance matrix, the stiffness of the moving platform is formulated. The stiffness of the Exechon PKM at a typical configuration, as well as throughout the workspace, is computed quickly with a piece-by-piece partition algorithm. The numerical simulations reveal a strong position dependency of the PKM's stiffness, which is symmetric about a work plane due to structural features. Finally, the effects of design variables such as structural, dimensional and stiffness parameters on system rigidity are investigated, with the purpose of providing useful information for the structural optimization and performance enhancement of the Exechon PKM.
It is worth mentioning that the proposed stiffness modeling methodology can also be applied, with minor revisions, to other overconstrained PKMs to evaluate their global rigidity over the workspace efficiently.

Relevance:

30.00%

Publisher:

Abstract:

Low-power processors and accelerators that were originally designed for the embedded systems market are emerging as building blocks for servers. Power capping has been actively explored as a technique to reduce the energy footprint of high-performance processors, but its opportunities and limitations on the new low-power processor and accelerator ecosystem are less well understood. This paper presents an efficient power capping and management infrastructure for heterogeneous SoCs based on hybrid ARM/FPGA designs. The infrastructure coordinates dynamic voltage and frequency scaling with task allocation on a customised Linux system for the Xilinx Zynq SoC. We present a compiler-assisted power model to guide voltage and frequency scaling, in conjunction with workload allocation between the ARM cores and the FPGA, under given power caps. The model achieves less than 5% estimation bias relative to mean power consumption. In an FFT case study, the proposed power capping schemes achieve on average 97.5% of the performance of the optimal execution and match the optimal execution in 87.5% of cases, while always meeting the power constraints.
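One way to picture cap-driven DVFS is as a feasibility search over the frequency table: given a per-workload power model P(f), pick the highest frequency whose predicted power fits under the cap. The quadratic model and its coefficients below are invented for illustration and are not the paper's compiler-assisted model.

```python
# Hypothetical sketch of cap-driven frequency selection.

FREQS_MHZ = [300, 400, 500, 600, 700, 800]

def predicted_power_w(f_mhz, a=2.0e-6, b=1.0e-3, idle=0.5):
    """Toy power model: P = a*f^2 + b*f + idle, in watts (coefficients invented)."""
    return a * f_mhz**2 + b * f_mhz + idle

def best_frequency(cap_w):
    """Highest frequency predicted to fit the cap; fall back to the minimum."""
    feasible = [f for f in FREQS_MHZ if predicted_power_w(f) <= cap_w]
    return max(feasible) if feasible else min(FREQS_MHZ)

print(best_frequency(2.0))
```

In a real system the same decision would be coordinated with task allocation between the CPU cores and the FPGA, as the abstract describes.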

Relevance:

30.00%

Publisher:

Abstract:

Thermal comfort is defined as "that condition of mind which expresses satisfaction with the thermal environment" [1][2]. Field studies have been completed in order to establish the governing conditions for thermal comfort [3]. These studies showed that the internal climate of a room was the strongest factor in establishing thermal comfort, so direct manipulation of the internal climate is necessary to retain an acceptable level of thermal comfort. For Building Energy Management System (BEMS) strategies to be utilised efficiently, it is necessary to be able to predict the effect that activating a heating/cooling source (radiators, windows and doors) will have on the room. Numerical modelling of the domain can be challenging due to the need to capture temperature stratification and/or different heat sources (radiators, computers and human beings). Computational Fluid Dynamics (CFD) models are usually used for this purpose because they provide the required level of detail, but they tend to be highly computationally expensive, especially when transient behaviour needs to be analysed; consequently, they cannot be integrated into a BEMS. This paper presents and validates a CFD-ROM method for real-time simulation of building thermal performance. The CFD-ROM method involves the automatic extraction and solution of reduced-order models (ROMs) from validated CFD simulations. The test case used in this work is a room of the Environmental Research Institute (ERI) Building at University College Cork (UCC). The ROMs have been shown to be sufficiently accurate, with a total error of less than 1%, and successfully retain a satisfactory representation of the modelled phenomena. The number of zones in a ROM defines its size and complexity, and it has been observed that ROMs with a higher number of zones produce more accurate results.
As each ROM has a time to solution of less than 20 seconds, it can be integrated into the BEMS of a building, which opens up the potential for real-time, physics-based building energy modelling.
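A zonal reduced-order model of the kind described above can be pictured as a small linear ODE system, dT/dt = A·T + b, where each zone is a lumped thermal node coupled to its neighbours; such a system solves in milliseconds rather than the hours a transient CFD run can take. The coupling coefficients, heater input and zone count below are invented for illustration and are not extracted from the paper's CFD simulations.

```python
import numpy as np

def step_rom(T, A, b, dt):
    """One explicit Euler step of the linear zonal model dT/dt = A T + b."""
    return T + dt * (A @ T + b)

n = 4                        # number of zones (illustrative)
A = -0.1 * np.eye(n)         # self-coupling: heat loss from each zone
for i in range(n - 1):       # conductive coupling between adjacent zones
    A[i, i + 1] = A[i + 1, i] = 0.03
b = np.array([0.5, 0.0, 0.0, 0.0])   # heat input in zone 0 only

T = np.full(n, 20.0)         # uniform initial temperature
for _ in range(1000):        # 100 time units at dt = 0.1
    T = step_rom(T, A, b, dt=0.1)
print(T.round(2))
```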

Relevance:

30.00%

Publisher:

Abstract:

This paper proposes a new thermography-based maximum power point tracking (MPPT) scheme to address photovoltaic (PV) partial shading faults. Solar power generation utilizes a large number of PV cells connected in series and in parallel in an array, physically distributed across a large field. When a PV module is faulted or partial shading occurs, the PV system exhibits a nonuniform distribution of generated electrical power and of its thermal profile, and multiple maximum power points (MPPs) appear. If left untreated, this reduces the overall power generation, and severe faults may propagate, resulting in damage to the system. In this paper, a thermal camera is employed for fault detection, and a new MPPT scheme is developed to alter the operating point to match an optimized MPP. Extensive data mining is conducted on the images from the thermal camera in order to locate global MPPs. Based on this, a virtual MPPT is set out to find the global MPP, which can reduce the MPPT time and be used to calculate the MPP reference voltage. Finally, the proposed methodology is experimentally implemented and validated by tests on a 600-W PV array.
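Under partial shading the P-V curve develops several local maxima, and an MPPT scheme must find the global one rather than getting stuck on a local peak. The sketch below selects the global MPP from a sampled curve; the curve itself is synthetic, not data from the paper's 600-W array.

```python
def global_mpp(v_samples, p_samples):
    """Return (voltage, power) at the global maximum power point."""
    i = max(range(len(p_samples)), key=p_samples.__getitem__)
    return v_samples[i], p_samples[i]

# Synthetic partially shaded P-V curve with two local maxima (illustrative)
volts = [0, 5, 10, 15, 20, 25, 30, 35]
power = [0, 120, 180, 150, 140, 210, 190, 0]

v_mpp, p_mpp = global_mpp(volts, power)
print(v_mpp, p_mpp)
```

A conventional hill-climbing tracker started near 10 V would settle on the 180 W local peak; the global search returns the 210 W point instead.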

Relevance:

30.00%

Publisher:

Abstract:

In this paper we extend the minimum-cost network flow approach to multi-target tracking by incorporating a motion model, allowing the tracker to cope better with long-term occlusions and missed detections. In our new method, the tracking problem is solved iteratively: first, an initial tracking solution is found without the help of motion information. Given this initial set of tracklets, the motion at each detection is estimated and used to refine the tracking solution.
Finally, special edges are added to the tracking graph, allowing a further revised tracking solution to be found in which distant tracklets may be linked based on motion similarity. Our system has been tested on the PETS S2.L1 and Oxford town-center sequences, outperforming the baseline system and achieving results comparable with the current state of the art.
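The motion-similarity cost behind those special edges can be sketched simply: extrapolate a tracklet's end state with a constant-velocity model across the occlusion gap, and score a candidate tracklet by how close its start lies to the prediction. The constant-velocity assumption and all coordinates below are illustrative, not the paper's exact cost.

```python
def link_cost(end_pos, velocity, gap_frames, start_pos):
    """Distance between a constant-velocity prediction and a candidate start."""
    pred = (end_pos[0] + velocity[0] * gap_frames,
            end_pos[1] + velocity[1] * gap_frames)
    return ((pred[0] - start_pos[0]) ** 2 + (pred[1] - start_pos[1]) ** 2) ** 0.5

# Tracklet A ends at (10, 10) moving (2, 0); candidates start 5 frames later
cost_good = link_cost((10, 10), (2, 0), 5, (20, 10))   # lies on the prediction
cost_bad  = link_cost((10, 10), (2, 0), 5, (10, 40))   # far from the prediction
print(cost_good < cost_bad)
```

In the full method such costs would be attached to the added graph edges and the min-cost flow problem re-solved.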

Relevance:

30.00%

Publisher:

Abstract:

The environmental fate of polybrominated diphenyl ethers (PBDEs), a group of flame retardants that are considered to be persistent organic pollutants (POPs), around the Zhuoshui River and Changhua County regions of Taiwan was assessed. An investigation into the emissions, partitioning, and fate of selected PBDEs was conducted based on the equilibrium criterion (EQC) fugacity model developed at Trent University, Canada. Emissions of congeners PBDE 47, PBDE 99, and PBDE 209 to air (4.9–92 × 10⁻³ kg/h), soil (0.91–17.4 × 10⁻³ kg/h), and water (0.21–4.04 × 10⁻³ kg/h) were estimated by modifying previous models of PBDE emission rates to consider both industrial and domestic rates. It was found that fugacity modeling can give a reasonable estimate of the behavior, partitioning, and concentrations of PBDE congeners in and around Taiwan. The results indicate that PBDE congeners have a high affinity for partitioning into sediments, then soils; as congener number decreases, the PBDEs partition more readily into air, and as the degree of bromination increases, congeners partition more readily to sediments. Sediments may then act as a long-term source of PBDEs, which can be released back into the water column through resuspension during storm events.
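A Level I fugacity calculation of the kind underlying EQC-style models can be sketched in a few lines: for a fixed chemical inventory, a common fugacity f = M / Σ(Z_i·V_i) is computed, and each compartment's share follows from its fugacity capacity Z_i and volume V_i. The Z values and compartment volumes below are invented for illustration (chosen so that sediments dominate, as the abstract reports for PBDEs), not parameters from the study.

```python
def level1_distribution(total_mol, Z, V):
    """Level I fugacity model: common fugacity and per-compartment amounts."""
    f = total_mol / sum(Z[c] * V[c] for c in Z)      # common fugacity (Pa)
    moles = {c: Z[c] * V[c] * f for c in Z}          # amount in each compartment
    return f, moles

# Illustrative fugacity capacities (mol m^-3 Pa^-1) and volumes (m^3)
Z = {"air": 4.0e-4, "water": 1.0e-2, "soil": 5.0, "sediment": 1.0e3}
V = {"air": 1.0e10, "water": 2.0e7, "soil": 9.0e6, "sediment": 1.0e5}

f, moles = level1_distribution(1000.0, Z, V)
print(max(moles, key=moles.get))
```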

Relevance:

30.00%

Publisher:

Abstract:

Preclinical toxicity testing in animal models is a cornerstone of the drug development process, yet it is often unable to predict adverse effects and tolerability issues in human subjects. Species-specific responses to investigational drugs have led researchers to utilize human tissues and cells to better estimate human toxicity. Unfortunately, human cell-derived models are imperfect because toxicity is assessed in isolation, removed from the normal physiologic microenvironment. Microphysiological modeling, often referred to as 'organ-on-a-chip' or 'human-on-a-chip', places human tissue into a microfluidic system that mimics the complexity of human in vivo physiology, thereby allowing toxicity testing on several cell types, tissues, and organs within a more biologically relevant environment. Here we describe important concepts in developing a repro-on-a-chip model. The development of female and male reproductive microfluidic systems is critical to sex-based in vitro toxicity and drug testing. This review addresses the biological and physiological aspects of the male and female reproductive systems in vivo and what should be considered when designing a microphysiological human-on-a-chip model. Additionally, interactions between the reproductive tract and other systems are explored, focusing on the impact of factors and hormones produced by the reproductive tract and on disease pathophysiology.

Relevance:

30.00%

Publisher:

Abstract:

The design, fabrication, and measured results are presented for a reconfigurable reflectarray antenna based on liquid crystals (LCs) which operates above 100 GHz. The antenna has been designed to provide beam-scanning capabilities over a wide angular range, a large bandwidth, and a reduced side-lobe level (SLL). The measured radiation patterns are in good agreement with simulations, and show that the antenna generates an electronically steerable beam in one plane over an angular range of 55° in the frequency band from 96 to 104 GHz. The SLL is lower than −13 dB for all scan angles, and −18 dB is obtained over 16% of the scan range. The measured performance is significantly better than previously published results for this class of electronically tunable antenna and, moreover, verifies the accuracy of the proposed procedure for LC modeling and antenna design.

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a new approach to single-channel speech enhancement involving both noise and channel distortion (i.e., convolutional noise). The approach is based on finding the longest matching segments (LMS) from a corpus of clean, wideband speech, and it adds three novel developments to our previous LMS research. First, we address the problem of channel distortion as well as additive noise. Second, we present an improved method for modeling noise. Third, we present an iterative algorithm for improved speech estimates. In experiments using speech recognition as a test with the Aurora 4 database, the use of our enhancement approach as a preprocessor for feature extraction significantly improved the performance of a baseline recognition system. In another comparison against conventional enhancement algorithms, both the PESQ and the segmental SNR ratings of the LMS algorithm were superior to those of the other methods for noisy speech enhancement. Index Terms: corpus-based speech model, longest matching segment, speech enhancement, speech recognition