797 results for Agent-Based Model


Relevance:

80.00%

Publisher:

Abstract:

This paper presents a flexible and integrated planning tool for active distribution networks that maximises the benefits of high levels of renewables, customer engagement, and new technology implementations. The tool has two main processing parts: "optimisation" and "forecast". The "optimisation" part is an automated and integrated planning framework that optimises the net present value (NPV) of an investment strategy for electric distribution network augmentation over large areas and long planning horizons (e.g. 5 to 20 years), based on a modified particle swarm optimisation (MPSO). The "forecast" part is a flexible agent-based framework that produces load duration curves (LDCs) of load forecasts for different levels of customer engagement, energy storage controls, and electric vehicles (EVs). In addition, "forecast" connects the utility's existing databases to the proposed tool and outputs the load profiles and network plan in Google Earth. This integrated tool enables different divisions within a utility to analyse their programs and options in a single platform using comprehensive information.
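
As a rough illustration of the kind of optimisation engine mentioned above, the sketch below runs a plain particle swarm optimisation loop maximising a placeholder NPV function. The objective, bounds and swarm parameters are invented assumptions for illustration; this is not the modified PSO (MPSO) used in the tool.

    import numpy as np

    def npv_placeholder(x):
        # Hypothetical stand-in for the NPV of an augmentation strategy;
        # the real tool evaluates network investments over 5-20 year horizons.
        return -np.sum((x - 0.3) ** 2)

    def pso_maximise(f, dim=5, n_particles=30, iters=200, seed=0):
        rng = np.random.default_rng(seed)
        x = rng.uniform(0.0, 1.0, (n_particles, dim))    # particle positions
        v = np.zeros_like(x)                             # particle velocities
        pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
        gbest = pbest[np.argmax(pbest_val)].copy()
        w, c1, c2 = 0.7, 1.5, 1.5                        # inertia, cognitive, social weights
        for _ in range(iters):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = np.clip(x + v, 0.0, 1.0)
            vals = np.array([f(p) for p in x])
            improved = vals > pbest_val
            pbest[improved], pbest_val[improved] = x[improved], vals[improved]
            gbest = pbest[np.argmax(pbest_val)].copy()
        return gbest, f(gbest)

    best_x, best_npv = pso_maximise(npv_placeholder)
    print("best strategy:", best_x, " NPV:", best_npv)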

Relevance:

80.00%

Publisher:

Abstract:

Carrier phase ambiguity resolution over long baselines is challenging in BDS data processing. This is partially due to the variations of the hardware biases in BDS code signals and their dependence on elevation angle. We present an assessment of satellite-induced code bias variations in BDS triple-frequency signals and ambiguity resolution procedures involving both geometry-free and geometry-based models. First, since the elevation of a GEO satellite remains essentially unchanged, we propose to model the single-differenced fractional cycle bias with widespread ground stations. Second, the effects of code bias variations induced by GEO, IGSO and MEO satellites on ambiguity resolution of extra-wide-lane, wide-lane and narrow-lane combinations are analyzed. Third, together with the IGSO and MEO code bias variation models, the effects of code bias variations on ambiguity resolution are examined using 30 days of data collected over baselines ranging from 500 to 2600 km in 2014. The results suggest that although the effect of code bias variations on the extra-wide-lane integer solution is almost negligible due to its long wavelength, the wide-lane integer solutions are rather sensitive to the code bias variations. Wide-lane ambiguity resolution success rates are evidently improved when code bias variations are corrected. However, the improvement in narrow-lane ambiguity resolution is not obvious, since it relies on the geometry-based model and the code bias variations only have an indirect impact on the narrow-lane ambiguity solutions.
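
A quick back-of-the-envelope calculation of the combination wavelengths helps explain why the extra-wide-lane tolerates code bias so well. The sketch below uses the commonly quoted BDS-2 carrier frequencies (B1 1561.098 MHz, B3 1268.520 MHz, B2 1207.140 MHz); treat the exact signal pairing as an assumption, since the abstract does not spell out which combinations were formed.

    C = 299_792_458.0  # speed of light, m/s

    # Commonly quoted BDS-2 carrier frequencies (Hz); an assumption, not from the abstract.
    f_B1, f_B3, f_B2 = 1561.098e6, 1268.520e6, 1207.140e6

    def combo_wavelength(fa, fb):
        # Wavelength of the (1, -1) phase combination between two carriers.
        return C / (fa - fb)

    print("extra-wide-lane (B3-B2): %.3f m" % combo_wavelength(f_B3, f_B2))  # ~4.88 m
    print("wide-lane       (B1-B2): %.3f m" % combo_wavelength(f_B1, f_B2))  # ~0.85 m
    print("narrow-lane     (B1+B2): %.3f m" % (C / (f_B1 + f_B2)))           # ~0.11 m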

Relevance:

80.00%

Publisher:

Abstract:

This study examines the properties of Generalised Regression (GREG) estimators for domain class frequencies and proportions. The family of GREG estimators forms the class of design-based, model-assisted estimators, all of which utilise auxiliary information via modelling. The classic GREG estimator with a linear fixed-effects assisting model (GREG-lin) is one example. When estimating class frequencies, however, the study variable is binary or polytomous, so logistic-type assisting models (e.g. a logistic or probit model) should be preferred over the linear one. Nevertheless, GREG estimators other than GREG-lin are rarely used, and knowledge about their properties is limited. This study examines the properties of L-GREG estimators, which are GREG estimators with fixed-effects logistic-type models. Three research questions are addressed. First, I study whether and when L-GREG estimators are more accurate than GREG-lin. Theoretical results and Monte Carlo experiments, which cover both equal and unequal probability sampling designs and a wide variety of model formulations, show that in standard situations the difference between L-GREG and GREG-lin is small. In the case of a strong assisting model, however, two interesting situations arise: if the domain sample size is reasonably large, L-GREG is more accurate than GREG-lin, and if the domain sample size is very small, estimation of the assisting model parameters may be inaccurate, resulting in bias for L-GREG. Second, I study variance estimation for the L-GREG estimators. The standard variance estimator (S) for all GREG estimators resembles the Sen-Yates-Grundy variance estimator, but it is a double sum of prediction errors, not of the observed values of the study variable. Monte Carlo experiments show that S underestimates the variance of L-GREG especially if the domain sample size is small or if the assisting model is strong. Third, since the standard variance estimator S often fails for the L-GREG estimators, I propose a new augmented variance estimator (A). The difference between S and the new estimator A is that the latter takes into account the difference between the sample-fit model and the census-fit model. In Monte Carlo experiments, the new estimator A outperformed the standard estimator S in terms of bias, root mean square error and coverage rate. The new estimator thus provides a good alternative to the standard estimator.
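
To make the estimator family concrete, here is a minimal numpy/scikit-learn sketch of a model-assisted GREG-type estimator for a domain class frequency: a logistic assisting model is fitted on the sample and its predictions are combined with design-weighted residuals. The frame, design weights and model below are invented for illustration and do not reproduce the L-GREG estimators or the variance estimators studied in the thesis.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)

    # Hypothetical frame: auxiliary variable x known for all N units, domain indicator d.
    N = 10_000
    x = rng.normal(size=(N, 1))
    d = rng.random(N) < 0.2                      # domain membership
    p_true = 1 / (1 + np.exp(-(0.5 + 1.2 * x[:, 0])))
    y = (rng.random(N) < p_true).astype(int)     # binary study variable

    # Simple random sample with inclusion probability pi_k = n / N.
    n = 500
    s = rng.choice(N, size=n, replace=False)
    pi = n / N

    # Fit the logistic assisting model on the sample, predict for every frame unit.
    model = LogisticRegression().fit(x[s], y[s])
    y_hat = model.predict_proba(x)[:, 1]

    # GREG-type estimator of the domain class frequency:
    # sum of predictions over the domain plus design-weighted residuals from the domain sample.
    s_d = s[d[s]]
    t_greg = y_hat[d].sum() + ((y[s_d] - y_hat[s_d]) / pi).sum()
    print("estimated domain frequency:", round(t_greg), " true:", y[d].sum())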

Relevance:

80.00%

Publisher:

Abstract:

A detailed mechanics-based model is developed to analyze the problem of structural instability in slender aerospace vehicles. Coupling among the rigid-body modes, the longitudinal vibrational modes and the transverse vibrational modes due to an asymmetric lifting-body cross-section is considered. The model also incorporates the effects of aerodynamic pressure and the propulsive thrust of the vehicle. The model is one-dimensional, and it can be applied to idealized slender vehicles with complex shapes. The condition under which a flexible body with internal stress waves behaves like a perfectly rigid body is derived. Two methods are developed for finite element discretization of the system: (1) a time-frequency Fourier spectral finite element method and (2) an h-p finite element method. Numerical results obtained using the above methods are presented in Part II of this paper. (C) 2010 Elsevier Ltd. All rights reserved.

Relevance:

80.00%

Publisher:

Abstract:

Detecting Earnings Management Using Neural Networks. In trying to balance relevant and reliable accounting data, generally accepted accounting principles (GAAP) allow, to some extent, the company management to use their judgment and to make subjective assessments when preparing financial statements. The opportunistic use of this discretion in financial reporting is called earnings management. A considerable number of methods have been suggested for detecting accrual-based earnings management, the majority of them based on linear regression. The problem with using linear regression is that a linear relationship between the dependent variable and the independent variables must be assumed. However, previous research has shown that the relationship between accruals and some of the explanatory variables, such as company performance, is non-linear. An alternative to linear regression, which can handle non-linear relationships, is neural networks. The type of neural network used in this study is the feed-forward back-propagation neural network. Three neural network-based models are compared with four commonly used linear regression-based earnings management detection models. All seven models are based on the earnings management detection model presented by Jones (1991). The performance of the models is assessed in three steps. First, a random data set of companies is used. Second, the discretionary accruals from the random data set are ranked according to six different variables, and the discretionary accruals in the highest and lowest quartiles for these six variables are compared. Third, a data set containing simulated earnings management is used, with both expense and revenue manipulation ranging between -5% and 5% of lagged total assets. Furthermore, two neural network-based models and two linear regression-based models are used with a data set containing financial statement data from 110 failed companies. Overall, the results show that the linear regression-based models, except for the model using a piecewise linear approach, produce biased estimates of discretionary accruals. The neural network-based model with the original Jones model variables and the neural network-based model augmented with ROA as an independent variable, however, perform well in all three steps. Especially in the second step, where the highest and lowest quartiles of ranked discretionary accruals are examined, the neural network-based model augmented with ROA as an independent variable outperforms the other models.
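
For readers unfamiliar with the baseline, the sketch below fits the original Jones (1991) accruals regression and, alongside it, a small feed-forward network on the same regressors, taking discretionary accruals as the fitted residuals. The simulated firm data, network size and training settings are illustrative assumptions and not the seven models compared in the study.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    n = 400

    # Hypothetical firm-year data, all scaled by lagged total assets A_{t-1}.
    inv_assets = 1.0 / rng.uniform(50, 500, n)        # 1 / A_{t-1}
    d_rev = rng.normal(0.05, 0.10, n)                 # change in revenues / A_{t-1}
    ppe = rng.uniform(0.2, 0.8, n)                    # gross PPE / A_{t-1}
    total_accruals = 0.02 * inv_assets + 0.3 * d_rev - 0.1 * ppe + rng.normal(0, 0.03, n)

    X = np.column_stack([inv_assets, d_rev, ppe])

    # Jones (1991): TA/A = a*(1/A) + b1*(dREV/A) + b2*(PPE/A) + e, fitted without a constant
    # (1/A plays the role of the intercept); discretionary accruals are the residuals e.
    jones = LinearRegression(fit_intercept=False).fit(X, total_accruals)
    da_linear = total_accruals - jones.predict(X)

    # Neural-network variant: same inputs, non-linear fit of "normal" accruals.
    nn = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
    nn.fit(X, total_accruals)
    da_nn = total_accruals - nn.predict(X)

    print("std of discretionary accruals, linear vs NN:", da_linear.std(), da_nn.std())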

Relevance:

80.00%

Publisher:

Abstract:

Flexible Manufacturing Systems (FMS), widely considered the manufacturing technology of the future, are gaining increasing importance due to the immense advantages they provide in terms of cost, quality and productivity over conventional manufacturing. An FMS is a complex interconnection of capital-intensive resources, and a high level of system performance is crucial for survival in a competitive environment. Discrete event simulation is one of the most popular methods for performance evaluation of FMS during the planning, design and operation phases. Indeed, fast simulators are needed for selecting optimal strategies for flow control (which part type to enter and at what instant), AGV scheduling (which vehicle carries which part), routing (which machine processes the part) and part selection (which part to process next). In this paper we develop a C-net based model for an FMS and use it for distributed discrete event simulation. We illustrate, using examples, the efficacy of distributed discrete event simulation for the performance evaluation of FMSs.
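
To give a flavour of what a discrete event simulator for such a system does, the following is a bare-bones event-calendar loop for a single machine serving arriving parts. It is a generic illustration with made-up arrival and processing times, not the C-net based model or the distributed simulation scheme developed in the paper.

    import heapq, random

    random.seed(42)
    SIM_END = 480.0            # simulated minutes (one shift), an arbitrary choice
    events = []                # event calendar: (time, kind, part_id)
    queue = []                 # parts waiting for the single machine
    machine_busy = False
    completed = 0

    def schedule(t, kind, part):
        heapq.heappush(events, (t, kind, part))

    schedule(random.expovariate(1 / 5.0), "arrival", 0)   # first part arrives

    while events:
        t, kind, part = heapq.heappop(events)
        if t > SIM_END:
            break
        if kind == "arrival":
            queue.append(part)
            schedule(t + random.expovariate(1 / 5.0), "arrival", part + 1)
        else:                                              # "departure": machine frees up
            machine_busy = False
            completed += 1
        if queue and not machine_busy:
            next_part = queue.pop(0)
            machine_busy = True
            schedule(t + random.uniform(3.0, 6.0), "departure", next_part)

    print("parts completed in", SIM_END, "minutes:", completed)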

Relevance:

80.00%

Publisher:

Abstract:

The diversity order and coding gain are crucial for the performance of a multiple-antenna communication system. It is known that space-time trellis codes (STTCs) can be used to achieve these objectives; in particular, STTCs can provide large coding gains. Many attempts have been made to construct STTCs which achieve full diversity and good coding gains, though a general construction method does not exist. The rate-1 delay diversity code is known to achieve full diversity for any number of transmit antennas and any signal set, but does not give a good coding gain. A product-distance-code-based delay diversity scheme (Tarokh, V. et al., IEEE Trans. Inform. Theory, vol. 44, p. 744-65, 1998) enables one to improve the coding gain and construct STTCs for any given number of states using coding in conjunction with delay diversity; such a construction was stated as an open problem, and we achieve it here. We assume a shift-register-based model to construct an STTC for any state complexity. We derive a sufficient condition for this STTC to achieve full diversity, based on the delay diversity scheme. This condition provides a framework for coding in conjunction with delay diversity for any signal constellation. Using this condition, we provide a formal rate-1 STTC construction scheme for PSK signal sets, for any number of transmit antennas and any given number of states, which achieves full diversity and gives a good coding gain.
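
The delay diversity idea underlying the construction is simple enough to show in a few lines: the second antenna transmits a copy of the symbol stream delayed by one symbol period, so each information symbol leaves on both antennas in different time slots. The QPSK mapping and padding below are illustrative assumptions; they do not reproduce the shift-register STTC construction or the coding-gain optimisation described in the paper.

    import numpy as np

    # QPSK symbols (unit energy), an illustrative signal set.
    qpsk = np.exp(1j * np.pi / 4 * np.array([1, 3, 5, 7]))

    def delay_diversity_encode(bits):
        # Map bit pairs to QPSK, then build the 2 x (T+1) space-time matrix:
        # row 0 = symbols followed by a pad, row 1 = a one-symbol-delayed copy.
        idx = 2 * bits[0::2] + bits[1::2]
        s = qpsk[idx]
        tx1 = np.concatenate([s, [0]])       # antenna 1
        tx2 = np.concatenate([[0], s])       # antenna 2, delayed by one slot
        return np.vstack([tx1, tx2])

    bits = np.array([0, 1, 1, 0, 1, 1, 0, 0])
    print(delay_diversity_encode(bits))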

Relevance:

80.00%

Publisher:

Abstract:

In this paper we propose a new algorithm for learning polyhedral classifiers. In contrast to existing methods for learning polyhedral classifiers, which solve a constrained optimization problem, our method solves an unconstrained optimization problem. Our method is based on a logistic-function-based model for the posterior probability. We propose an alternating optimization algorithm, namely SPLA1 (Single Polyhedral Learning Algorithm 1), which maximizes the log-likelihood of the training data to learn the parameters. We also extend our method in SPLA2 to make it independent of any user-specified parameter (e.g., the number of hyperplanes required to form a polyhedral set). We show the effectiveness of our approach with experiments on various synthetic and real-world datasets, and compare our approach with a standard decision tree method (OC1) and a constrained-optimization-based method for learning polyhedral sets.
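
To give a feel for the kind of model involved: one natural logistic-function-based posterior for a polyhedral (intersection-of-halfspaces) positive class is a product of sigmoids, one per hyperplane, whose parameters can be learned by gradient ascent on the log-likelihood. The sketch below implements that generic idea with plain gradient steps; it is not the SPLA1/SPLA2 alternating optimization of the paper, and the data and hyperparameters are invented.

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-np.clip(z, -60, 60)))

    def posterior(X, W, b):
        # P(y=1|x) modelled as a product of per-hyperplane sigmoids: it is close to 1
        # only when x lies on the positive side of every hyperplane (a polyhedral set).
        return np.prod(sigmoid(X @ W.T + b), axis=1)

    # Toy 2-D data: the positive class is an axis-aligned square.
    X = rng.uniform(-2, 2, (600, 2))
    y = ((np.abs(X[:, 0]) < 1) & (np.abs(X[:, 1]) < 1)).astype(float)

    K = 4                                   # number of hyperplanes (chosen by hand here)
    W, b = rng.normal(size=(K, 2)), rng.normal(size=K)

    lr = 0.1
    for _ in range(3000):
        s = sigmoid(X @ W.T + b)            # shape (n, K)
        p = np.prod(s, axis=1)              # shape (n,)
        # Gradient of the log-likelihood sum(y*log p + (1-y)*log(1-p)) w.r.t. z_k = w_k.x + b_k
        coef = np.clip((y - p) / np.clip(1.0 - p, 1e-6, None), -50, 50)
        grad_z = coef[:, None] * (1.0 - s)
        W += lr * grad_z.T @ X / len(X)
        b += lr * grad_z.mean(axis=0)

    print("training accuracy:", ((posterior(X, W, b) > 0.5) == (y > 0.5)).mean())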

Relevance:

80.00%

Publisher:

Abstract:

This work focuses on the formulation of an asymptotically correct theory for symmetric composite honeycomb sandwich plate structures. In these panels, transverse stresses strongly influence the design. Conventional 2-D finite elements cannot predict the thickness-wise distributions of transverse shear or normal stresses and 3-D displacements, while the use of more accurate three-dimensional finite elements is computationally prohibitive. The development of the present theory is based on the Variational Asymptotic Method (VAM). Its unique features are the identification and utilization of additional small parameters associated with the anisotropy and non-homogeneity of composite sandwich plate structures. These parameters are the ratio of the thickness of the facial layers to that of the core and the ratio of the 3-D stiffness coefficients of the core to those of the face sheets. Finally, anisotropy in the core and face sheets is addressed through the small parameters within the 3-D stiffness matrices. Numerical results are presented for several sample problems. The 3-D responses recovered using the VAM-based model are obtained in a much more computationally efficient manner than, and are in agreement with, available 3-D elasticity solutions and 3-D FE solutions from MSC NASTRAN. (c) 2012 Elsevier Ltd. All rights reserved.

Relevance:

80.00%

Publisher:

Abstract:

We report on the threshold voltage modeling of ultra-thin (1 nm-5 nm) silicon body double-gate (DG) MOSFETs using a self-consistent Poisson-Schrodinger solver (SCHRED). We define the threshold voltage (V_th) of symmetric DG MOSFETs as the gate voltage at which the center potential (Φ_c) saturates to Φ_c(sat), and analyze the effects of oxide thickness (t_ox) and substrate doping (N_A) variations on V_th. The validity of this definition is demonstrated by comparing the results with the charge-transition (from weak to strong inversion) based model using SCHRED simulations. In addition, it is shown that the proposed V_th definition electrically corresponds to the condition where the inversion layer capacitance (C_inv) equals the oxide capacitance (C_ox) across a wide range of substrate doping densities. A capacitance-based analytical model built on the criterion C_inv = C_ox is proposed to compute Φ_c(sat), while accounting for band-gap widening; it is validated through comparisons with the Poisson-Schrodinger solution. Further, we show that at the threshold voltage condition, the electron distribution n(x) along the depth x of the silicon film makes a transition from a strong single peak at the center of the silicon film to the onset of a symmetric double peak away from the center of the silicon film. (c) 2012 American Institute of Physics.
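
A small numerical illustration of the C_inv = C_ox criterion mentioned above: the oxide capacitance per unit area is epsilon_ox / t_ox, so for a thin gate oxide the threshold condition asks the inversion layer capacitance to reach that same value. The oxide thicknesses and permittivity below are generic SiO2 numbers chosen for illustration; the abstract does not list the device parameters used.

    EPS0 = 8.854e-12          # vacuum permittivity, F/m
    K_SIO2 = 3.9              # relative permittivity of SiO2 (illustrative choice)

    def c_ox(t_ox_nm):
        # Oxide capacitance per unit area, F/m^2.
        return K_SIO2 * EPS0 / (t_ox_nm * 1e-9)

    for t in (1.0, 2.0, 5.0):
        print(f"t_ox = {t} nm -> C_ox = {c_ox(t):.3e} F/m^2 "
              f"({c_ox(t) * 1e-4 * 1e6:.2f} uF/cm^2)")
    # At threshold, per the criterion in the abstract, C_inv equals this C_ox.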

Relevance:

80.00%

Publisher:

Abstract:

Theoretical and computational frameworks for synaptic plasticity and learning have a long and cherished history, with few parallels within the well-established literature for plasticity of voltage-gated ion channels. In this study, we derive rules for plasticity in hyperpolarization-activated cyclic nucleotide-gated (HCN) channels, and assess the synergy between synaptic and HCN channel plasticity in establishing stability during synaptic learning. To do this, we employ a conductance-based model for the hippocampal pyramidal neuron, and incorporate synaptic plasticity through the well-established Bienenstock-Cooper-Munro (BCM)-like rule, wherein the direction and strength of the plasticity depend on the concentration of calcium influx. Under this framework, we derive a rule for HCN channel plasticity that establishes homeostasis of the synaptically driven firing rate, and incorporate such plasticity into our model. In demonstrating that this rule for HCN channel plasticity helps maintain firing rate homeostasis after bidirectional synaptic plasticity, we observe a linear relationship between synaptic plasticity and HCN channel plasticity for maintaining firing rate homeostasis. Motivated by this linear relationship, we derive a calcium-dependent rule for HCN-channel plasticity, and demonstrate that firing rate homeostasis is maintained in the face of synaptic plasticity when moderate and high levels of cytosolic calcium influx induce depression and potentiation of the HCN-channel conductance, respectively. Additionally, we show that such synergy between synaptic and HCN-channel plasticity enhances the stability of synaptic learning through metaplasticity in the BCM-like synaptic plasticity profile. Finally, we demonstrate that the synergistic interaction between synaptic and HCN-channel plasticity preserves robustness of information transfer across the neuron under a rate-coding schema. Our results establish specific physiological roles for experimentally observed plasticity in HCN channels accompanying synaptic plasticity in hippocampal neurons, and uncover potential links between HCN-channel plasticity and calcium influx, dynamic gain control and stable synaptic learning.
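
A highly simplified sketch of the kind of co-plasticity described above: after an imposed synaptic potentiation, a calcium-dependent HCN-conductance rule (high calcium potentiates g_h, moderate calcium depresses it) pulls the firing rate back toward its pre-potentiation value. All functional forms, thresholds and rates here are invented for illustration; this is not the conductance-based neuron model or the derived plasticity rules of the study.

    # Invented parameters (illustration only).
    RATE_TARGET = 5.0                 # Hz
    CA_PER_HZ = 0.1                   # crude calcium-per-firing-rate proxy
    CA_MID = CA_PER_HZ * RATE_TARGET  # boundary between "moderate" and "high" calcium
    ETA_H = 0.05                      # HCN plasticity rate

    def firing_rate(w, g_h):
        # Toy rate model: synaptic drive w raises the rate, the h-conductance g_h damps it.
        return max(0.0, 20.0 * w - 6.0 * g_h)

    def hcn_update(g_h, ca):
        # Calcium-dependent HCN rule in the spirit of the abstract:
        # high calcium potentiates g_h (pulling the rate down),
        # moderate calcium depresses g_h (letting the rate rise).
        return max(0.0, g_h + ETA_H * (ca - CA_MID))

    w, g_h = 0.4, 0.5
    print("rate before synaptic potentiation:", firing_rate(w, g_h), "Hz")

    w = 0.7                            # an imposed synaptic potentiation event
    for _ in range(300):
        ca = CA_PER_HZ * firing_rate(w, g_h)
        g_h = hcn_update(g_h, ca)

    print("rate after HCN plasticity settles:", firing_rate(w, g_h), "Hz; target", RATE_TARGET)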

Relevance:

80.00%

Publisher:

Abstract:

In a networked society, governing advocacy groups and networks through decentralized systems of policy implementation has been a central interest of the governance network literature. This paper addresses the governance of networks in the context of Indian agrarian societies, taking as a case example a welfare scheme for the Indian rural poor. We explore context-specific regulatory dynamics through a situated agent-based architectural framework. The effects of various regulatory strategies that can be adopted by the governing node are tested under various action arenas through an experimental design. Results show the impact of regulatory strategies on the resource dependencies and asymmetries in the network relationships. This indicates that the optimal feasible regulatory strategy in a networked society is institutionally rational and context dependent. Further, we show that the situated MAS architecture is a natural fit for an institutional understanding of the dynamics (Ostrom et al. in Rules, games, and common-pool resources, 1994).
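
A toy flavour of the agent-based setup: agents hold resources, a governing node applies a regulatory strategy (here, a simple cap on how much any agent may appropriate per round), and the resulting resource asymmetry is compared across strategies. Everything below, agents, strategies and payoffs alike, is invented for illustration and bears no relation to the actual situated MAS architecture, welfare-scheme data or action arenas in the paper.

    import random
    import statistics

    random.seed(7)

    def run_rounds(n_agents=50, rounds=100, cap=None):
        # Each agent tries to appropriate a random share per round; the governing
        # node's regulatory strategy is modelled as an optional per-round cap.
        wealth = [0.0] * n_agents
        power = [random.uniform(0.5, 1.5) for _ in range(n_agents)]   # heterogeneous agents
        for _ in range(rounds):
            for i in range(n_agents):
                claim = random.random() * power[i]
                if cap is not None:
                    claim = min(claim, cap)
                wealth[i] += claim
        return wealth

    for label, cap in [("unregulated", None), ("capped at 0.5", 0.5)]:
        w = run_rounds(cap=cap)
        print(f"{label}: mean wealth {statistics.mean(w):.1f}, "
              f"spread (stdev) {statistics.stdev(w):.1f}")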

Relevance:

80.00%

Publisher:

Abstract:

Two atmospheric inversions (one finely resolved and one process-discriminating) and a process-based model for land surface exchanges are brought together to analyse the variations of methane emissions from 1990 to 2009. A focus is put on the role of natural wetlands and on the years 2000-2006, a period of stable atmospheric concentrations. From 1990 to 2000, the top-down and bottom-up approaches agree on the time-phasing of global total and wetland emission anomalies. The process-discriminating inversion indicates that wetlands dominate the time variability of methane emissions (90% of the total variability). The contribution of tropical wetlands to the anomalies is found to be large, especially during the post-Pinatubo years (global negative anomalies with minima between -41 and -19 Tg yr(-1) in 1992) and during the alternating 1997-1998 El Niño / 1998-1999 La Niña (maximal anomalies in tropical regions between +16 and +22 Tg yr(-1) for the inversions, and anomalies due to tropical wetlands between +12 and +17 Tg yr(-1) for the process-based model). Between 2000 and 2006, during the stagnation of methane concentrations in the atmosphere, the top-down and bottom-up approaches agree that South America is the main region contributing to anomalies in natural wetland emissions, but they disagree on the sign and magnitude of the flux trend in the Amazon basin. A negative trend (-3.9 +/- 1.3 Tg yr(-1)) is inferred by the process-discriminating inversion, whereas a positive trend (+1.3 +/- 0.3 Tg yr(-1)) is found by the process model. Although process-based models have their own caveats and may not take into account all processes, the positive trend found by the bottom-up approach is considered more likely because it is a robust feature of the process-based model, consistent with analysed precipitation and the satellite-derived extent of inundated areas. In contrast, the surface-data-based inversions lack constraints over South America. This result suggests the need for a re-interpretation of the large increase found in anthropogenic methane inventories after 2000.

Relevance:

80.00%

Publisher:

Abstract:

Several statistical downscaling models have been developed in the past couple of decades to assess the hydrologic impacts of climate change by projecting station-scale hydrological variables from large-scale atmospheric variables simulated by general circulation models (GCMs). This paper presents and compares different statistical downscaling models that use multiple linear regression (MLR), positive coefficient regression (PCR), stepwise regression (SR), and support vector machine (SVM) techniques for estimating monthly rainfall amounts in the state of Florida. Mean sea level pressure, air temperature, geopotential height, specific humidity, U wind, and V wind are used as the explanatory variables/predictors in the downscaling models. Data for these variables are obtained from the National Centers for Environmental Prediction-National Center for Atmospheric Research (NCEP-NCAR) reanalysis dataset and the Canadian Centre for Climate Modelling and Analysis (CCCma) Coupled Global Climate Model, version 3 (CGCM3) GCM simulations. Principal component analysis (PCA) and the fuzzy c-means clustering method (FCM) are used as part of the downscaling models to reduce the dimensionality of the dataset and identify clusters in the data, respectively. Evaluation of the performance of the models using different error and statistical measures indicates that the SVM-based model performed better than all the other models in reproducing most monthly rainfall statistics at 18 sites. Output from the third-generation CGCM3 GCM for the A1B scenario was used for future projections. For the projection period 2001-10, MLR was used to relate variables at the GCM and NCEP grid scales. Use of MLR in linking the predictor variables at the GCM and NCEP grid scales yielded better reproduction of monthly rainfall statistics at most of the stations (12 out of 18) compared to the spatial interpolation technique used in earlier studies.
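
A condensed scikit-learn sketch of the SVM-style downscaling step described above: standardize the large-scale predictors, reduce them with PCA, and regress station rainfall on the leading components with support vector regression. The synthetic predictor/rainfall arrays and all hyperparameters are placeholders; the real models are fitted to NCEP-NCAR reanalysis and CGCM3 fields and also use fuzzy c-means clustering, which is omitted here.

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.svm import SVR

    rng = np.random.default_rng(3)

    # Placeholder data: 360 months x 30 gridded predictor values (e.g. MSLP, humidity, winds),
    # and monthly rainfall at one hypothetical station.
    X = rng.normal(size=(360, 30))
    rain = np.maximum(0, 50 + X[:, :5].sum(axis=1) * 10 + rng.normal(0, 5, 360))

    model = make_pipeline(
        StandardScaler(),          # put predictors on a common scale
        PCA(n_components=5),       # dimensionality reduction, as in the paper's PCA step
        SVR(C=10.0, epsilon=1.0),  # support-vector regression for the downscaling relation
    )
    model.fit(X[:300], rain[:300])
    print("hold-out R^2:", model.score(X[300:], rain[300:]))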

Relevance:

80.00%

Publisher:

Abstract:

Robotic surgical tools used in minimally invasive surgeries (MIS) require miniaturized and reliable actuators for precise positioning and control of the end-effector. Miniature pneumatic artificial muscles (MPAMs) are a good choice due to their inert nature, high force-to-weight ratio, and fast actuation. In this paper, we present the development of miniaturized braided pneumatic muscles with an outer diameter of ~1.2 mm and a high contraction ratio of about 18%, capable of providing a pull force in excess of 4 N at a supply pressure of 0.8 MPa. We present the details of the developed experimental setup, experimental data on contraction and force as a function of applied pressure, and a characterization of the MPAM. We also present a simple model of the braided pneumatic muscle based on kinematics and experimental data, and show that the model predicts the contraction in length to within 20% of the measured value. Finally, a robust controller for the MPAMs is developed and validated with experiments; the MPAMs are shown to have a time constant of ~10 ms, making them suitable for actuating endoscopic and robotic surgical tools.
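
For context on what a kinematics-based model of a braided muscle looks like, the sketch below evaluates the classical idealized braided (McKibben-type) actuator relation F = (pi * D0^2 * P / 4) * (3*cos^2(theta) - 1), where D0 is the diameter the braid would have at a 90-degree braid angle and theta is the current braid angle measured from the long axis. This is the textbook idealization, not the kinematics-and-experimental-data model of the paper, and the geometry numbers are invented assumptions.

    import math

    def ideal_braided_muscle_force(pressure_pa, d0_m, braid_angle_deg):
        # Classical ideal braided-muscle relation (no wall thickness, friction or end effects):
        # F = (pi * D0^2 * P / 4) * (3*cos^2(theta) - 1)
        theta = math.radians(braid_angle_deg)
        return (math.pi * d0_m ** 2 * pressure_pa / 4.0) * (3.0 * math.cos(theta) ** 2 - 1.0)

    # Assumed geometry (illustration only): D0 = 2.5 mm (larger than the resting outer
    # diameter by construction, since it is the hypothetical diameter at a 90-degree braid
    # angle) and a braid angle of 25 degrees, at the 0.8 MPa supply pressure quoted above.
    P = 0.8e6
    print("ideal pull force: %.2f N" % ideal_braided_muscle_force(P, 2.5e-3, 25.0))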