959 results for Polynomial distributed lag models


Relevance:

30.00%

Publisher:

Abstract:

The report explores the problem of detecting complex point targets in a MIMO radar system. A complex point target is a mathematical and statistical model for a radar target that is not resolved in space but exhibits varying complex reflectivity across the different bistatic view angles. The complex reflectivity can be modeled as a complex stochastic process whose index set is the set of all bistatic view angles, and the parameters of the stochastic process follow from an analysis of a target model comprising a number of ideal point scatterers randomly located within some radius of the target's center of mass. The proposed complex point targets may be applicable to statistical inference in multistatic or MIMO radar systems. Six different target models are summarized here: three 2-dimensional (Gaussian, Uniform Square, and Uniform Circle) and three 3-dimensional (Gaussian, Uniform Cube, and Uniform Sphere). They are assumed to have different distributions on the location of the point scatterers within the target. We develop data models for the received signals from such targets in a MIMO radar system with distributed assets and partially correlated signals, and consider the resulting detection problem, which reduces to the familiar Gauss-Gauss detection problem. We illustrate that the target parameters and the transmit signal influence detector performance through the target extent and the SNR, respectively. A series of receiver operating characteristic (ROC) curves is generated to show the impact of varying SNR on the detector. The Kullback–Leibler (KL) divergence is applied to approximate the mean difference between the density functions of the scatterers inside the target models, showing how detector performance changes with the target extent of the point scatterers.
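
Since the detection problem reduces to a Gauss-Gauss test, the separation between the two hypothesis densities can be summarized by the closed-form KL divergence between multivariate Gaussians. A minimal sketch follows; the means, covariances, and dimensions are illustrative placeholders, not the report's actual data models:

```python
import numpy as np

def gauss_kl(mu0, cov0, mu1, cov1):
    """KL divergence D(N0 || N1) between two multivariate Gaussians."""
    k = len(mu0)
    inv1 = np.linalg.inv(cov1)
    diff = np.asarray(mu1) - np.asarray(mu0)
    return 0.5 * (np.trace(inv1 @ np.asarray(cov0))
                  + diff @ inv1 @ diff
                  - k
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

# Identical distributions have zero divergence.
print(gauss_kl([0, 0], np.eye(2), [0, 0], np.eye(2)))  # 0.0
```

Sweeping the hypothesized target covariance through such a function is one way to see how detector separability grows with target extent.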

Relevance:

30.00%

Publisher:

Abstract:

This dissertation presents competitive control methodologies for small-scale power systems (SSPS). An SSPS is a collection of sources and loads that share a common network and can be isolated during terrestrial disturbances. Micro-grids, naval ship electric power systems (NSEPS), aircraft power systems, and telecommunication system power systems are typical examples of SSPS. The analysis and development of control systems for an SSPS is complicated by the lack of a defined slack bus. In addition, any change in a load or source influences the system parameters in real time. Therefore, the control system should provide the required flexibility to ensure operation as a single aggregated system. In most cases, the sources and loads of an SSPS must be equipped with power electronic interfaces, which can be modeled as dynamic controllable quantities. The mathematical formulation of the micro-grid is carried out with the help of game theory, optimal control, and the fundamental theory of electrical power systems. The micro-grid can then be viewed as a dynamical multi-objective optimization problem with nonlinear objectives and variables. Detailed analysis of optimal solutions was carried out for startup transient modeling, bus selection modeling, and the level of communication within the micro-grid. In each approach a detailed mathematical model is formed to observe the system response. A differential game theoretic approach was also used for modeling and optimizing startup transients. The startup transient controller was implemented with open-loop, PI, and feedback control methodologies, and a hardware implementation was carried out to validate the theoretical results. The proposed game-theoretic controller outperforms the traditional PI controller during startup. In addition, the optimal transient surface is necessary when implementing the feedback controller for startup transients.
Further, the experimental results are in agreement with the theoretical simulation. Bus selection and team communication were modeled with discrete and continuous game theory models. Although players have multiple choices, this controller is capable of choosing the optimum bus. The team communication structures are then able to optimize the players' Nash equilibrium point. All mathematical models are based on the local information of the load or source. As a result, these models are the keys to developing accurate distributed controllers.
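
The bus selection game above rests on computing Nash equilibria from payoff information. As a toy illustration of the underlying idea (the payoff matrices below are hypothetical, not the dissertation's micro-grid model), a pure-strategy Nash equilibrium of a two-player bimatrix game can be found by mutual best-response checks:

```python
import numpy as np

def pure_nash(payoff_a, payoff_b):
    """Return all pure-strategy Nash equilibria (row, col) of a bimatrix game:
    cells where neither player can gain by deviating unilaterally."""
    A, B = np.asarray(payoff_a), np.asarray(payoff_b)
    eqs = []
    for i in range(A.shape[0]):
        for j in range(A.shape[1]):
            if A[i, j] >= A[:, j].max() and B[i, j] >= B[i, :].max():
                eqs.append((i, j))
    return eqs

# Prisoner's dilemma payoffs: mutual defection (1, 1) is the unique equilibrium.
A = [[3, 0], [5, 1]]
B = [[3, 5], [0, 1]]
print(pure_nash(A, B))  # [(1, 1)]
```

In the dissertation's setting the "players" are sources or loads and the payoffs come from local measurements, but the equilibrium logic is the same.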

Relevance:

30.00%

Publisher:

Abstract:

Sensor networks have been an active research area in the past decade due to the variety of their applications. Many research studies have been conducted to solve the problems underlying the middleware services of sensor networks, such as self-deployment, self-localization, and synchronization. With these middleware services, sensor networks have grown into a mature technology for detection and surveillance in many real-world applications. Individual sensors are small, so they can be deployed in areas with limited space and make unobstructed measurements in locations that traditional centralized systems would have trouble reaching. However, sensor networks face a few physical limitations that can prevent sensors from performing at their maximum potential: individual sensors have a limited power supply, the wireless band can become cluttered when multiple sensors transmit at the same time, and the limited communication range of individual sensors means the network may not have a 1-hop communication topology, making routing a problem in many cases. Carefully designed algorithms can alleviate these physical limitations and allow sensor networks to be utilized to their full potential. Graphical models are an intuitive choice for designing sensor network algorithms. This thesis focuses on a classic application in sensor networks: detecting and tracking targets. It develops feasible inference techniques for sensor networks using statistical graphical-model inference, binary sensor detection, event isolation, and dynamic clustering. The main strategy is to use only binary data for rough global inferences, and then dynamically form small-scale clusters around the target for detailed computations. This framework is then extended to network topology manipulation, so that it can be applied to tracking in different network topology settings.
Finally, the system was tested in both simulation and real-world environments. The simulations were performed on various network topologies, from regularly distributed networks to randomly distributed networks. The results show that the algorithm performs well in randomly distributed networks and hence requires minimal deployment effort. The experiments were carried out in both corridor and open-space settings. An in-home fall-detection system was simulated with real-world settings; it was set up with 30 Bumblebee radars and 30 ultrasonic sensors driven by TI EZ430-RF2500 boards scanning a typical 800 sq ft apartment. The Bumblebee radars are calibrated to detect a falling human body, and the two-tier tracking algorithm is used on the ultrasonic sensors to track the location of elderly residents.
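
The two-tier strategy (a rough global inference from binary data, then a local cluster for detailed computation) can be sketched as follows. The sensor layout, cluster radius, and simple centroid estimators here are illustrative stand-ins for the thesis' graphical-model inference:

```python
import numpy as np

def two_tier_estimate(sensors, readings, radius=2.0):
    """Tier 1: rough target position from binary detections only.
    Tier 2: form a cluster of sensors near that estimate and refine."""
    sensors = np.asarray(sensors, dtype=float)
    triggered = sensors[np.asarray(readings) == 1]
    if len(triggered) == 0:
        return None                        # no detection anywhere
    coarse = triggered.mean(axis=0)        # tier 1: centroid of firing sensors
    cluster = sensors[np.linalg.norm(sensors - coarse, axis=1) <= radius]
    return cluster.mean(axis=0)            # tier 2: local cluster refines

grid = [(0, 0), (1, 0), (0, 1), (10, 10)]
print(two_tier_estimate(grid, [1, 1, 1, 0]))
```

A real deployment would weight the tier-2 cluster by richer local measurements (e.g. ultrasonic ranges) rather than reusing the binary data.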

Relevance:

30.00%

Publisher:

Abstract:

This thesis presents strategies for the use of plug-in electric vehicles on smart grids and microgrids. MATLAB is used as the design tool for all models and simulations. First, a scenario is explored in which the dispatchable loads of electric vehicles stabilize a microgrid with a high penetration of renewable power generation. Grid components for a microgrid with 50% photovoltaic solar production are sized through an optimization routine to maintain storage system, load, and vehicle states over a 24-hour period. This portion finds that dispatchable loads can be used to guard against unpredictable losses in renewable generation output. Second, the use of distributed control strategies for charging electric vehicles via an agent-based approach on a smart grid is studied. The vehicles are regarded as additional loads on top of a primary forecasted load and use information exchanged with the grid to make their charging decisions. Three lightweight control strategies and their effects on the power grid are presented. The findings are that charging behavior can be shaped, and peak loads on the grid reduced, through the use of distributed control strategies.
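
One simple way to see how dispatchable EV loads can flatten a grid profile is greedy valley-filling: each unit of charging demand is placed in the currently least-loaded hour. This sketch is illustrative only; the thesis' MATLAB optimization and agent-based strategies are considerably richer:

```python
def schedule_charging(base_load, energy_units):
    """Greedy valley-filling: assign each unit of EV charging demand
    to the hour with the lowest current total load."""
    load = list(base_load)
    for _ in range(energy_units):
        h = load.index(min(load))  # least-loaded hour so far
        load[h] += 1
    return load

base = [5, 3, 2, 4]                # forecasted load per hour
print(schedule_charging(base, 4))  # [5, 5, 4, 4]
```

Note that the EV demand fills the valleys without raising the original peak of 5.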

Relevance:

30.00%

Publisher:

Abstract:

Many applications, such as telepresence, virtual reality, and interactive walkthroughs, require a three-dimensional (3D) model of real-world environments. Methods such as light fields, geometric reconstruction, and computer vision use cameras to acquire visual samples of the environment and construct a model. Unfortunately, obtaining models of real-world locations is a challenging task. In particular, important environments are often actively in use, containing moving objects such as people entering and leaving the scene. The methods listed above have difficulty capturing the color and structure of the environment in the presence of moving and temporary occluders. We describe a class of cameras called lag cameras. The main concept is to generalize a camera to take samples over space and time. Such a camera can easily and interactively detect moving objects while continuously moving through the environment. Moreover, since both the lag camera and the occluder are moving, the scene behind the occluder is captured by the lag camera even from viewpoints where the occluder lies between the lag camera and the hidden scene. We demonstrate an implementation of a lag camera, complete with analysis and captured environments.

Relevance:

30.00%

Publisher:

Abstract:

Ontologies and Methods for Interoperability of Engineering Analysis Models (EAMs) in an e-Design Environment. September 2007. Neelima Kanuri, B.S., Birla Institute of Technology and Science, Pilani, India; M.S., University of Massachusetts Amherst. Directed by: Professor Ian Grosse.
Interoperability is the ability of two or more systems to exchange and reuse information efficiently. This thesis presents new techniques for interoperating engineering tools using ontologies as the basis for representing, visualizing, reasoning about, and securely exchanging abstract engineering knowledge between software systems. The specific engineering domain that is the primary focus of this report is the modeling knowledge associated with the development of engineering analysis models (EAMs). This abstract modeling knowledge has been used to support the integration of analysis and optimization tools in iSIGHT-FD, a commercial engineering environment. ANSYS, a commercial FEA tool, has been wrapped as an analysis service available inside iSIGHT-FD. An engineering analysis modeling (EAM) ontology has been developed and instantiated to form a knowledge base for representing analysis modeling knowledge. The instances of the knowledge base are the analysis models of real-world applications. To illustrate how abstract modeling knowledge can be exploited for useful purposes, a cantilever I-beam design optimization problem has been used as a test-bed proof-of-concept application. Two distinct finite element models of the I-beam are available to analyze a given beam design: a beam-element finite element model with potentially lower accuracy but significantly reduced computational cost, and a high-fidelity, high-cost shell-element finite element model. The goal is to obtain an optimized I-beam design at minimum computational expense. An intelligent KB tool was developed and implemented in FiPER.
This tool reasons about the modeling knowledge to intelligently shift between the beam and shell element models during an optimization process, selecting the best analysis model for a given optimization design state. In addition to improved interoperability and design optimization, methods are developed and presented that demonstrate the ability to operate on ontological knowledge bases to perform important engineering tasks. One such method is an automatic technical report generation method, which converts the modeling knowledge associated with an analysis model into a flat technical report. The second is a secure knowledge sharing method, which allocates permissions to portions of knowledge to control knowledge access and sharing. Acting together, the two methods enable recipient-specific, fine-grained, controlled knowledge viewing and sharing in an engineering workflow integration environment such as iSIGHT-FD. Together they help reduce the large-scale inefficiencies in current product design and development cycles caused by poor knowledge sharing and reuse between people and software engineering tools. This work is a significant advance in both the understanding and the application of knowledge integration in a distributed engineering design framework.
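
The heart of the KB tool's reasoning, choosing the cheapest analysis model that still meets the accuracy required at the current design state, can be caricatured in a few lines. The model names, accuracy scores, and costs below are invented placeholders, not values from the thesis:

```python
def select_model(accuracy_needed):
    """Pick the cheapest analysis model whose (hypothetical) accuracy
    score meets the stated requirement."""
    models = [  # (name, accuracy score, relative cost) -- illustrative only
        ("beam-element", 0.70, 1.0),
        ("shell-element", 0.95, 20.0),
    ]
    feasible = [m for m in models if m[1] >= accuracy_needed]
    return min(feasible, key=lambda m: m[2])[0]

print(select_model(0.6))  # beam-element: cheap model suffices
print(select_model(0.9))  # shell-element: high fidelity required
```

In the thesis this decision is driven by ontological modeling knowledge rather than a hard-coded table, but the cost-versus-fidelity trade-off is the same.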

Relevance:

30.00%

Publisher:

Abstract:

Recent advancements in cloud computing have enabled the proliferation of distributed applications, which require the management and control of multiple services. However, without an efficient mechanism for scaling services in response to changing environmental conditions and numbers of users, application performance might suffer, leading to Service Level Agreement (SLA) violations and inefficient use of hardware resources. We introduce a system for controlling the complexity of scaling applications composed of multiple services, using mechanisms based on the fulfillment of SLAs. We present how service monitoring information can be used in conjunction with service level objectives, predictions, and correlations between performance indicators to optimize the allocation of services belonging to distributed applications. We validate our models using experiments and simulations involving a distributed enterprise information system. We show how discovering correlations between application performance indicators can serve as a basis for creating refined service level objectives, which can then be used for scaling the application and improving its overall performance under similar conditions.

Relevance:

30.00%

Publisher:

Abstract:

Most statistical analysis, in theory and practice, is concerned with static models: models with a proposed set of parameters whose values are fixed across observational units. Static models implicitly assume that the quantified relationships remain the same across the design space of the data. While this is reasonable under many circumstances, it can be a dangerous assumption when dealing with sequentially ordered data. The mere passage of time always brings fresh considerations, and the interrelationships among parameters, or subsets of parameters, may need to be continually revised. When data are gathered sequentially, dynamic interim monitoring may be useful, as new subject-specific parameters are introduced with each new observational unit. Sequential imputation via dynamic hierarchical models is an efficient strategy for handling missing data and analyzing longitudinal studies. Dynamic conditional independence models offer a flexible framework that exploits the Bayesian updating scheme for capturing the evolution of both population and individual effects over time. While static models often describe aggregate information well, they often fail to reflect conflicts in the information at the individual level. Dynamic models prove advantageous over static models in capturing both individual and aggregate trends. Computations for such models can be carried out via the Gibbs sampler. An application using small-sample, repeated-measures, normally distributed growth curve data is presented.
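
As a minimal illustration of the Gibbs sampler mentioned above, here is a two-block sampler for the mean and precision of normal data with conjugate priors. This is a generic textbook example, not the dissertation's dynamic hierarchical model:

```python
import random
import statistics

def gibbs_normal(data, iters=2000, seed=1):
    """Alternate draws of mu | tau and tau | mu for normal data,
    with a vague Normal prior on mu and a Gamma(1, 1) prior on tau."""
    random.seed(seed)
    n, xbar = len(data), statistics.fmean(data)
    tau, mus = 1.0, []
    for _ in range(iters):
        # mu | tau, data ~ Normal(posterior mean, posterior sd)
        prec = 0.001 + n * tau                 # prior precision 0.001 (vague)
        mu = random.gauss(n * tau * xbar / prec, prec ** -0.5)
        # tau | mu, data ~ Gamma(shape = 1 + n/2, rate = 1 + SSE/2)
        sse = sum((x - mu) ** 2 for x in data)
        tau = random.gammavariate(1 + n / 2, 1 / (1 + sse / 2))
        mus.append(mu)
    return statistics.fmean(mus[iters // 2:])  # posterior mean after burn-in

data = [4.8, 5.2, 5.0, 4.9, 5.1] * 4
print(round(gibbs_normal(data), 1))
```

Note that `random.gammavariate` takes a scale parameter, hence the reciprocal of the rate. A dynamic model would add a time-evolution step for the parameters between draws.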

Relevance:

30.00%

Publisher:

Abstract:

Cloud computing has evolved to become an enabler for delivering access to large-scale distributed applications running on managed, network-connected computing systems. This makes it possible to host Distributed Enterprise Information Systems (dEISs) in cloud environments while enforcing strict performance and quality-of-service requirements defined using Service Level Agreements (SLAs). SLAs define the performance boundaries of distributed applications and are enforced by a cloud management system (CMS) that dynamically allocates the available computing resources to the cloud services. We present two novel VM-scaling algorithms focused on dEIS systems, which detect the most appropriate scaling conditions using performance models of distributed applications derived from constant-workload benchmarks, together with SLA-specified performance constraints. We simulate the VM-scaling algorithms in a cloud simulator and compare them against trace-based performance models of dEISs. We compare a total of three SLA-based VM-scaling algorithms (one using prediction mechanisms) based on a real-world application scenario involving a large, variable number of users. Our results show that it is beneficial to use autoregressive predictive SLA-driven scaling algorithms in cloud management systems for guaranteeing performance invariants of distributed cloud applications, as opposed to using only reactive SLA-based VM-scaling algorithms.
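
The advantage of predictive over reactive scaling can be seen with even a crude stand-in for the paper's autoregressive predictors: extrapolate the recent load trend one step ahead and size the VM pool for the forecast rather than for the current load. Capacities and loads below are invented:

```python
def predictive_vm_count(load_history, capacity_per_vm):
    """Forecast next-interval load by extrapolating the most recent trend,
    then size the VM pool to cover it (ceiling division)."""
    if len(load_history) < 2:
        return 1
    forecast = load_history[-1] + (load_history[-1] - load_history[-2])
    return max(1, -(-int(forecast) // capacity_per_vm))

load = [100, 120, 140]   # requests/s, rising trend
print(predictive_vm_count(load, capacity_per_vm=50))  # 4
```

A purely reactive policy would provision for the current 140 req/s (3 VMs) and breach the SLA when the load reaches 160; the forecast provisions the fourth VM ahead of time.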

Relevance:

30.00%

Publisher:

Abstract:

In this paper we present BitWorker, a platform for community distributed computing based on BitTorrent. Any splittable task can be easily specified by a user in a meta-information task file, such that it can be downloaded and performed by other volunteers. Peers find each other using Distributed Hash Tables, download existing results, and compute missing ones. Unlike existing distributed computing schemes that rely on centralized coordination points, our scheme is fully distributed and therefore highly robust. We evaluate the performance of BitWorker using mathematical models and real tests, showing processing and robustness gains. BitWorker is available for download and use by the community.
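
A peer's decision of which piece of the task to compute next, given what is already finished or claimed by others, might look like the following. The names and the simple preference rule are illustrative; BitWorker's actual piece selection may differ:

```python
def next_task(all_pieces, completed, in_progress):
    """Pick the next missing piece to compute, preferring pieces that
    no other peer has claimed (avoids duplicated work when possible)."""
    missing = [p for p in all_pieces if p not in completed]
    free = [p for p in missing if p not in in_progress]
    return (free or missing or [None])[0]

pieces = ["p0", "p1", "p2", "p3"]
print(next_task(pieces, completed={"p0"}, in_progress={"p1"}))  # p2
```

Completed pieces would be discovered by downloading results from the swarm, so a joining peer never recomputes work that already exists.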

Relevance:

30.00%

Publisher:

Abstract:

The convergence between the Eurasian and Arabian plates has created a complicated structural setting in the Eastern Turkish high plateau (ETHP), particularly around the Karlıova Triple Junction (KTJ), where the Eurasian, Arabian, and Anatolian plates intersect. This region of interest includes the junction of the North Anatolian Shear Zone (NASZ) and the East Anatolian Shear Zone (EASZ), which form, respectively, the northern border of the westwardly extruding Anatolian Scholle and the western boundary of the ETHP. In this study we focused on a poorly studied component of the KTJ, the Varto Fault Zone (VFZ), and the adjacent secondary structures, which have complex structural settings. Through integrated analyses of remote sensing and field observations, we identified a widely distributed transpressional zone whose northernmost boundary is formed by the Varto segment of the VFZ. The other segments, namely the Leylekdağ and Çayçatı segments, are oblique-reverse faults clearly defined by uplifted topography along their strikes. The measured cumulative uplifts of 515 and 265 m for Mt. Leylek and Mt. Dodan, respectively, yield a minimum uplift rate of 0.35 mm/a for the last 2.2 Ma. The multi-oriented secondary structures correlate mostly with the “distributed strike-slip” and “distributed transpressional” patterns of analogue experiments. The misfits in strike between our observations and the experimental results for some secondary faults are reconciled by an approximately 20° to 25° clockwise restoration of all relevant structures, a rotation palaeomagnetically measured to have occurred since ~2.8 Ma. The detected fault patterns and their true nature align well with a transpressional tectonic setting, supporting previously suggested stationary triple junction models.

Relevance:

30.00%

Publisher:

Abstract:

Here we present an improved astronomical timescale since 5 Ma as recorded at ODP Site 1143 in the southern South China Sea, using a recently published Asian summer monsoon record (hematite-to-goethite content ratio, Hm/Gt) and a parallel benthic d18O record. Correlating the benthic d18O record to the stack of 57 globally distributed benthic d18O records (LR04 stack), and the Hm/Gt curve to the 65°N summer insolation curve, is a particularly useful approach for obtaining refined timescales, and it constitutes the basis for our effort. Our proposed modifications result in a more accurate and robust chronology than the existing astronomical timescale for ODP Site 1143. This updated timescale further enables a detailed study of the orbital variability of the low-latitude Asian summer monsoon throughout the Plio-Pleistocene. Comparison of the Hm/Gt record with the d18O record from the same core reveals that the oscillations of the low-latitude Asian summer monsoon over orbital scales differed considerably from the glacial-interglacial climate cycles. The popular view that the summer monsoon intensifies during interglacial stages and weakens during glacial stages appears to be too simplistic for low-latitude Asia. There, some strong summer monsoon intervals appear to have also occurred during glacial stages, in addition to their increased occurrence during interglacial stages. Vice versa, some notably weak summer monsoon intervals also occurred during interglacial stages, alongside their anticipated occurrence during glacial stages. The well-known mid-Pleistocene transition (MPT) is identified only in the benthic d18O record, not in the Hm/Gt record from the same core. This suggests that the MPT may be a feature of high- and middle-latitude climates, possibly determined by high-latitude ice sheet dynamics.
For the low-latitude monsoonal climate, orbital-scale variations respond more directly to insolation and are little influenced by high-latitude processes; thus the MPT is likely not recorded. In addition, the Hm/Gt record suggests that low-latitude Asian summer monsoon intensity has followed a long-term decreasing trend since 2.8 Ma, with increased oscillation amplitude. This long-term variability is presumably linked to Northern Hemisphere glaciation since that time.

Relevance:

30.00%

Publisher:

Abstract:

Based on the faunal record of planktonic foraminifers in three long gravity sediment cores from the eastern equatorial Atlantic, the sea-surface temperature history over the last 750,000 years was studied at a resolution of 3,000 to 10,000 years. Detailed oxygen-isotope and paleomagnetic stratigraphy helped to identify the following major faunal events: Globorotaloides hexagonus and Globorotalia tumida flexuosa became extinct in the eastern tropical Atlantic at the isotope stage 4/5 boundary, now dated at 68,000 years B.P. The persistent occurrence of the pink variety of Globigerinoides ruber started during late stage 12, at 410,000 years B.P. CARTUNE-age. This datum may provide an easily detectable faunal stratigraphic marker for the mid-Brunhes Chron. The updated scheme of the Ericson zones helped in recognizing a hiatus at the northwestern slope of the Sierra Leone Basin covering oxygen-isotope stages 10 to 12. Classifying the planktonic foraminifer counts into six faunal assemblages, according to the factor-analysis-derived model of Pflaumann (1985), the tropical and tropical-upwelling communities account for 57% and 86% of the variance of the faunal record at Sites 16415 and 13519, respectively. A largely continuous paleotemperature record for both winter and summer seasons was obtained from the top of the Sierra Leone Rise, with winter temperatures ranging between 20 and 25 °C and summer temperatures between 24 and 30 °C. The record of cores from greater water depths is frequently interrupted by samples with no-analogue faunal communities and/or poor preservation. Based on the seasonality signal, during cold periods the thermal equator shifted to a geographically more asymmetrical northern position.
Dissolution altering the faunal communities becomes stronger with greater water depth; the estimated mean minimum loss of specimens increases from 70% to 80% between 2,860 and 3,850 m water depth, although some species are more susceptible than others. Enhanced dissolution occurred during stage 4 and also during cold phases within warm stages 7 and 9. Correlations between the Foraminiferal Dissolution Index and the estimated sea-surface temperatures are significant. Foraminiferal flux rates, negatively correlated with the flux rates of organic carbon and of diatoms, may be a result of enhanced dissolution during cold stages, destroying still more of the faunal signal than indicated by the calculated minimum loss. The fluctuations of the oxygen-isotope curves and the winter sea-surface temperatures are fairly coherent. During warm oxygen-isotope stages the temperature maxima often lag the respective isotope minima by 5 to 15 ka. During cold stages, sea-surface temperature changes are partly out of phase and contain additional fluctuations.

Relevance:

30.00%

Publisher:

Abstract:

Constraining the nature of the Antarctic Ice Sheet (AIS) response to major past climate changes may provide a window onto future ice response and rates of sea level rise. One approach to tracking AIS dynamics, and to differentiating whole-system changes from potentially heterogeneous ice sheet sector changes, is to integrate multiple climate proxies for a specific time slice across widely distributed locations. This study presents new iceberg-rafted debris (IRD) data across the interval that includes Marine Isotope Stage 31 (MIS 31: 1.081-1.062 Ma, a span of ~19 kyr; Lisiecki and Raymo, 2005), which lies on the cusp of the mid-Pleistocene climate transition (as glacial cycles shifted from ~41,000 yr to ~100,000 yr duration). Two sites are studied: distal Ocean Drilling Program (ODP) Leg 177 Site 1090 (Site 1090) in the eastern subantarctic sector of the South Atlantic Ocean, and proximal ODP Leg 188 Site 1165 (Site 1165), near Prydz Bay, in the Indian Ocean sector of the Antarctic margin. At each of these sites, MIS 31 is marked by the presence of the Jaramillo Subchron (0.988-1.072 Ma; Lourens et al., 2004), which provides a time marker to correlate the two sites with relative precision. At both sites, records of multiple climate proxies are available to aid interpretation. The presence of IRD in sediments from our study areas, including garnets indicating a likely East Antarctic Ice Sheet (EAIS) origin, supports the conclusion that although the EAIS apparently withdrew significantly over MIS 31 in the Prydz Bay region and other sectors, some sectors of the EAIS must still have maintained marine margins capable of launching icebergs even through the warmest intervals. Thus, the EAIS did not respond in complete synchrony even to major climate changes such as MIS 31.
Further, the record at Site 1090 (supported by records from other subantarctic locations) indicates that the glacial MIS 32 should be reduced to no more than a stadial, and that the warm interval of Antarctic ice retreat that includes MIS 31 should be expanded to MIS 33-31. This revised warm interval lasted about 52 kyr, in line with several other interglacials in the benthic d18O record stack of Lisiecki and Raymo (2005), including the super-interglacials MIS 11 (duration of 50 kyr) and MIS 5 (duration of 59 kyr). The record from Antarctica-proximal Site 1165, when interpreted in accord with the record from ANDRILL-1B, indicates that in these southern high-latitude sectors, ice sheet retreat and the effects of warming lasted longer than at Site 1090, perhaps until MIS 27. In the current interpretations of the age models of the proximal sites, ice sheet retreat began relatively slowly and was not really evident until the start of MIS 31. In another, somewhat more speculative interpretation, ice sheet retreat began noticeably with MIS 33 and accelerated during MIS 31. Ice sheet inertia (the lag times in the large-scale responses of major ice sheets to a forcing) likely plays an important part in the timing and scale of these events in vulnerable sectors of the AIS.