910 results for wot,iot,iot-system,digital-twin,framework,least-squares
Abstract:
This thesis addresses the problem of scaling reinforcement learning to high-dimensional and complex tasks. Reinforcement learning here denotes a class of learning methods, based on approximate dynamic programming, that is used in artificial intelligence for the autonomous control of simulated agents or real hardware robots in dynamic and uncertain environments. To this end, regression on samples is used to determine a function that solves an "optimality equation" (Bellman) and from which approximately optimal decisions can be derived. A major hurdle is the dimensionality of the state space, which is often high and therefore poorly suited to traditional grid-based approximation methods. The goal of this thesis is to make reinforcement learning applicable to, in principle, arbitrarily high-dimensional problems by means of nonparametric function approximation (more precisely, regularization networks). Regularization networks are a generalization of ordinary basis-function networks that parameterize the sought solution by the data, so that the explicit choice of nodes/basis functions is avoided and the "curse of dimensionality" can be sidestepped for high-dimensional inputs. At the same time, regularization networks are linear approximators, which are technically easy to handle and for which the existing convergence results for reinforcement learning remain valid (unlike, for example, feed-forward neural networks). All these theoretical advantages, however, come with a very practical problem: the computational cost of regularization networks inherently scales as O(n**3), where n is the number of data points. This is especially problematic because in reinforcement learning the learning process happens online: the samples are generated by an agent/robot while it interacts with the environment. Updates to the solution must therefore be made immediately and with little computational effort. The contribution of this thesis accordingly falls into two parts. In the first part, we formulate an efficient learning algorithm for regularization networks for solving general regression tasks, tailored specifically to the requirements of online learning. Our approach builds on the methodology of recursive least squares, but can insert not only new data but also new basis functions into the existing model at constant cost per step. This is made possible by the "subset of regressors" approximation, whereby the kernel is approximated by a strongly reduced selection of the training data, and by a greedy selection procedure that picks these basis elements directly from the data stream at run time. In the second part, we carry this algorithm over to approximate policy evaluation via least-squares-based temporal-difference learning, and integrate this building block into a complete system for the autonomous learning of optimal behavior. Overall, we develop a highly data-efficient method that is particularly suited to learning problems from robotics with continuous, high-dimensional state spaces and stochastic state transitions.
In doing so, we do not rely on a model of the environment, operate largely independently of the dimension of the state space, achieve convergence with comparatively few agent-environment interactions, and, thanks to the efficient online algorithm, can also operate in time-critical real-time applications. We demonstrate the performance of our approach on two realistic and complex application examples: the RoboCup keepaway problem and the control of a (simulated) octopus tentacle.
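The recursive least-squares style of update described above is easy to illustrate in isolation. Below is a minimal sketch, assuming a fixed finite feature map rather than the thesis's subset-of-regressors kernel expansion and greedy basis selection; the class and variable names are illustrative, not taken from the thesis.

```python
import numpy as np

class OnlineRLS:
    """Recursive least squares with ridge regularization.

    Maintains P = (Phi^T Phi + lam*I)^{-1} and updates it per sample via
    the Sherman-Morrison identity, so each new observation costs O(k^2)
    for k features instead of refitting from scratch.
    """

    def __init__(self, n_features, lam=1.0):
        self.P = np.eye(n_features) / lam   # inverse regularized Gram matrix
        self.w = np.zeros(n_features)       # current weight vector

    def update(self, phi, y):
        """Fold in one sample (feature vector phi, target y)."""
        Pphi = self.P @ phi
        gain = Pphi / (1.0 + phi @ Pphi)        # Kalman-style gain vector
        self.w += gain * (y - phi @ self.w)     # correct by prediction error
        self.P -= np.outer(gain, Pphi)          # rank-one downdate of P

    def predict(self, phi):
        return phi @ self.w

# Toy usage: recover a known linear model from a noisy stream.
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0, 0.5])
rls = OnlineRLS(n_features=3, lam=0.1)
for _ in range(200):
    phi = rng.normal(size=3)
    rls.update(phi, phi @ w_true + 0.01 * rng.normal())
print(rls.w)   # close to w_true
```

The constant per-step cost is the property that makes this style of estimator viable during live agent-environment interaction; growing the basis at run time, as the thesis does, additionally requires bordering P with a new row and column.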
Abstract:
A new control scheme is presented in this thesis. Based on the NonLinear Geometric Approach, the proposed Active Control System represents a new way to view reconfigurable controllers for aerospace applications. The presence of the Diagnosis module (providing the estimation of generic signals which, depending on the case, can be faults, disturbances, or system parameters), the main feature of the Active Control System described here, is a characteristic shared by three well-known control systems: Active Fault Tolerant Controls, Indirect Adaptive Controls, and Active Disturbance Rejection Controls. The standard NonLinear Geometric Approach (NLGA) has been thoroughly investigated and then improved to extend its applicability to more complex models. The standard NLGA procedure has been modified to take account of feasible and estimable sets of unknown signals. Furthermore, the application of the Singular Perturbations approximation has led to the solution of Detection and Isolation problems in scenarios too complex to be solved by the standard NLGA. The estimation process has also been improved, where multiple redundant measurements are available, by the introduction of a new algorithm, here called "Least Squares - Sliding Mode". It guarantees optimality, in the sense of least squares, and finite estimation time, in the sense of the sliding mode. The Active Control System concept has been formalized in two controllers: a nonlinear backstepping controller and a nonlinear composite controller. Particularly interesting is the integration, in the controller design, of the estimates coming from the Diagnosis module. Stability proofs are provided for both control schemes. Finally, several aerospace applications are presented to show the applicability and effectiveness of the proposed NLGA-based Active Control System.
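The combined "Least Squares - Sliding Mode" algorithm is not spelled out in this abstract. As a hedged illustration of its least-squares half only, the sketch below fuses redundant measurements of a single unknown signal; the sensor model and numbers are invented for illustration.

```python
import numpy as np

def ls_fuse(y, h):
    """Least-squares estimate of a scalar x from redundant measurements
    y_i = h_i * x + noise, minimizing ||y - h*x||^2."""
    y, h = np.asarray(y, float), np.asarray(h, float)
    return (h @ y) / (h @ h)

# Three hypothetical sensors observing the same signal through known gains.
x_hat = ls_fuse(y=[1.02, 0.98, 2.05], h=[1.0, 1.0, 2.0])
print(x_hat)   # ~1.017
```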
Abstract:
Objective: This article seeks to explain the puzzle of why incumbents spend so much on campaigns despite most research finding that their spending has almost no effect on voters. Methods: The article uses ordinary least squares, instrumental variables, and fixed-effects regression to estimate the impact of incumbent spending on election outcomes. The estimation includes an interaction term between incumbent and challenger spending to allow the effect of incumbent spending to depend on the level of challenger spending. Results: The estimation provides strong evidence that spending by the incumbent has a larger positive impact on votes received the more money the challenger spends. Conclusion: Campaign spending by incumbents is most valuable in the races where the incumbent faces a serious challenge. Raising large sums of money to be used in close races is thus a rational choice by incumbents.
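The interaction specification is straightforward to set up. The following is a minimal sketch with statsmodels, where the file name and column names are hypothetical stand-ins for the article's data.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical frame: one row per race, with the incumbent's vote share
# and both candidates' spending (the article's exact units/transformations
# are not given here).
df = pd.read_csv("house_races.csv")  # columns are illustrative

# The interaction lets the marginal effect of incumbent spending vary with
# challenger spending: d(vote)/d(inc_spend) = b1 + b3 * chal_spend.
model = smf.ols("inc_vote_share ~ inc_spend * chal_spend", data=df).fit()
print(model.summary())
```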
Abstract:
This thesis examines two panel data sets of 48 states from 1981 to 2009 and utilizes ordinary least squares (OLS) and fixed effects models to explore the relationship between rural Interstate speed limits and fatality rates and whether rural Interstate speed limits affect non-Interstate safety. Models provide evidence that rural Interstate speed limits higher than 55 MPH lead to higher fatality rates on rural Interstates though this effect is somewhat tempered by reductions in fatality rates for roads other than rural Interstates. These results provide some but not unanimous support for the traffic diversion hypothesis that rural Interstate speed limit increases lead to decreases in fatality rates of other roads. To the author’s knowledge, this paper is the first econometric study to differentiate between the effects of 70 MPH speed limits and speed limits above 70 MPH on fatality rates using a multi-state data set. Considering both rural Interstates and other roads, rural Interstate speed limit increases above 55 MPH are responsible for 39,700 net fatalities, 4.1 percent of total fatalities from 1987, the year limits were first raised, to 2009.
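A fixed-effects specification of the kind described can be sketched as below; the file and column names are hypothetical, and the exact set of speed-limit indicators used in the thesis may differ.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: one row per state-year with a fatality rate and
# indicators for the prevailing rural Interstate speed limit.
panel = pd.read_csv("state_panel_1981_2009.csv")  # columns are illustrative

# C(state) and C(year) absorb time-invariant state traits and common shocks,
# so the limit dummies are identified from within-state changes. Separate
# dummies for 70 MPH and above-70 MPH mirror the distinction drawn above.
fe = smf.ols(
    "fatality_rate ~ limit_65 + limit_70 + limit_over_70 + C(state) + C(year)",
    data=panel,
).fit(cov_type="cluster", cov_kwds={"groups": panel["state"]})
print(fe.params.filter(like="limit"))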
Abstract:
Carbon dioxide (CO₂) has been of recent interest due to greenhouse cooling of the upper atmosphere by species such as CO₂ and NO. In the Earth's upper atmosphere, between altitudes of 75 and 110 km, a collisional energy exchange occurs between CO₂ and atomic oxygen, which promotes ground-state CO₂ to the first excited bending state. The relaxation of CO₂ following this excitation is characterized by spontaneous emission at 15 μm, directed largely away from Earth. Due to the low density of the upper atmosphere, most of this energy is not reabsorbed and thus escapes into space, leading to a local cooling effect in the upper atmosphere. To determine the efficiency of the CO₂–O collisional energy exchange, transient diode laser absorption spectroscopy was used to monitor the population of the first vibrationally excited state, ¹³CO₂(01¹0) or ν₂, as a function of time. The rate coefficient, kO(ν₂), for the vibrational relaxation of ¹³CO₂(ν₂) by O was determined by fitting laboratory measurements using a home-written linear least-squares algorithm. The rate coefficient kO(ν₂) for the vibrational relaxation of ¹³CO₂(ν₂) by atomic oxygen at room temperature was determined to be (1.6 ± 0.3) × 10⁻¹² cm³ s⁻¹, which is within the uncertainty of the rate coefficient previously found in this group for ¹²CO₂(ν₂) relaxation. The cold-temperature kO(ν₂) values were determined to be (2.1 ± 0.8) × 10⁻¹² cm³ s⁻¹ at Tfinal = 274 K, (1.8 ± 0.3) × 10⁻¹² cm³ s⁻¹ at Tfinal = 239 K, (2 ± 1) × 10⁻¹² cm³ s⁻¹ at Tfinal = 208 K, and (1.7 ± 0.3) × 10⁻¹² cm³ s⁻¹ at Tfinal = 186 K. These data did not show a definitive negative temperature dependence comparable to that found previously for ¹²CO₂.
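The linear least-squares step here is the standard one for pseudo-first-order kinetics: measured relaxation rates are regressed on atomic-oxygen density, and the slope is kO(ν₂). The sketch below uses invented numbers, not the laboratory data, and numpy in place of the group's home-written routine.

```python
import numpy as np

# Illustrative data: pseudo-first-order relaxation rates (s^-1) of
# 13CO2(v2) measured at several atomic-oxygen densities (cm^-3).
O_density = np.array([0.5e14, 1.0e14, 2.0e14, 4.0e14])
decay_rate = np.array([110.0, 190.0, 350.0, 670.0])

# decay_rate = k0 + kO * [O]: the slope of a straight-line least-squares
# fit is the rate coefficient kO(v2) in cm^3 s^-1.
A = np.vstack([O_density, np.ones_like(O_density)]).T
(kO, k0), *_ = np.linalg.lstsq(A, decay_rate, rcond=None)
print(f"kO(v2) ~ {kO:.2e} cm^3 s^-1")   # ~1.6e-12 for these made-up numbers
```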
Abstract:
Transmissible spongiform encephalopathies (TSE) form a group of human and animal diseases that share common features such as (a) distinct pathological lesions in the central nervous system, (b) transmissibility at least in experimental settings, and (c) a long incubation period. Considerable differences exist in the host range of individual TSEs, their routes of transmission, and factors influencing the host susceptibility (such as genotype). The objective of this review was to briefly describe the main epidemiological features of TSEs with emphasis on small ruminant (sheep, goats) TSE, bovine spongiform encephalopathy (BSE) in cattle and chronic wasting disease (CWD) in deer and elk.
Abstract:
Generalized linear mixed models (GLMMs) provide an elegant framework for the analysis of correlated data. Due to the non-closed form of the likelihood, GLMMs are often fit by computational procedures like penalized quasi-likelihood (PQL). Special cases of these models are generalized linear models (GLMs), which are often fit using algorithms like iteratively weighted least squares (IWLS). High computational costs and memory space constraints often make it difficult to apply these iterative procedures to data sets with a very large number of cases. This paper proposes a computationally efficient strategy based on the Gauss-Seidel algorithm that iteratively fits sub-models of the GLMM to subsetted versions of the data. Additional gains in efficiency are achieved for Poisson models, commonly used in disease mapping problems, because of their special collapsibility property which allows data reduction through summaries. Convergence of the proposed iterative procedure is guaranteed for canonical link functions. The strategy is applied to investigate the relationship between ischemic heart disease, socioeconomic status and age/gender category in New South Wales, Australia, based on outcome data consisting of approximately 33 million records. A simulation study demonstrates the algorithm's reliability in analyzing a data set with 12 million records for a (non-collapsible) logistic regression model.
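For the GLM special case, the IWLS iteration mentioned above is compact enough to sketch; the following fits a Poisson log-link model (the paper's Gauss-Seidel strategy for full GLMMs layers sub-model fitting on top of steps like this). The simulated data are illustrative.

```python
import numpy as np

def poisson_iwls(X, y, n_iter=25, tol=1e-8):
    """Fit a Poisson GLM with log link by iteratively weighted least
    squares: each step solves a weighted least-squares problem on the
    working response z."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        mu = np.exp(eta)                 # inverse log link
        z = eta + (y - mu) / mu          # working response
        W = mu                           # IWLS weights for Poisson
        XtW = X.T * W                    # X^T diag(W) via broadcasting
        beta_new = np.linalg.solve(XtW @ X, XtW @ z)
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(500), rng.normal(size=500)])
y = rng.poisson(np.exp(0.5 + 0.8 * X[:, 1]))
print(poisson_iwls(X, y))   # roughly [0.5, 0.8]
```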
Abstract:
Metals price risk management is a key issue of financial risk in metal markets: commodity prices, exchange rates, and interest rates fluctuate unpredictably, exposing both metals producers and consumers to substantial price risk. It is therefore taken into account by all participants in metal markets, including metals producers, consumers, merchants, banks, investment funds, speculators, and traders. Managing price risk provides stable income for both metals producers and consumers, and so increases the chance that a firm will invest in attractive projects. The purpose of this research is to evaluate risk management strategies in the copper market. The main tools and strategies of price risk management are hedging and other derivatives such as futures contracts, swaps, and options contracts. Hedging is a transaction designed to reduce or eliminate price risk. Derivatives are financial instruments whose returns are derived from other financial instruments, and they are commonly used for managing financial risks. Although derivatives have been around in some form for centuries, their growth has accelerated rapidly during the last 20 years. Nowadays, they are widely used by financial institutions, corporations, professional investors, and individuals. This project is focused on the over-the-counter (OTC) market and its products such as exotic options, particularly Asian options. The first part of the project is a description of basic derivatives and risk management strategies. In addition, this part discusses basic concepts of spot and futures (forward) markets, benefits and costs of risk management, and risks and rewards of positions in the derivative markets. The second part considers valuations of commodity derivatives. In this part, the option pricing model DerivaGem is applied to Asian call and put options on London Metal Exchange (LME) copper, because it is important to understand how Asian options are valued and to compare theoretical values of the options with their market-observed values. Predicting future trends of copper prices is important and would be essential to manage market price risk successfully. Therefore, the third part is a discussion of econometric commodity models. Based on this literature review, the fourth part of the project reports the construction and testing of an econometric model designed to forecast the monthly average price of copper on the LME. More specifically, this part aims at showing how LME copper prices can be explained by means of a simultaneous-equation structural model (two-stage least squares regression) connecting supply and demand variables. A simultaneous econometric model for the copper industry is built:

$$
\begin{cases}
Q_t^D = e^{-5.0485}\, P_{t-1}^{-0.1868}\, \mathrm{GDP}_t^{1.7151}\, e^{0.0158\,\mathrm{IP}_t} \\
Q_t^S = e^{-3.0785}\, P_{t-1}^{0.5960}\, T_t^{0.1408}\, P_{\mathrm{OIL},t}^{-0.1559}\, \mathrm{USDI}_t^{1.2432}\, \mathrm{LIBOR}_{t-6}^{-0.0561} \\
Q_t^D = Q_t^S
\end{cases}
$$

Imposing the market-clearing condition and solving for the price yields the reduced form

$$
P_{t-1}^{\mathrm{CU}} = e^{-2.5165}\, \mathrm{GDP}_t^{2.1910}\, e^{0.0202\,\mathrm{IP}_t}\, T_t^{-0.1799}\, P_{\mathrm{OIL},t}^{0.1991}\, \mathrm{USDI}_t^{-1.5881}\, \mathrm{LIBOR}_{t-6}^{0.0717}
$$

where $Q_t^D$ and $Q_t^S$ are world demand for and supply of copper at time t, respectively. $P_{t-1}$ is the lagged price of copper, which is the focus of the analysis in this part. $\mathrm{GDP}_t$ is world gross domestic product at time t, which represents aggregate economic activity. In addition, industrial production should be considered, so global industrial production growth, denoted $\mathrm{IP}_t$, is included in the model. $T_t$ is the time variable, a useful proxy for technological change. The price of oil at time t, denoted $P_{\mathrm{OIL},t}$, serves as a proxy for the cost of energy in producing copper. $\mathrm{USDI}_t$ is the U.S. dollar index at time t, an important variable for explaining copper supply and copper prices. Finally, $\mathrm{LIBOR}_{t-6}$ is the 6-month-lagged 1-year London Interbank Offered Rate. Although the model can be applied to other base metal industries, omitted exogenous variables, such as the price of a substitute or a combined variable related to substitute prices, have not been considered in this study. Based on this econometric model, and using a Monte-Carlo simulation analysis, the probabilities that the monthly average copper prices in 2006 and 2007 will exceed specific option strike prices are estimated. The final part evaluates risk management strategies, including options strategies, metal swaps, and simple options, in relation to the simulation results. The basic options strategies, such as bull spreads, bear spreads, and butterfly spreads, created using both call and put options in 2006 and 2007, are evaluated. Each risk management strategy in 2006 and 2007 is then analyzed based on the day of data and the price prediction model. As a result, applications stemming from this project include valuing Asian options, developing a copper price prediction model, forecasting and planning, and decision making for price risk management in the copper market.
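The two-stage least-squares mechanics behind such a structural model can be sketched generically; the toy demand system below is invented for illustration and is not the copper dataset.

```python
import numpy as np

def two_stage_ls(y, X_endog, X_exog, Z):
    """Two-stage least squares: regress the endogenous regressors on the
    instruments, then use the fitted values in the structural equation."""
    # First stage: project endogenous columns onto [exogenous, instruments].
    W = np.column_stack([X_exog, Z])
    X_hat = W @ np.linalg.lstsq(W, X_endog, rcond=None)[0]
    # Second stage: OLS of y on [fitted endogenous, exogenous].
    X2 = np.column_stack([X_hat, X_exog])
    return np.linalg.lstsq(X2, y, rcond=None)[0]

# Toy demand equation with an endogenous price and one cost-side instrument.
rng = np.random.default_rng(1)
n = 2000
cost = rng.normal(size=n)                       # instrument (supply shifter)
u = rng.normal(size=n)                          # demand shock
price = 1.0 + 0.8 * cost + 0.5 * u + rng.normal(size=n)
q = 2.0 - 1.5 * price + u                       # true price coefficient -1.5
const = np.ones(n)
beta = two_stage_ls(q, price[:, None], const[:, None], cost[:, None])
print(beta)   # first entry close to -1.5, unlike naive OLS
```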
Abstract:
Heroin prices are a reflection of supply and demand, and similar to any other market, profits motivate participation. The intent of this research is to examine the change in Afghan opium production due to political conflict affecting Europe’s heroin market and government policies. If the Taliban remain in power, or a new Afghan government is formed, the changes will affect the heroin market in Europe to a certain degree. In the heroin market, the degree of change is dependent on many socioeconomic forces such as law enforcement, corruption, and proximity to Afghanistan. An econometric model that examines the degree of these socioeconomic effects has not been applied to the heroin trade in Afghanistan before. This research uses a two-stage least squares econometric model to reveal the supply and demand of heroin in 36 different countries from the Middle East to Western Europe in 2008. An application of the two-stage least squares model to the heroin market in Europe will attempt to predict the socioeconomic consequences of Afghanistan opium production.
Abstract:
A great increase in private car ownership took place in China from 1980 to 2009 with the development of the economy. To explain the relationship between car ownership and economic and social changes, an ordinary least squares linear regression model is developed using car ownership per capita as the dependent variable, with GDP, savings deposits, and highway mileage per capita as the independent variables. The model is tested and corrected for econometric problems such as spurious correlation and cointegration. Finally, the regression model is used to project oil consumption by the Chinese transportation sector through 2015. The result shows that about 2.0 million barrels of oil per day will be consumed by private cars in the conservative scenario, and about 2.6 million barrels per day in the high-case scenario, in 2015. Both are much higher than the 2009 consumption level of 1.9 million barrels per day. It also shows that the annual growth rate of oil demand by transportation from 2010 to 2015 is 2.7%-3.1% per year in the conservative scenario and 6.9%-7.3% per year in the high-case scenario. As a result, actions like increasing oil efficiency need to be taken to deal with the challenges of the increasing demand for oil.
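A minimal Engle-Granger style check of the kind implied by "tested and corrected for spurious correlation and cointegration" might look as follows; the file and column names are hypothetical.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

# Hypothetical annual series: cars per capita and GDP per capita, 1980-2009.
df = pd.read_csv("china_cars.csv", index_col="year")  # columns illustrative

# Engle-Granger step 1: levels regression; step 2: unit-root test on the
# residuals. Stationary residuals suggest the levels relationship is
# cointegrated rather than spurious.
ols = sm.OLS(df["cars_pc"], sm.add_constant(df[["gdp_pc"]])).fit()
adf_stat, pvalue, *_ = adfuller(ols.resid)
print(f"ADF on residuals: stat={adf_stat:.2f}, p={pvalue:.3f}")
```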
Abstract:
Ethanol-gasoline fuel blends are increasingly being used in spark ignition (SI) engines due to continued growth in renewable fuels as part of a growing renewable portfolio standard (RPS). This leads to the need for a simple and accurate combustion model for ethanol-gasoline blends that is applicable to one-dimensional engine simulation. A parametric combustion model has been developed, integrated into an engine simulation tool, and validated using SI engine experimental data. The parametric combustion model was built inside a user compound in GT-Power. In this model, selected burn durations were computed using correlations as functions of physically based non-dimensional groups that were developed using the experimental engine database over a wide range of ethanol-gasoline blends, engine geometries, and operating conditions. A coefficient of variance (COV) of gross indicated mean effective pressure (IMEP) correlation was also added to the parametric combustion model. This correlation enables modeling of cyclic combustion variation as a function of engine geometry and operating conditions. The computed burn durations were then used to fit single and double Wiebe functions. The single-Wiebe parametric combustion compound used the least squares method to compute the single-Wiebe parameters, while the double-Wiebe parametric combustion compound used an analytical solution to compute the double-Wiebe parameters. These compounds were then integrated into the engine model in GT-Power through the multi-Wiebe combustion template in which the values of the Wiebe parameters (single-Wiebe or double-Wiebe) were sensed via RLT-dependence. The parametric combustion models were validated by overlaying the simulated pressure traces from GT-Power onto experimentally measured pressure traces. A thermodynamic engine model was also developed to study the effect of fuel blends, engine geometries, and operating conditions on both the burn durations and the COV of gross IMEP simulation results.
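The single-Wiebe least-squares step can be sketched outside GT-Power. The following uses the common Wiebe form with efficiency parameter a = 6.908 and invented burn data, so it illustrates the fitting idea rather than reproducing the thesis's compound.

```python
import numpy as np
from scipy.optimize import curve_fit

def wiebe(theta, theta0, dtheta, m, a=6.908):
    """Single Wiebe mass-fraction-burned curve; a = 6.908 pins the curve
    to 99.9% burned at theta0 + dtheta."""
    x = np.clip((theta - theta0) / dtheta, 0.0, None)
    return 1.0 - np.exp(-a * x ** (m + 1.0))

# Illustrative burn profile (crank angle in deg, mass fraction burned).
rng = np.random.default_rng(2)
theta = np.linspace(-10, 40, 26)
mfb = wiebe(theta, theta0=-5.0, dtheta=35.0, m=2.0) + rng.normal(0, 0.01, theta.size)

# Least-squares estimate of start of combustion, duration, and shape factor.
popt, _ = curve_fit(wiebe, theta, mfb, p0=[0.0, 30.0, 2.0])
print(dict(zip(["theta0", "dtheta", "m"], popt)))
```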
Abstract:
Following the rapid growth of China's economy, energy consumption, and especially electricity consumption, has increased enormously in China over the past 30 years. Since China has been using coal as the major energy source for electricity generation during these years, environmental problems have become more and more serious. The research question of this paper is: "Can China use alternative energies instead of coal to produce more electricity in 2030?" Hydro power, nuclear power, natural gas, wind power, and solar power are considered the possible and most popular alternative energies given China's current situation. To answer the research question, two things must be known: how much total electricity will China consume by 2030, and how much electricity can alternative energies provide in China by 2030? For a more reliable forecast, an econometric model using the ordinary least squares method is estimated in this paper to predict total electricity consumption by 2030. The electricity expected from alternative energy sources in China by 2030 can be calculated from the existing literature. The results of this paper are analyzed under a reference scenario and a max-tech scenario. In the reference scenario, the combination of alternative energies can provide 47.71% of total electricity consumption by 2030; in the max-tech scenario, 57.96%. These results are important not only because they indicate that the government's long-term goal is reachable, but also because they imply that the natural environment of China could have a promising future.
Abstract:
Numerous models have been formulated to describe development. Generally, these start off with a state of not-yet development or nondevelopment, and then go on to contrast this with a second state: some kind of plan or blueprint for development. As a result, the process of development is equated with a series of completed stages. Like having to climb the rungs of a ladder, one moves up and up in order to become more and more developed. The associated catching-up processes are then frequently described with phase models. In contrast to such goal-directed perspectives on development, with their links to modernization theory, social development pursues an alternative approach focusing on the empowerment and autonomy of actors, and also taking account of the structural obstacles that confront them as they shape their daily lives in the sense of learning to develop their selves. This means that development is always conceived within a twin framework of self- and other-development. Social development represents a holistic approach that is non-static and process-oriented.
Abstract:
In this paper, the well-known method of frames approach to the signal decomposition problem is reformulated as a certain bilevel goal-attainment linear least squares problem. As a consequence, a numerically robust variant of the method, named approximating method of frames, is proposed on the basis of a certain minimal Euclidean norm approximating splitting pseudo-iteration-wise method.
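For reference, the classical method of frames that the paper reformulates picks, among all coefficient vectors reproducing the signal, the one of minimal Euclidean norm; that is exactly the pseudoinverse solution, sketched below on random data (the paper's bilevel goal-attainment variant is not reproduced here).

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 64, 128                     # signal length, dictionary size
D = rng.normal(size=(n, k))        # overcomplete frame (columns = atoms)
s = rng.normal(size=n)             # signal to decompose

# Method of frames: among all coefficient vectors with D @ c = s (here in
# the least-squares sense), choose the one of minimal Euclidean norm. That
# is the Moore-Penrose pseudoinverse solution.
c = np.linalg.pinv(D) @ s
print(np.linalg.norm(D @ c - s))   # ~0, since the random frame spans R^n
```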
Abstract:
The purpose of this research is to examine the relative profitability of the firm within the nursing facility industry in Texas. An examination is made of the variables expected to affect profitability and of importance to the design and implementation of regulatory policy. To facilitate this inquiry, the specific questions addressed are: (1) Do differences in ownership form affect profitability (defined as operating income before fixed costs)? (2) What impact does regional location have on profitability? (3) Do patient case-mix and access to care by Medicaid patients differ between proprietary and non-profit firms and between facilities located in urban versus rural regions, and what association exists between these variables and profitability? (4) Are economies of scale present in the nursing home industry? (5) Do nursing facilities operate in a competitive output market characterized by the inability of a single firm to exert influence over market price? Prior studies have principally employed a cost function to assess efficiency differences between classifications of nursing facilities. The inherent weakness of this approach is that it considers only technical efficiency, not both technical and price efficiency, the two components of overall economic efficiency. One firm is more technically efficient than another if it is able to produce a given quantity of output at the least possible cost. Price efficiency means that scarce resources are being directed toward their most valued use. Assuming similar prices in both input and output markets, differences in overall economic efficiency between firm classes are assessed through profitability, hence a profit function. Using the framework of the profit function, data from the 1990 Medicaid Cost Reports for Texas, and the analytic technique of ordinary least squares regression, the findings of the study indicated (1) similar profitability between nursing facilities organized as for-profit versus non-profit and located in urban versus rural regions, (2) an inverse association of both payor-mix and patient case-mix with profitability, (3) strong evidence for the presence of scale economies, and (4) the existence of a competitive market structure. The paper concludes with implications regarding reimbursement methodology and construction moratorium policies in Texas.