960 results for DYNAMIC PORTFOLIO SELECTION


Relevance:

30.00%

Publisher:

Abstract:

This paper presents a new approach to improving the effectiveness of autonomous systems that deal with dynamic environments. The basis of the approach is to find repeating patterns of behavior in the dynamic elements of the system, and then to use predictions of the repeating elements to better plan goal-directed behavior. It is a layered approach involving classifying, modeling, predicting, and exploiting. Classifying involves using observations to place the moving elements into previously defined classes. Modeling involves recording features of the behavior on a coarse-grained grid. Exploitation is achieved by integrating predictions from the model into the behavior selection module to improve the utility of the robot's actions. This is in contrast to typical approaches that use the model to select between different strategies or plays. Three methods of adaptation to the dynamic features of the environment are explored. The effectiveness of each method is determined using statistical tests over a number of repeated experiments. The work is presented in the context of predicting opponent behavior in the highly dynamic, multi-agent robot soccer domain (RoboCup).
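To make the modeling and exploitation layers concrete, here is a minimal Python sketch: observations are accumulated on a coarse-grained grid, the normalized counts serve as the prediction, and the prediction discounts the utility of candidate actions. The grid dimensions, observation format, and risk weighting are illustrative assumptions, not details from the paper.

```python
import numpy as np

class CoarseGridModel:
    """Record opponent positions on a coarse grid and predict occupancy."""

    def __init__(self, field_size=(9.0, 6.0), cells=(9, 6)):  # assumed sizes
        self.field_size = field_size
        self.counts = np.zeros(cells)

    def _cell(self, x, y):
        cx = min(int(x / self.field_size[0] * self.counts.shape[0]),
                 self.counts.shape[0] - 1)
        cy = min(int(y / self.field_size[1] * self.counts.shape[1]),
                 self.counts.shape[1] - 1)
        return cx, cy

    def observe(self, x, y):
        self.counts[self._cell(x, y)] += 1          # modeling layer

    def predict(self, x, y):
        total = self.counts.sum()                   # predicting layer
        return self.counts[self._cell(x, y)] / total if total else 0.0

def action_utility(base_utility, target_xy, model, risk_weight=0.5):
    """Exploitation layer: discount an action by predicted opponent presence."""
    return base_utility - risk_weight * model.predict(*target_xy)

model = CoarseGridModel()
for obs in [(1.0, 2.0), (1.2, 2.1), (7.5, 4.0)]:    # observed opponent positions
    model.observe(*obs)
print(action_utility(1.0, (1.1, 2.0), model))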

Relevance:

30.00%

Publisher:

Abstract:

The effects of tumour motion during radiation therapy delivery have been widely investigated. Motion effects have become increasingly important with the introduction of dynamic radiotherapy delivery modalities such as enhanced dynamic wedges (EDWs) and intensity modulated radiation therapy (IMRT), where a dynamically collimated radiation beam is delivered to the moving target, resulting in dose blurring and interplay effects that are a consequence of the combined tumour and beam motion. Prior to this work, reported studies of EDW-based interplay effects were restricted to experimental methods for assessing single-field, non-fractionated treatments. In this work, the interplay effects have been investigated for EDW treatments: single- and multiple-field treatments have been studied using experimental and Monte Carlo (MC) methods. Initially, this work experimentally studies interplay effects for single-field, non-fractionated EDW treatments, using radiation dosimetry systems placed on a sinusoidally moving platform. A number of wedge angles (60°, 45° and 15°), field sizes (20 × 20, 10 × 10 and 5 × 5 cm²), amplitudes (10–40 mm in steps of 10 mm) and periods (2 s, 3 s, 4.5 s and 6 s) of tumour motion are analysed (using gamma analysis) for parallel and perpendicular motions (where the tumour and jaw motions are either parallel or perpendicular to each other). For parallel motion it was found that both the amplitude and period of tumour motion affect the interplay; this becomes more prominent when the collimator and tumour speeds become identical. For perpendicular motion the amplitude of tumour motion is the dominant factor, whereas varying the period of tumour motion has no observable effect on the dose distribution. The wedge angle results suggest that the use of a large wedge angle generates greater dose variation for both parallel and perpendicular motions. The use of a small field size with a large tumour motion results in the loss of the wedged dose distribution for both parallel and perpendicular motion. From these single-field measurements, a motion amplitude and period were identified which show the poorest agreement between the target motion and dynamic delivery, and these are used as the "worst case" motion parameters. The experimental work is then extended to multiple-field fractionated treatments. Here a number of pre-existing, multiple-field, wedged lung plans are delivered to the radiation dosimetry systems, employing the worst case motion parameters. Moreover, a four-field EDW lung plan (using a 4D CT data set) is delivered to the IMRT quality control phantom with a dummy tumour insert over four fractions using the worst case parameters, i.e. 40 mm amplitude and 6 s period. The analysis of the film doses using gamma analysis at 3%/3 mm indicates that the interplay effects did not average out for this particular study, with a gamma pass rate of 49%. To enable Monte Carlo modelling of the problem, the DYNJAWS component module (CM) of the BEAMnrc user code, which has recently been introduced to model dynamic wedges, is validated and automated, and commissioned for 6 MV and 10 MV photon energies. It is shown that this CM can accurately model the EDWs for a number of wedge angles and field sizes. The dynamic and step-and-shoot modes of the CM are compared for their accuracy in modelling the EDW; the dynamic mode is shown to be more accurate. An automation of the DYNJAWS-specific input file has been carried out. This file specifies the probability of selection of a subfield and the respective jaw coordinates; the automation simplifies the generation of the BEAMnrc input files for DYNJAWS. The commissioned DYNJAWS model is then used to study multiple-field EDW treatments using MC methods. The 4D CT data of an IMRT phantom with the dummy tumour is used to produce a set of Monte Carlo simulation phantoms, onto which the delivery of single-field and multiple-field EDW treatments is simulated. A number of static and motion multiple-field EDW plans have been simulated. The comparison of dose volume histograms (DVHs) and gamma volume histograms (GVHs) for four-field EDW treatments (where the collimator and patient motion is in the same direction) using small (15°) and large (60°) wedge angles indicates a greater mismatch between the static and motion cases for the large wedge angle. Finally, to use gel dosimetry as a validation tool, a new technique called the "zero-scan" method is developed for reading the gel dosimeters with x-ray computed tomography (CT). It has been shown that multiple scans of a gel dosimeter (in this case 360 scans) can be used to reconstruct a zero-scan image. This zero-scan image has a similar precision to an image obtained by averaging the CT images, without the additional dose delivered by the CT scans. In this investigation the interplay effects have been studied for single- and multiple-field fractionated EDW treatments using experimental and Monte Carlo methods. For the Monte Carlo methods, the DYNJAWS component module of the BEAMnrc code has been validated, automated and further used to study the interplay for multiple-field EDW treatments. The zero-scan method, a new gel dosimetry readout technique, has been developed for reading the gel images using x-ray CT without loss of precision or accuracy.
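On one common reading of the zero-scan technique, each pixel's value is fitted as a linear function of scan number and extrapolated back to the zeroth scan, so the reconstruction averages noise across all 360 scans without retaining the scan-induced signal. The Python sketch below illustrates that reading on synthetic data; the linear-drift assumption and the array layout are mine, not the thesis's.

```python
import numpy as np

def zero_scan_image(scans):
    """Extrapolate a stack of CT scans (n_scans, H, W) back to 'scan zero'.

    Fits a per-pixel linear trend of pixel value versus scan index and
    returns the intercept, i.e. an image free of scan-induced signal.
    """
    n, h, w = scans.shape
    idx = np.arange(1, n + 1, dtype=float)
    flat = scans.reshape(n, -1)
    # polyfit on all pixels at once: one slope and intercept per pixel
    slope, intercept = np.polyfit(idx, flat, deg=1)
    return intercept.reshape(h, w)

# 360 simulated scans: a true image plus a small per-scan drift and noise
rng = np.random.default_rng(0)
true_img = rng.normal(50.0, 5.0, size=(64, 64))
drift = 0.02  # assumed signal added per scan by the scanning dose
stack = np.array([true_img + drift * k + rng.normal(0.0, 1.0, true_img.shape)
                  for k in range(1, 361)])
recon = zero_scan_image(stack)
print(np.abs(recon - true_img).mean())  # small: noise averaged, drift removed
```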

Relevance:

30.00%

Publisher:

Abstract:

The question "what causes variety in organisational routines?" is of considerable interest to organisational scholars, and one that this thesis seeks to answer. To this end, an evolutionary theory of change is advanced which holds that the dynamics of selection, adaptation and retention explain the creation of variety in organisational routines. A longitudinal, multi-level, multi-case analysis is undertaken in this thesis, using multiple data collection strategies. In each case, different types of variety were identified according to a typology, together with how selection, adaptation and retention contribute to variety in a positive or negative sense. Methodologically, the thesis contributes to our understanding of variety, as certain types of variety only become evident when examined by specific types of research design. The research also makes a theoretical contribution by explaining how selection, adaptation and retention individually and collectively contribute to variety in organisational routines. Moreover, showing that routines can be stable, diverse, adaptive and dynamic at the same time is a significant and novel theoretical contribution.

Relevance:

30.00%

Publisher:

Abstract:

Target date retirement funds have gained favor with retirement plan investors in recent years. Typically, these funds initially have a high allocation to stocks but move towards less volatile assets, such as bonds and cash, as the target retirement date approaches. Empirical research has generally found that a switch to low-risk assets prior to retirement can reduce the risk of confronting the most extreme negative outcomes. This article questions the rationale for lifecycle switching based solely on age or target retirement date, as is the prevalent practice among target date funds. The authors argue that a dynamic switching strategy, which takes achieved investment returns into consideration, will produce superior returns for most investors compared with conventional lifecycle switching. In this article, the authors put forward a dynamic lifecycle switching strategy that is conditional on the attainment of the plan member's wealth accumulation objective at every stage of switching.
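A minimal sketch of such a wealth-conditional rule is given below, assuming a hypothetical target schedule and fixed growth/defensive allocations; none of these numbers come from the article.

```python
def dynamic_switch(age, wealth, target_wealth_by_age,
                   growth_alloc=0.9, defensive_alloc=0.4):
    """Return the equity allocation for the coming period.

    Unlike conventional age-based switching, the fund de-risks only once
    the member's accumulated wealth has reached the target set for that
    stage of the glide path.
    """
    target = target_wealth_by_age.get(age)
    if target is not None and wealth >= target:
        return defensive_alloc  # objective attained: lock in gains
    return growth_alloc         # behind target: stay in growth assets

# Illustrative schedule: wealth targets (in salary multiples) per age
targets = {50: 5.0, 55: 7.0, 60: 9.0}
print(dynamic_switch(55, wealth=7.5, target_wealth_by_age=targets))  # 0.4
print(dynamic_switch(55, wealth=6.0, target_wealth_by_age=targets))  # 0.9
```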

Relevance:

30.00%

Publisher:

Abstract:

Our task is to consider the evolving perspectives around curriculum documented in the Theory Into Practice (TIP) corpus to date. The 50 years in question, 1962–2012, account for approximately half the history of mass institutionalized schooling. Over this time, the upper age of compulsory schooling has crept up, stretching the school curriculum's reach, purpose, and clientele. These years also span remarkable changes in the social fabric, challenging deep senses of the nature and shelf-life of knowledge, whose knowledge counts, what science can and cannot deliver, and the very purpose of education. The school curriculum is a key social site where these challenges have to be addressed in a very practical sense, through a design on the future implemented within the resources and politics of the present. The task's metaphor of ‘evolution’ may invoke a sense of gradual cumulative improvement, but equally connotes mutation, hybridization, extinction, survival of the fittest, and environmental pressures. Viewed in this way, curriculum theory and practice cannot be isolated and studied in laboratory conditions—there is nothing natural, neutral, or self-evident about what knowledge gets selected into the curriculum. Rather, the process of selection unfolds as a series of messy, politically contaminated, lived experiments; thus curriculum studies require field work in dynamic open systems. We subscribe to Raymond Williams' approach to social change, which he argues is not absolute and abrupt, one set of ideas neatly replacing the other. For Williams, newly emergent ideas have to compete against the dominant mindset and residual ideas “still active in the cultural process” (Williams, 1977, p. 122). This means ongoing debates. For these reasons, we join Schubert (1992) in advocating “continuous reconceptualising of the flow of experience” (p. 238) by both researchers and practitioners.

Relevance:

30.00%

Publisher:

Abstract:

Evolutionary computation is an effective tool for solving optimization problems. However, its significant computational demand has limited its real-time and on-line applications, especially in embedded systems with limited computing resources, e.g., mobile robots. Heuristic methods such as the genetic algorithm (GA) based approaches have been investigated for robot path planning in dynamic environments. However, research on the simulated annealing (SA) algorithm, another popular evolutionary computation algorithm, for dynamic path planning is still limited mainly due to its high computational demand. An enhanced SA approach, which integrates two additional mathematical operators and initial path selection heuristics into the standard SA, is developed in this work for robot path planning in dynamic environments with both static and dynamic obstacles. It improves the computing performance of the standard SA significantly while giving an optimal or near-optimal robot path solution, making its real-time and on-line applications possible. Using the classic and deterministic Dijkstra algorithm as a benchmark, comprehensive case studies are carried out to demonstrate the performance of the enhanced SA and other SA algorithms in various dynamic path planning scenarios.
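The abstract does not spell out the two additional operators or the initial-path heuristics, so the following is only a generic SA path planner in Python, sketched under the usual formulation: perturb one waypoint per iteration and accept worse candidates with a temperature-dependent probability. A dynamic environment would re-run (or re-anneal) the planner whenever the obstacle set changes.

```python
import math
import random

def path_cost(path, obstacles):
    """Path length plus a heavy penalty for waypoints inside obstacles."""
    length = sum(math.dist(a, b) for a, b in zip(path, path[1:]))
    penalty = sum(1000.0 for p in path if p in obstacles)
    return length + penalty

def sa_plan(start, goal, obstacles, iters=5000, t0=10.0, cooling=0.999):
    # initial path heuristic: waypoints interpolated on the straight line
    n = 8
    path = [start] + [(start[0] + (goal[0] - start[0]) * k // n,
                       start[1] + (goal[1] - start[1]) * k // n)
                      for k in range(1, n)] + [goal]
    cost, t = path_cost(path, obstacles), t0
    for _ in range(iters):
        cand = list(path)
        i = random.randrange(1, len(cand) - 1)        # endpoints stay fixed
        cand[i] = (cand[i][0] + random.randint(-1, 1),
                   cand[i][1] + random.randint(-1, 1))
        c = path_cost(cand, obstacles)
        # accept improvements, or worse moves with probability exp(-dC/T)
        if c < cost or random.random() < math.exp((cost - c) / t):
            path, cost = cand, c
        t *= cooling                                   # cool the temperature
    return path, cost

print(sa_plan((0, 0), (9, 9), obstacles={(5, 5)}))
```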

Relevance:

30.00%

Publisher:

Abstract:

The objective of this paper is to explore the relationship between dynamic capabilities and different types of online innovations. Building on qualitative data from the publishing industry, our analysis revealed that companies with relatively strong dynamic capabilities in all three areas (sensing, seizing and reconfiguration) seem to produce innovations that combine their existing capabilities on either the market or the technology dimension with new capabilities on the other dimension, resulting in niche-creation and revolutionary-type innovations. Correspondingly, companies with a weaker or more one-sided set of dynamic capabilities seem to produce more radical innovations requiring both new market and new technological capabilities. The study therefore provides an empirical contribution to the emerging work on dynamic capabilities through its in-depth investigation of the capabilities of the four case firms, and by mapping the patterns between each firm's portfolio of dynamic capabilities and its innovation outcomes.

Relevance:

30.00%

Publisher:

Abstract:

In the electricity market environment, load-serving entities (LSEs) inevitably face risks in purchasing electricity because of the many uncertainties involved. To maximize profits and minimize risks, LSEs need an optimal strategy for reasonably allocating the purchased electricity amount across different electricity markets such as the spot market, the bilateral contract market, and the options market. Because risks originate from uncertainties, an approach is presented that addresses the risk evaluation problem by the combined use of the lower partial moment and information entropy (LPME). The lower partial moment measures the amount and probability of the loss, whereas the information entropy represents the uncertainty of the loss. Electricity purchasing is a repeated procedure; therefore, the model presented represents a dynamic strategy. Under the chance-constrained programming framework, the developed optimization model minimizes the risk of the electricity purchasing portfolio across the different markets, subject to the constraint that the actual profit of the LSE concerned is not less than the specified target at a required confidence level. The particle swarm optimization (PSO) algorithm is then employed to solve the optimization model. Finally, a numerical example is used to illustrate the basic features of the developed model and method.
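On scenario data, the two ingredients of the LPME measure reduce to a few lines of Python. The moment order, the histogram binning, and the multiplicative combination of the two terms below are illustrative assumptions; the abstract does not give the paper's exact functional form.

```python
import numpy as np

def lower_partial_moment(profits, target, order=2):
    """E[(target - profit)^order] over scenarios, zero above the target."""
    shortfall = np.maximum(target - profits, 0.0)
    return float(np.mean(shortfall ** order))

def loss_entropy(profits, target, bins=10):
    """Shannon entropy of the shortfall distribution (uncertainty of loss)."""
    shortfall = np.maximum(target - profits, 0.0)
    hist, _ = np.histogram(shortfall, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

# simulated profit scenarios for one candidate purchase allocation
rng = np.random.default_rng(1)
profits = rng.normal(100.0, 15.0, size=10_000)
target = 95.0
# combined multiplicatively here purely for illustration
risk = lower_partial_moment(profits, target) * loss_entropy(profits, target)
print(risk)  # objective a PSO particle would seek to minimize
```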

Relevance:

30.00%

Publisher:

Abstract:

This paper reviews recent research progress on multi-layer composite structures composed of a variety of materials. Multi-layer composite systems are commonly used in metal structures and pavement systems. The layer of a composite structure designed to absorb heavy dynamic energy should have sufficient ductility to counteract the intensity of that energy; the selection of materials and the enhancement of interface bonding therefore become crucial, and both are discussed in this paper. The failure modes are also explored in conjunction with the stresses at failure, and inferred solutions are presented. The paper attempts to bring together the technical facts on multi-layer composite structures across a broad field.

Relevance:

30.00%

Publisher:

Abstract:

Security in a mobile communication environment is always a matter for concern, even after deploying many security techniques at the device, network, and application levels. End-to-end security for mobile applications can be made more robust by developing dynamic schemes at the application level which make use of existing security techniques that vary in their space, time, and attack complexities. In this paper we present a security technique selection scheme for mobile transactions, called the Transactions-Based Security Scheme (TBSS). The TBSS uses intelligence to study and analyze the security implications of transactions under execution, based on criteria such as user behavior, transaction sensitivity levels, credibility factors computed over the user's previous transactions, network vulnerability, and device characteristics. The TBSS identifies a suitable level of security techniques from a repository, which consists of symmetric and asymmetric security algorithms arranged in three complexity levels, covering various encryption/decryption techniques, digital signature schemes, and hashing techniques. From this identified level, one of the techniques is deployed randomly. The results show a considerable reduction in security cost compared to static schemes, which employ pre-fixed security techniques to secure the transaction data.
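The selection logic lends itself to a short sketch: score the transaction against the stated criteria, map the score to one of the three complexity levels, and draw a technique at random within that level. The repository contents, criteria weights, and level thresholds below are placeholders, not the TBSS's actual configuration.

```python
import random

# hypothetical repository: three complexity levels of security techniques
REPOSITORY = {
    1: ["AES-128", "SHA-256"],
    2: ["AES-256", "RSA-2048", "SHA-512"],
    3: ["RSA-4096", "ECDSA-P521"],
}

def select_technique(sensitivity, user_credibility, network_vulnerability):
    """Score the transaction, map it to a level, draw a technique at random."""
    # higher sensitivity/vulnerability and lower credibility => higher score
    score = (0.5 * sensitivity + 0.3 * network_vulnerability
             + 0.2 * (1.0 - user_credibility))
    level = 1 if score < 0.4 else 2 if score < 0.7 else 3
    return random.choice(REPOSITORY[level])  # random pick defeats profiling

print(select_technique(sensitivity=0.9, user_credibility=0.3,
                       network_vulnerability=0.6))
```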

Relevance:

30.00%

Publisher:

Abstract:

This paper describes a concept for a collision avoidance system for ships, based on model predictive control. A finite set of alternative control behaviors is generated by varying two parameters: offsets to the guidance course angle commanded to the autopilot, and changes to the propulsion command ranging from nominal speed to full reverse. Using simulated predictions of the trajectories of the obstacles and the ship, compliance with the Convention on the International Regulations for Preventing Collisions at Sea (COLREGS) and the collision hazard associated with each alternative control behavior are evaluated on a finite prediction horizon, and the optimal control behavior is selected. Robustness to sensing error, predicted obstacle behavior, and environmental conditions can be ensured by evaluating multiple scenarios for each control behavior. The method is conceptually and computationally simple, yet quite versatile, as it can account for the dynamics of the ship, the dynamics of the steering and propulsion system, forces due to wind and ocean current, and any number of obstacles. Simulations show that the method is effective and can manage complex scenarios with multiple dynamic obstacles and uncertainty associated with sensors and predictions.
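The enumerate-simulate-select loop can be sketched compactly; the straight-line ship model and the crude hazard, COLREGS, and effort terms below are stand-ins for the paper's far richer evaluation, and the specific offsets and commands are assumptions.

```python
import math
import numpy as np

COURSE_OFFSETS = np.radians([-90, -45, -15, 0, 15, 45, 90])  # assumed set
PROPULSION = [1.0, 0.5, 0.0, -1.0]  # nominal, slow, stop, full reverse

def simulate(state, offset, prop, horizon=120.0, dt=2.0):
    """Very simple straight-line surrogate for the ship dynamics."""
    x, y, psi, u = state
    psi, u = psi + offset, u * prop
    return [(x + u * t * math.cos(psi), y + u * t * math.sin(psi))
            for t in np.arange(dt, horizon, dt)]

def behavior_cost(traj, obstacle_traj, offset, prop):
    d_min = min(math.dist(p, q) for p, q in zip(traj, obstacle_traj))
    hazard = 1e4 if d_min < 50.0 else 100.0 / d_min    # collision hazard
    colregs = 50.0 if offset < 0 else 0.0              # crude: prefer starboard
    effort = 10.0 * abs(offset) + 20.0 * (1.0 - prop)  # deviation penalty
    return hazard + colregs + effort

own = (0.0, 0.0, 0.0, 8.0)  # x [m], y [m], heading [rad], speed [m/s]
obstacle = simulate((800.0, -400.0, math.pi / 2, 6.0), 0.0, 1.0)
best = min(((o, p) for o in COURSE_OFFSETS for p in PROPULSION),
           key=lambda b: behavior_cost(simulate(own, *b), obstacle, *b))
print(best)  # selected (course offset, propulsion command)
```

Robustness in the paper's sense would amount to evaluating each behavior against several sampled obstacle trajectories and keeping the worst-case or averaged cost.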

Relevance:

30.00%

Publisher:

Abstract:

In recent years, thanks to developments in information technology, large-dimensional datasets have become increasingly available. Researchers now have access to thousands of economic series, and the information contained in them can be used to create accurate forecasts and to test economic theories. To exploit this large amount of information, researchers and policymakers need an appropriate econometric model. Usual time series models, vector autoregressions for example, cannot incorporate more than a few variables. There are two ways to solve this problem: use variable selection procedures, or gather the information contained in the series into an index model. This thesis focuses on one of the most widespread index models, the dynamic factor model (the theory behind this model, based on the previous literature, is the core of the first part of this study), and its use in forecasting Finnish macroeconomic indicators (the focus of the second part of the thesis). In particular, I forecast economic activity indicators (e.g. GDP) and price indicators (e.g. the consumer price index) from three large Finnish datasets. The first dataset contains a large set of aggregated series obtained from the Statistics Finland database. The second dataset is composed of economic indicators from the Bank of Finland. The last dataset is formed of disaggregated data from Statistics Finland, which I call the micro dataset. The forecasts are computed following a two-step procedure: in the first step I estimate a set of common factors from the original dataset; the second step consists of formulating forecasting equations that include the factors extracted previously. The predictions are evaluated using the relative mean squared forecast error, where the benchmark model is a univariate autoregressive model. The results are dataset-dependent. The forecasts based on factor models are very accurate for the first dataset (the Statistics Finland one), while they are considerably worse for the Bank of Finland dataset. The forecasts derived from the micro dataset are still good, but less accurate than those obtained in the first case. This work opens up multiple research developments: the results obtained here can be replicated for longer datasets; the non-aggregated data can be represented in an even more disaggregated form (firm level); and the use of the micro data, one of the major contributions of this thesis, can be useful in the imputation of missing values and the creation of flash estimates of macroeconomic indicators (nowcasting).
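The two-step procedure has a compact generic form: extract principal-component factors from the standardized panel, then regress the h-step-ahead target on the current factors. The Python sketch below follows that textbook recipe on synthetic data; the lag structure and standardization details are assumptions rather than the thesis's exact specification.

```python
import numpy as np

def pca_factors(X, r):
    """Step 1: first r principal-component factors of the standardized panel X (T, N)."""
    Z = (X - X.mean(0)) / X.std(0)
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    return Z @ Vt[:r].T  # factor estimates, shape (T, r)

def factor_forecast(y, X, r=3, h=1):
    """Step 2: regress y_{t+h} on current factors, forecast from the last observation."""
    F = pca_factors(X, r)
    A = np.column_stack([np.ones(len(F) - h), F[:-h]])
    beta, *_ = np.linalg.lstsq(A, y[h:], rcond=None)
    return float(np.concatenate([[1.0], F[-1]]) @ beta)

# synthetic panel of 100 series driven by 3 common factors
rng = np.random.default_rng(2)
T, N = 200, 100
F_true = rng.normal(size=(T, 3))
X = F_true @ rng.normal(size=(3, N)) + rng.normal(scale=0.5, size=(T, N))
y = X[:, 0]  # pretend the first series is the target indicator
print(factor_forecast(y, X, r=3, h=1))
```

The relative mean squared forecast error used for evaluation would simply divide this model's squared out-of-sample errors by those of the univariate autoregressive benchmark.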

Relevance:

30.00%

Publisher:

Abstract:

Dynamic supramolecular systems involving a tetratopic palladium(II) acceptor and three different pyridine- and imidazole-based donors have been used for self-selection, through a synergistic effect of the morphological information and coordination ability of the ligands via specific coordination interactions. Three different cages were first synthesized by two-component self-assembly of the individual donors and the acceptor. When all four components were allowed to interact in one reaction mixture, only one of the three cages was isolated. The preferential binding affinity towards a particular partner was also established by transforming a non-preferred cage into the preferred cage through interaction with the appropriate ligand. Computational studies further supported the fact that coordination of the imidazole moiety to Pd(II) is enthalpically preferred over pyridine, which drives the selection process. Analysis of the crystal packing of both complexes indicated the presence of strong hydrogen bonds between nitrate and water molecules, and also H-bonded 3D networks of water. Both complexes exhibit promising proton conductivity (10⁻⁵ to ca. 10⁻³ S cm⁻¹) at ambient temperature under a relative humidity of circa 98%, with low activation energy.

Relevance:

30.00%

Publisher:

Abstract:

A new C⁰ composite plate finite element based on Reddy's third-order theory is used for large-deformation dynamic analysis of delaminated composite plates. The inter-laminar contact is modeled with an augmented Lagrangian approach. Numerical results show that the widely used "unconditionally stable" β-Newmark method presents instability problems in the transient simulation of delaminated composite plate structures with large deformation. To overcome this instability issue, the energy- and momentum-conserving composite implicit time integration scheme presented by Bathe and Baig is used. It is found that a proper selection of the penalty parameter is crucial in the contact simulation.
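For reference, the Bathe composite scheme splits each time step into a trapezoidal half-step followed by a three-point backward difference over the full step. The equal-sub-step kinematic relations below are the standard textbook presentation, not reproduced from this paper:

```latex
% Sub-step 1: trapezoidal rule over the first half step
\dot{\mathbf{U}}_{t+\Delta t/2} = \dot{\mathbf{U}}_{t}
  + \frac{\Delta t}{4}\left(\ddot{\mathbf{U}}_{t} + \ddot{\mathbf{U}}_{t+\Delta t/2}\right),
\qquad
\mathbf{U}_{t+\Delta t/2} = \mathbf{U}_{t}
  + \frac{\Delta t}{4}\left(\dot{\mathbf{U}}_{t} + \dot{\mathbf{U}}_{t+\Delta t/2}\right)

% Sub-step 2: three-point backward difference over the full step
\dot{\mathbf{U}}_{t+\Delta t} = \frac{1}{\Delta t}\left(\mathbf{U}_{t}
  - 4\,\mathbf{U}_{t+\Delta t/2} + 3\,\mathbf{U}_{t+\Delta t}\right),
\qquad
\ddot{\mathbf{U}}_{t+\Delta t} = \frac{1}{\Delta t}\left(\dot{\mathbf{U}}_{t}
  - 4\,\dot{\mathbf{U}}_{t+\Delta t/2} + 3\,\dot{\mathbf{U}}_{t+\Delta t}\right)
```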

Relevance:

30.00%

Publisher:

Abstract:

This article discusses problems of modelling the seasonal succession of algal species in lakes and reservoirs, and the adaptive selection of certain groups of algae in response to changes in the inputs and relative concentrations of nutrients and other environmental variables. A new generation of quantitative models is being developed which attempts to translate some important biological properties of species (survival, variation, inheritance, reproductive rates and population growth) into predictions about the survival of the fittest, where "fitness" is measured or estimated in thermodynamic terms. The concept of "exergy" and its calculation are explored, to examine maximal exergy as a measure of fitness in ecosystems and its use for calculating changes in species composition by means of structural dynamic models. These models accommodate short-term changes in parameters that affect the adaptive responses (species selection) of algae.
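A structural dynamic model of this kind can be caricatured as a parameter re-selection loop wrapped around an exergy goal function. In the Python sketch below, the toy growth model, the candidate parameter sets, and the biomass-weighted exergy proxy (Ex ≈ Σᵢ βᵢcᵢ, a common approximation) are all illustrative assumptions.

```python
import numpy as np

def exergy(conc, beta):
    """Exergy proxy: biomass concentrations weighted by information factors."""
    return float(np.dot(beta, conc))

def grow(conc, growth_rates, nutrient, dt=1.0):
    """One step of a toy logistic competition model for the algal groups."""
    return conc + dt * growth_rates * conc * (nutrient - conc.sum())

def structural_step(conc, candidate_rates, beta, nutrient):
    """Re-select the growth-rate parameters that maximize exergy,
    mimicking the adaptive species selection described above."""
    best = max(candidate_rates,
               key=lambda g: exergy(grow(conc, g, nutrient), beta))
    return grow(conc, best, nutrient), best

conc = np.array([1.0, 0.5, 0.2])   # three algal groups (assumed)
beta = np.array([3.0, 3.5, 4.0])   # exergy weights per group (assumed)
candidates = [np.array(g) for g in ([0.5, 0.3, 0.2],
                                    [0.2, 0.5, 0.4],
                                    [0.3, 0.2, 0.6])]
for season in range(5):            # nutrient input varies seasonally
    nutrient = 3.0 + 2.0 * np.sin(season)
    conc, chosen = structural_step(conc, candidates, beta, nutrient)
print(conc, chosen)
```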