929 results for "Based structure model"
Abstract:
There has been increasing interest in the use of agent-based simulation and some discussion of the relative merits of this approach compared to discrete-event simulation. There are differing views on whether agent-based simulation offers capabilities that discrete-event simulation cannot provide, or whether all agent-based applications can, at least in theory, be undertaken using a discrete-event approach. This paper presents a simple agent-based NetLogo model and corresponding discrete-event versions implemented in the widely used ARENA software. The two discrete-event versions presented use, respectively, the traditional process-flow approach normally adopted in discrete-event simulation software and an agent-based approach to the model build. In addition, a real-time spatial visual display facility is provided using a spreadsheet platform controlled by VBA code embedded within the ARENA model. Initial findings from this investigation are that discrete-event simulation can indeed be used to implement agent-based models and, with suitable integration elements such as VBA, can provide the spatial displays associated with agent-based software.
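The abstract's central claim, that an agent-based model can be reproduced with a discrete-event engine, can be illustrated with a minimal sketch (this is not the paper's NetLogo or ARENA model; the agent behaviour and names here are assumptions): each agent schedules its own "move" events on a shared future-event list, which is the same mechanism a discrete-event tool uses to advance its process flows.

import heapq
import random

# Minimal sketch: agents whose "move" events are drawn from a shared
# future-event list, mimicking how a discrete-event engine can drive
# agent-based behaviour. Illustrative only; not the paper's model.

class Agent:
    def __init__(self, name):
        self.name = name
        self.x, self.y = random.randint(0, 20), random.randint(0, 20)

    def move(self):
        # Simple random-walk rule standing in for an agent behaviour.
        self.x += random.choice([-1, 0, 1])
        self.y += random.choice([-1, 0, 1])

def run(num_agents=5, horizon=10.0):
    agents = [Agent(f"agent-{i}") for i in range(num_agents)]
    events = []  # future-event list: (time, sequence number, agent)
    seq = 0
    for a in agents:
        heapq.heappush(events, (random.uniform(0.0, 1.0), seq, a))
        seq += 1
    while events:
        t, _, a = heapq.heappop(events)
        if t > horizon:
            break
        a.move()  # state change occurs at the event time
        print(f"t={t:5.2f}  {a.name} -> ({a.x}, {a.y})")
        heapq.heappush(events, (t + random.expovariate(1.0), seq, a))
        seq += 1

if __name__ == "__main__":
    run()

A spatial display of the kind the paper drives through VBA would simply read the agent coordinates after each event.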
Abstract:
Adjoint methods have proven to be an efficient way of calculating the gradient of an objective function with respect to a shape parameter for optimisation, with a computational cost nearly independent of the number of design variables [1]. The approach in this paper links the adjoint surface sensitivities (the gradient of the objective function with respect to surface movement) with the parametric design velocities (the movement of the surface due to a CAD parameter perturbation) in order to compute the gradient of the objective function with respect to the CAD variables.
For the successful implementation of shape optimisation strategies in practical industrial cases, the choice of design variables, or the parameterisation scheme used for the model to be optimised, plays a vital role. Where the goal is to base the optimisation on a CAD model, the choices are to use a NURBS geometry generated from CAD modelling software, where the positions of the NURBS control points are the optimisation variables [2], or to use the feature-based CAD model with all of its construction history to preserve the design intent [3]. The main advantage of using the feature-based model is that the optimised model produced can be used directly for downstream applications, including manufacturing and process planning.
This paper presents an approach for optimisation based on the feature-based CAD model, which uses the CAD parameters defining the features in the model geometry as the design variables. In order to capture the CAD surface movement with respect to a change in a design variable, the "Parametric Design Velocity" is calculated, defined as the movement of the CAD model boundary in the normal direction due to a change in the parameter value.
The approach presented here for calculating the design velocities represents an advancement, in terms of capability and robustness, over that described by Robinson et al. [3]. The process can easily be integrated into most industrial optimisation workflows and is immune to the topology and labelling issues highlighted by other CAD-based optimisation processes. It considers every continuous ("real value") parameter type as an optimisation variable, and it can be adapted to work with any CAD modelling software, as long as the software has an API that provides access to the values of the parameters which control the model shape and allows the model geometry to be exported. To calculate the movement of the boundary, the methodology employs finite differences on the shape of the 3D CAD model before and after the parameter perturbation. The implementation involves calculating the geometric movement, along the normal direction, between two discrete representations of the original and perturbed geometries. Parametric design velocities can then be linked directly with the adjoint surface sensitivities to extract the gradients used in a gradient-based optimisation algorithm.
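As a concrete illustration of the chain just described, the following minimal sketch (array names and the facet-based discretisation are assumptions, not the authors' implementation) finite-differences the baseline and perturbed surface meshes along the baseline normals to obtain the design velocity, and then accumulates the product of adjoint surface sensitivity, design velocity and facet area into the objective gradient.

import numpy as np

# Sketch of linking adjoint surface sensitivities with parametric design
# velocities. Assumes matched surface discretisations before and after the
# parameter perturbation (same facet count and ordering); illustrative only.

def design_velocity(x_base, x_pert, normals, dp):
    """Normal component of boundary movement per unit change in parameter."""
    displacement = x_pert - x_base                              # (n_facets, 3)
    return np.einsum("ij,ij->i", displacement, normals) / dp    # (n_facets,)

def objective_gradient(sensitivity, velocity, areas):
    """dJ/dp approximated as a sum over facets of (dJ/dn) * V_p * area."""
    return np.sum(sensitivity * velocity * areas)

# Toy usage with random data standing in for CAD and adjoint output.
rng = np.random.default_rng(0)
n = 1000
x_base = rng.normal(size=(n, 3))
normals = rng.normal(size=(n, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
dp = 1e-3
x_pert = x_base + dp * 0.5 * normals        # stand-in for the perturbed geometry
sensitivity = rng.normal(size=n)            # adjoint dJ/dn on each facet
areas = np.abs(rng.normal(size=n))

V = design_velocity(x_base, x_pert, normals, dp)
print("dJ/dp ~", objective_gradient(sensitivity, V, areas))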
An application to a flow optimisation problem is presented, in which the power dissipation of the flow in an automotive air duct is reduced by changing the parameters of the CAD geometry created in CATIA V5. The flow sensitivities are computed with the continuous adjoint method for laminar and turbulent flow [4] and are combined with the parametric design velocities to compute the cost-function gradients. A line-search algorithm is then used to update the design variables and proceed with the optimisation process.
Abstract:
Recent years have witnessed the increased development of small, autonomous fixed-wing Unmanned Aerial Vehicles (UAVs). In order to unlock the widespread applicability of these platforms, they need to be capable of operating under a variety of environmental conditions. Due to their small size, low weight, and low speeds, they require the capability of coping with wind speeds that approach or even exceed the nominal airspeed. In this thesis, a nonlinear-geometric guidance strategy addressing this problem is presented. More broadly, a methodology is proposed for the high-level control of non-holonomic unicycle-like vehicles in the presence of strong flowfields (e.g. winds, underwater currents) which may exceed the maximum vehicle speed. The proposed strategy guarantees convergence to a safe and stable vehicle configuration with respect to the flowfield, while preserving some tracking performance with respect to the target path. As an alternative approach, an algorithm based on Model Predictive Control (MPC) is developed, and a comparison of the advantages and disadvantages of both approaches is drawn. Evaluations in simulation and a challenging real-world flight experiment in very windy conditions confirm the feasibility of the proposed guidance approach.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
One of the most disputed matters in the theory of finance has been the theory of capital structure. The seminal contributions of Modigliani and Miller (1958, 1963) gave rise to a multitude of studies and debates. Since the initial spark, the financial literature has offered two competing theories of the financing decision: the trade-off theory and the pecking order theory. The trade-off theory suggests that firms have an optimal capital structure balancing the benefits and costs of debt. The pecking order theory approaches the firm's capital structure from an information asymmetry perspective and assumes a hierarchy of financing, with firms first using internal funds, followed by debt and, as a last resort, equity. This thesis analyses the trade-off and pecking order theories and their predictions on a panel data set consisting of 78 Finnish firms listed on the OMX Helsinki stock exchange. Estimations are performed for the period 2003–2012. The data are collected from the Datastream system and consist of financial statement data. A number of determinants of capital structure are identified: firm size, profitability, firm growth opportunities, risk, asset tangibility and taxes, speed of adjustment and financial deficit. Regression analysis is used to examine the effects of these firm characteristics on capital structure, with the regression models formed on the basis of the relevant theories. The general capital structure model is estimated with a fixed effects estimator. Dynamic models also play an important role in several areas of corporate finance, but with the combination of fixed effects and lagged dependent variables the model estimation is more complicated; a dynamic partial adjustment model is therefore estimated using the Arellano and Bond (1991) first-differencing generalized method of moments, ordinary least squares and fixed effects estimators. The results for Finnish listed firms show support for the predictions regarding profitability, firm size and non-debt tax shields. No conclusive support for the pecking order theory is found; however, the effect of the pecking order cannot be fully ignored, and it is concluded that, instead of being substitutes, the trade-off and pecking order theories appear to complement each other. For the partial adjustment model, the results show that Finnish listed firms adjust towards their target capital structure at a speed of 29% a year using the book debt ratio.
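The dynamic specification referred to above is the standard partial adjustment model; a sketch of its usual form follows (the exact set of regressors behind the target ratio is an assumption and is not reproduced from the thesis):

D_{it} - D_{i,t-1} = \lambda \left( D^{*}_{it} - D_{i,t-1} \right) + \varepsilon_{it},
\qquad D^{*}_{it} = \beta' x_{it}

where D_{it} is the book debt ratio of firm i in year t, D^{*}_{it} is the target ratio implied by the firm characteristics x_{it}, and \lambda is the speed of adjustment, estimated above at roughly 0.29 per year.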
Abstract:
Persistent daily congestion has been increasing in recent years, particularly along major corridors during selected periods in the mornings and evenings. Certain segments of these roadways are often at or near capacity. However, a conventional, predefined control strategy cannot accommodate demand that changes over time, making it necessary to implement the dynamic lane management strategies discussed in this thesis. These strategies include hard shoulder running, reversible HOV lanes, dynamic tolls and variable speed limits. A mesoscopic agent-based DTA model is used to simulate the different strategies and scenarios. The analyses show that all strategies mitigate congestion in terms of average speed and average density. The largest improvements are found for hard shoulder running and reversible HOV lanes, while the other two strategies provide more stable traffic. In terms of average speed and travel time, hard shoulder running is the most suitable strategy for the congested I-270 corridor to help relieve the traffic pressure.
Abstract:
Purpose. In the present study we examined the relationship between solvent uptake into a model membrane (silicone) and the physical properties of the solvents (e.g., solubility parameter, melting point, molecular weight), and its potential predictability. We then assessed the subsequent topical penetration and retention kinetics of hydrocortisone from various solvents to define whether modifications to solute diffusivity or to partitioning were dominant in increasing permeability through solvent-modified membranes. Methods. Membrane sorption of solvents was determined from weight differences following immersion in individual solvents, corrected for differences in density. Permeability and retention kinetics of ³H-hydrocortisone, applied as saturated solutions in the various solvents, were determined over 48 h in horizontal Franz-type glass diffusion cells. Results. Solvent sorption into the membrane could be related to differences in solubility parameters, molecular weight and hydrogen bonding (r² = 0.76). The actual and predicted volume of solvent sorbed into the membrane was also found to be linearly related to the log of hydrocortisone flux, with changes in both diffusivity and partitioning of hydrocortisone observed for the different solvent vehicles. Conclusions. A simple structure-based predictive model can be applied to the sorption of solvents into silicone membranes. Changes in solute diffusivity and partitioning appeared to contribute to the increased hydrocortisone flux observed with the various solvent vehicles. The application of this predictive model to the more complex skin membrane remains to be determined.
Abstract:
This trial compared the cost of an integrated home-based care model with traditional inpatient care for acute chronic obstructive pulmonary disease (COPD). Twenty-five patients with acute COPD were randomised to either home or hospital management following a request for hospital admission. The costs per separation in the acute-care-at-home group ($745, 95% CI $595-$895, n = 13) were significantly lower (p < 0.01) than in the hospital group ($2543, 95% CI $1766-$3321, n = 12). There was an improvement in lung function in the hospital-managed group at the Outpatient Department review, decreased anxiety in the Emergency Department in the home-managed group, and equal patient satisfaction with care delivery. Acute-care-at-home schemes can substitute for usual hospital care for some patients without adverse effects, and potentially release resources. A funding model that allows adequate resource delivery to the community will be needed if there is a move to devolve acute care to community providers.
Abstract:
The use of a fitted parameter watershed model to address water quantity and quality management issues requires that it be calibrated under a wide range of hydrologic conditions. However, rarely does model calibration result in a unique parameter set. Parameter nonuniqueness can lead to predictive nonuniqueness. The extent of model predictive uncertainty should be investigated if management decisions are to be based on model projections. Using models built for four neighboring watersheds in the Neuse River Basin of North Carolina, the application of the automated parameter optimization software PEST in conjunction with the Hydrologic Simulation Program Fortran (HSPF) is demonstrated. Parameter nonuniqueness is illustrated, and a method is presented for calculating many different sets of parameters, all of which acceptably calibrate a watershed model. A regularization methodology is discussed in which models for similar watersheds can be calibrated simultaneously. Using this method, parameter differences between watershed models can be minimized while maintaining fit between model outputs and field observations. In recognition of the fact that parameter nonuniqueness and predictive uncertainty are inherent to the modeling process, PEST's nonlinear predictive analysis functionality is then used to explore the extent of model predictive uncertainty.
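A minimal sketch of the regularisation idea described above follows (the composite objective and its weights are illustrative assumptions, not the PEST/HSPF configuration used in the study): each watershed model is fitted to its own observations while differences between corresponding parameters of the neighbouring models are penalised.

import numpy as np

# Tikhonov-style composite objective for calibrating several neighbouring
# watershed models simultaneously: fit each model to its observations while
# penalising inter-watershed parameter differences. Illustrative stand-in,
# not PEST's implementation.

def composite_objective(params, simulate, observations, mu=1.0):
    """params: (n_watersheds, n_params); simulate(p) returns a simulated series."""
    misfit = sum(
        np.sum((simulate(p) - obs) ** 2)
        for p, obs in zip(params, observations)
    )
    # Regularisation term: keep corresponding parameters of all watersheds similar.
    regularisation = np.sum((params - params.mean(axis=0)) ** 2)
    return misfit + mu * regularisation

# Toy usage: two "watersheds" with a linear stand-in for the rainfall-runoff response.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 50)
simulate = lambda p: p[0] * t + p[1]
observations = [1.2 * t + 0.3 + rng.normal(0.0, 0.05, t.size),
                1.1 * t + 0.4 + rng.normal(0.0, 0.05, t.size)]
params = np.array([[1.2, 0.3], [1.1, 0.4]])
print("composite objective:", composite_objective(params, simulate, observations, mu=0.5))

Increasing the weight mu pulls the two parameter sets together at the cost of fit, which is exactly the trade-off the regularised simultaneous calibration exploits.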
Abstract:
Master's dissertation, Economics and Business Sciences, 13 December 2012, Universidade dos Açores.
Abstract:
The definition and programming of distributed applications has become a major research issue due to the increasing availability of (large-scale) distributed platforms and the requirements posed by economic globalisation. However, such a task requires a huge effort due to the complexity of distributed environments: large numbers of users may communicate and share information across different authority domains; moreover, the "execution environment" or "computations" are dynamic, since the number of users and the computational infrastructure change over time. Grid environments, in particular, promise to be an answer to such complexity by providing high-performance execution support to large numbers of users and resource sharing across different organizations. Nevertheless, programming in Grid environments is still a difficult task: there is a lack of high-level programming paradigms and support tools that may guide the application developer and allow reusability of state-of-the-art solutions. The main goal of the work presented in this thesis is to contribute to the simplification of the development cycle of applications for Grid environments by bringing structure and flexibility to three stages of that cycle through a common model. The stages are: the design phase, the execution phase, and the reconfiguration phase. The common model is based on the manipulation of patterns through pattern operators, and on the division of both patterns and operators into two categories, namely structural and behavioural. Moreover, both structural and behavioural patterns are first-class entities at each of the aforesaid stages. At the design phase, patterns can be manipulated like other first-class entities such as components. This allows a more structured way to build applications by reusing and composing state-of-the-art patterns. At the execution phase, patterns are units of execution control: it is possible, for example, to start, stop or resume the execution of a pattern as a single entity. At the reconfiguration phase, patterns can also be manipulated as single entities, with the additional advantage that it is possible to perform a structural reconfiguration while keeping some of the behavioural constraints, and vice versa. For example, it is possible to replace a behavioural pattern, which was applied to some structural pattern, with another behavioural pattern. Besides the proposal of the methodology for distributed application development sketched above, this thesis defines a relevant set of pattern operators. The methodology and the expressivity of the pattern operators were assessed through the development of several representative distributed applications. To support this validation, a prototype was designed and implemented, encompassing some relevant patterns and a significant part of the pattern operators defined. This prototype was based on the Triana environment; Triana supports the development and deployment of distributed applications in the Grid through a dataflow-based programming model. Additionally, this thesis presents the analysis of a mapping of some operators for execution control onto the Distributed Resource Management Application API (DRMAA). This assessment confirmed the suitability of the proposed model, as well as the generality and flexibility of the defined pattern operators.
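A minimal sketch of the structural/behavioural split described above (class and operator names are invented for illustration and are not the thesis' or Triana's API): a structural pattern fixes a component topology, a behavioural pattern is attached to it by an operator, and the resulting pattern is then started or reconfigured as a single entity.

# Illustrative sketch of structural vs. behavioural patterns manipulated by
# operators, and of a pattern as a single unit of execution control.
# Names are invented; this is not the thesis' prototype API.

class StructuralPattern:
    """Fixes a topology of components (e.g. a pipeline)."""
    def __init__(self, name, components):
        self.name, self.components = name, list(components)
        self.behaviour = None

def apply_behaviour(structural, behaviour):
    """Behavioural pattern operator: attach an execution behaviour."""
    structural.behaviour = behaviour
    return structural

def replace_behaviour(structural, new_behaviour):
    """Reconfiguration: swap the behavioural pattern, keep the structure."""
    structural.behaviour = new_behaviour
    return structural

def start(pattern):
    """Execution control over the pattern as a single entity."""
    print(f"starting {pattern.name} [{pattern.behaviour}] over {pattern.components}")

pipeline = StructuralPattern("render-pipeline", ["read", "filter", "write"])
apply_behaviour(pipeline, "streaming")
start(pipeline)
replace_behaviour(pipeline, "batch")   # structural pattern left unchanged
start(pipeline)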
Abstract:
The IEEE 802.15.4 protocol has the ability to support time-sensitive Wireless Sensor Network (WSN) applications through its Guaranteed Time Slot (GTS) Medium Access Control mechanism. Several analytical and simulation models of the IEEE 802.15.4 protocol have recently been proposed. Nevertheless, currently available simulation models for this protocol are both inaccurate and incomplete; in particular, they do not support the GTS mechanism. In this paper, we propose an accurate OPNET simulation model, with a focus on the implementation of the GTS mechanism. The motivation for this work is to validate the previously proposed Network Calculus-based analytical model of the GTS mechanism and to compare the performance evaluation of the protocol as given by the two alternative approaches. We therefore contribute an accurate OPNET model for the IEEE 802.15.4 protocol. Additionally, and probably more importantly, based on the simulation model we propose a novel methodology to tune the protocol parameters such that a better performance of the protocol can be guaranteed, both in terms of maximizing the throughput of the allocated GTS and minimizing frame delay.
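To make the tuning problem concrete, a small sketch of the GTS timing arithmetic at the 2.4 GHz PHY follows (the chosen beacon order, superframe order and GTS length are illustrative assumptions, not the OPNET model's settings): the superframe and slot durations follow directly from the beacon order/superframe order pair, and the raw capacity of an allocated GTS can then be bounded.

# IEEE 802.15.4 superframe and GTS timing at the 2.4 GHz PHY.
# Constants follow the standard; the example parameter values below are
# illustrative assumptions, not the OPNET model's tuning.

SYMBOL_RATE = 62_500          # symbols/s (2.4 GHz PHY, 250 kbit/s)
A_BASE_SUPERFRAME = 960       # aBaseSuperframeDuration, in symbols
NUM_SLOTS = 16                # aNumSuperframeSlots

def superframe_timing(beacon_order, superframe_order, gts_slots):
    assert 0 <= superframe_order <= beacon_order <= 14
    sd = A_BASE_SUPERFRAME * 2 ** superframe_order / SYMBOL_RATE   # active part, s
    bi = A_BASE_SUPERFRAME * 2 ** beacon_order / SYMBOL_RATE       # beacon interval, s
    slot = sd / NUM_SLOTS                                          # one time slot, s
    gts_time = gts_slots * slot                                    # allocated GTS, s
    # Raw upper bound on GTS throughput: the share of each beacon interval
    # usable by the GTS at the nominal 250 kbit/s PHY rate (no overheads).
    raw_bps = 250_000 * gts_time / bi
    return sd, bi, slot, raw_bps

sd, bi, slot, raw = superframe_timing(beacon_order=6, superframe_order=4, gts_slots=2)
print(f"SD={sd*1e3:.1f} ms  BI={bi*1e3:.1f} ms  slot={slot*1e3:.2f} ms  "
      f"GTS raw bound ~ {raw/1e3:.1f} kbit/s")

Frame, acknowledgement and inter-frame-spacing overheads reduce this bound, which is where the kind of parameter tuning proposed in the paper comes in.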
Abstract:
Master's degree in Civil Engineering – Structures specialisation
Abstract:
It is imperative to accept that failures can and will occur, even in meticulously designed distributed systems, and to design proper measures to counter those failures. Passive replication minimises resource consumption by only activating redundant replicas in case of failure, as providing and applying state updates is typically less resource-demanding than requesting execution. However, most existing solutions for passive fault tolerance are designed and configured at design time, explicitly and statically identifying the most critical components and their number of replicas, and thus lack the flexibility needed to handle the runtime dynamics of distributed component-based embedded systems. This paper proposes a cost-effective adaptive fault tolerance solution with a significantly lower overhead than a strict active redundancy-based approach, achieving high error coverage with the minimum amount of redundancy. The activation of passive replicas is coordinated through a feedback-based coordination model that reduces the complexity of the interactions needed among components until a new collective global service solution is determined, improving the overall maintainability and robustness of the system.
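A minimal sketch of the passive-replication idea described above (component names and the failure-detection rule are assumptions for illustration; this is not the paper's feedback-based coordination model): the primary pushes state updates to passive replicas after each step, and when a failure is detected one replica is promoted and resumes from the last checkpointed state.

# Illustrative sketch of passive replication: the primary checkpoints its state
# to passive replicas; on a detected failure one replica is promoted and
# continues from the last received state. Not the paper's coordination model.

class Replica:
    def __init__(self, name):
        self.name, self.state, self.active = name, None, False

    def apply_update(self, state):
        self.state = state            # cheap: store the state, do not re-execute

    def promote(self):
        self.active = True
        print(f"{self.name} promoted, resuming from state {self.state}")

class Primary:
    def __init__(self, replicas):
        self.state, self.replicas, self.alive = 0, replicas, True

    def step(self):
        self.state += 1               # perform the actual work
        for r in self.replicas:       # push the checkpoint to passive replicas
            r.apply_update(self.state)

replicas = [Replica("backup-1"), Replica("backup-2")]
primary = Primary(replicas)
for _ in range(3):
    primary.step()
primary.alive = False                 # simulated crash (heartbeat missed)
if not primary.alive:
    replicas[0].promote()             # activation of a single passive replica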
Abstract:
Context: Individuals, like institutions, are not immune to incentives. However, while institutional incentive models have undergone several evolutions, the same has not happened at the level of the professionals. This situation is not compatible with the complexity of human resource management and should be addressed in order to foster the alignment between institutional interests and those of the professionals themselves. Objectives: To study the attribution of incentives to health professionals in the context of organisations with vertical integration of care. Methodology: The methodology adopted comprised three phases. The first consisted of a systematic literature review on: (1) the construction of incentive models for professionals in different health systems and provider types; and (2) the identification of measures of proven cost-effectiveness. Based on this evidence, together with official documentation on the ULS financing model, a baseline incentive model was built in a second phase using Microsoft Excel. Finally, in a third stage, the baseline model built in the previous stage was adapted on the basis of information obtained through a retrospective in loco study at the ULS do Baixo Alentejo (ULSBA). In addition, the impact was estimated, from the perspective of the ULS and of the professionals, for the base scenario and for several sensitivity analyses. Results: With respect to its structure, the baseline incentive model for professionals comprises 44 indicators distributed across five analysis dimensions, of which 28 (63.6%) are process indicators and 14 (31.8%) are outcome indicators. Regarding the dimensions analysed, indicators predominate in the efficiency and quality-of-care dimensions, which total 35 (i.e. 79.5% of the 44 indicators). Regarding the recipient, 14 indicators (31.8%) take a holistic view of the ULS, 17 (38.6%) relate exclusively to primary care, and the remaining 13 (29.5%) to hospital care. About 85% of ULSBA's current incentives derive from the salary payment unit, followed by the payment of supplements (12%). Nevertheless, the retrospective study of the ULSBA confirmed the expected scenario: the absence of a homogeneous incentive model applied across the whole ULS, revealing important asymmetries between different provider units and/or health professionals. Importantly, there is a shortage of capitation-based incentives (contrary to the incentive model of the ULSBA itself) or of incentives tied to performance indices. Taking into account the incentive model designed and adapted to the reality of the ULSBA, together with the implementation plan, the incentive model is estimated to generate: (1) savings from the perspective of the ULS (between 2.5% and 3.5% of the overall ULSBA budget); and (2) an increase in remuneration for the professionals (between 5% and 15% of base salary). This apparent contradiction results from the focus on measures of proven cost-effectiveness and from the alignment between the proposed model and the one governing the financing of the unit itself, in a clear mutual-gains strategy. The sensitivity analyses performed confirm the soundness and robustness of the model against significant variations in key parameters.