983 results for Quasi-analytical algorithms
Resumo:
The effectiveness of the Anisotropic Analytical Algorithm (AAA) implemented in the Eclipse treatment planning system (TPS) was evaluated using the Radiological Physics Center anthropomorphic lung phantom with both flattened and flattening-filter-free high-energy beams. Radiation treatment plans were developed following the Radiation Therapy Oncology Group and Radiological Physics Center guidelines for lung treatment using Stereotactic Body Radiation Therapy (SBRT). The tumor was covered such that at least 95% of the Planning Target Volume (PTV) received 100% of the prescribed dose, while ensuring that normal-tissue constraints were also met. Calculated doses were exported from the Eclipse TPS and compared with experimental data measured using thermoluminescent detectors (TLDs) and radiochromic films placed inside the phantom. The results demonstrate that the AAA superposition-convolution algorithm is able to calculate SBRT treatment plans with all clinically used photon beams in the range from 6 MV to 18 MV. The measured dose distribution showed good agreement with the calculated distribution using clinically acceptable criteria of ±5% dose or 3 mm distance to agreement. These results show that in a heterogeneous environment a 3D pencil-beam superposition-convolution algorithm with Monte Carlo pre-calculated scatter kernels, such as AAA, can reliably calculate dose, accounting for the increased lateral scattering due to the loss of electronic equilibrium in low-density media. The data for high-energy plans (15 MV and 18 MV) showed very good tumor coverage, in contrast to findings by other investigators for less sophisticated dose calculation algorithms, which demonstrated lower-than-expected tumor doses and generally worse tumor coverage for high-energy plans compared to 6 MV plans.
This demonstrates that the modern superposition-convolution AAA algorithm is a significant improvement over previous algorithms and is able to calculate doses accurately for SBRT treatment plans in the highly heterogeneous environment of the thorax for both lower (≤12 MV) and higher (greater than 12 MV) beam energies.
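The ±5% dose / 3 mm distance-to-agreement criterion quoted above can be sketched as a simple composite pass/fail check. The profiles, grid spacing, and helper name below are illustrative assumptions for a 1D profile, not the study's actual evaluation code:

```python
import numpy as np

def passes_dose_or_dta(measured, calculated, positions_mm,
                       dose_tol=0.05, dta_mm=3.0):
    """Composite acceptance test: a measured point passes if its dose agrees
    with the calculation within dose_tol (fractional), OR if the calculated
    profile reaches the measured value somewhere within dta_mm (distance to
    agreement)."""
    passed = []
    for dm, x in zip(measured, positions_mm):
        dc = np.interp(x, positions_mm, calculated)
        dose_ok = abs(dm - dc) <= dose_tol * dc
        near = np.abs(positions_mm - x) <= dta_mm
        window = calculated[near]
        dta_ok = window.min() <= dm <= window.max()
        passed.append(dose_ok or dta_ok)
    return np.array(passed)

# Illustrative profiles: a steep dose gradient shifted by 2 mm fails the dose
# criterion in the penumbra but still passes via 3 mm distance to agreement.
pos = np.arange(0.0, 21.0)                # mm
calc = np.clip((pos - 5.0) / 10.0, 0, 1)  # ramp from 0 to 1
meas = np.clip((pos - 7.0) / 10.0, 0, 1)  # same ramp shifted 2 mm
```

In clinical practice a 2D/3D gamma analysis with dose interpolation is used; this sketch only shows why a spatial shift smaller than the DTA tolerance is acceptable in steep-gradient regions.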
Resumo:
An analytical method for evaluating the uncertainty of the performance of active antenna arrays over the whole spatial spectrum is presented. Since array processing algorithms based on spatial reference are widely used to track moving targets, it is essential to be aware of the impact of the uncertainty sources on the antenna response. Furthermore, the estimation of the direction of arrival (DOA) depends on the array uncertainty. The aim of the uncertainty analysis is to provide an exhaustive characterization of the behavior of the active antenna array associated with its main uncertainty sources. The result of this analysis helps to select the proper calibration technique to be implemented. An illustrative example for a triangular antenna array used for satellite tracking is presented, showing the suitability of the proposed method to carry out an efficient characterization of an active antenna array.
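How an element-level uncertainty source perturbs the array response can be illustrated with a small sketch. The abstract concerns a triangular array; a uniform linear array, the element count, and the deterministic phase-error pattern below are simplifying assumptions:

```python
import numpy as np

def steering_vector(theta_deg, n_elem=8, d_over_lambda=0.5):
    """Ideal steering vector of a uniform linear array (the paper treats a
    triangular array; a ULA keeps the sketch short)."""
    n = np.arange(n_elem)
    phase = 2j * np.pi * d_over_lambda * n * np.sin(np.radians(theta_deg))
    return np.exp(phase)

def beamformer_response(theta_look_deg, theta_deg, phase_err_rad):
    """Response of a conventional beamformer steered to theta_look when each
    element carries a residual phase error (one uncertainty source)."""
    w = steering_vector(theta_look_deg) / 8
    a = steering_vector(theta_deg) * np.exp(1j * phase_err_rad)
    return np.vdot(w, a)

# Without errors the response at the look angle is exactly 1; a deterministic
# +/-0.1 rad alternating phase ripple reduces it to cos(0.1) ~= 0.995.
err = 0.1 * np.array([1, -1] * 4)
r_ideal = beamformer_response(30.0, 30.0, np.zeros(8))
r_pert = beamformer_response(30.0, 30.0, err)
```

Sweeping `theta_deg` with and without errors gives exactly the kind of whole-spectrum characterization the abstract describes, and shows which error levels warrant calibration.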
Resumo:
Orthogonal frequency division multiplexing (OFDM) is becoming a fundamental technology in future-generation wireless communications. Call admission control is an effective mechanism to guarantee resilient, efficient, quality-of-service (QoS) services in wireless mobile networks. In this paper, we present several call admission control algorithms for OFDM-based wireless multiservice networks. Call connection requests are differentiated into narrow-band calls and wide-band calls. For either class of calls, the traffic process is characterized as a batch arrival process, since each call may request multiple subcarriers to satisfy its QoS requirement. The batch size is a random variable following a probability mass function (PMF) with a realistic maximum value. In addition, the service times for wide-band and narrow-band calls differ. Based on this model, we perform a tele-traffic queueing analysis for OFDM-based wireless multiservice networks, and derive formulae for the key performance metrics: call blocking probability and bandwidth utilization. Numerical investigations demonstrate the interaction between key parameters and performance metrics, and the performance tradeoff among the different call admission control algorithms is discussed. Moreover, the analytical model has been validated by simulation. The methodology as well as the results provide an efficient tool for planning next-generation OFDM-based broadband wireless access systems.
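The per-class blocking probabilities described above can be illustrated with the classical Kaufman–Roberts recursion for a complete-sharing multi-rate loss link. This is a simplification of the paper's model (fixed per-class subcarrier demands instead of a batch-size PMF, and no class-dependent admission policy), shown only to make the narrow-band/wide-band blocking asymmetry concrete:

```python
def kaufman_roberts(loads, demands, capacity):
    """Occupancy distribution and per-class blocking for a complete-sharing
    multi-rate loss link. loads[k] is the offered traffic (Erlangs) of class
    k; demands[k] is the number of subcarriers it seizes."""
    q = [0.0] * (capacity + 1)
    q[0] = 1.0
    for n in range(1, capacity + 1):
        q[n] = sum(a * b * q[n - b]
                   for a, b in zip(loads, demands) if n >= b) / n
    total = sum(q)
    q = [x / total for x in q]
    # Class k is blocked when fewer than demands[k] subcarriers remain free.
    blocking = [sum(q[capacity - b + 1:]) for b in demands]
    return q, blocking

# One narrow-band class (1 subcarrier) and one wide-band class (4 subcarriers)
# sharing a 32-subcarrier OFDM link; illustrative loads.
_, pb = kaufman_roberts(loads=[10.0, 2.0], demands=[1, 4], capacity=32)
```

With a single class of unit demand the recursion reduces to the Erlang B formula, which makes a convenient sanity check; wide-band calls always see higher blocking under complete sharing, which is the tradeoff the admission control algorithms in the paper manage.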
Resumo:
In this paper, we consider analytical and numerical solutions to the Dirichlet boundary-value problem for the biharmonic partial differential equation on a disc of finite radius in the plane. The physical interpretation of these solutions is that of the harmonic oscillations of a thin, clamped plate. For the linear, fourth-order, biharmonic partial differential equation in the plane, it is well known that the solution method of separation in polar coordinates is not possible, in general. However, in this paper, for circular domains in the plane, it is shown that a method, here called quasi-separation of variables, does lead to solutions of the partial differential equation. These solutions are products of solutions of two ordinary linear differential equations: a fourth-order radial equation and a second-order angular differential equation. As is to be expected, without complete separation of the polar variables, there is some restriction on the range of these solutions in comparison with the corresponding separated solutions of the second-order harmonic differential equation in the plane. Notwithstanding these restrictions, the quasi-separation method leads to solutions of the Dirichlet boundary-value problem on a disc with centre at the origin, with boundary conditions determined by the solution and its inward-drawn normal derivative taking the value 0 on the edge of the disc. One significant feature of these biharmonic boundary-value problems, in general, follows from the form of the biharmonic differential expression when represented in polar coordinates. In this form, the differential expression has a singularity at the origin, in the radial variable. This singularity translates to a singularity at the origin of the fourth-order radial separated equation; this singularity necessitates the application of a third boundary condition in order to determine a self-adjoint solution to the Dirichlet boundary-value problem.
The penultimate section of the paper reports on numerical solutions to the Dirichlet boundary-value problem; these results are also presented graphically. Two specific cases are studied in detail and numerical values of the eigenvalues are compared with the results obtained in earlier studies.
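The structure described above (a fourth-order radial factor paired with a second-order angular factor, singular at the origin) can be made concrete with a standard reconstruction; the display below is the textbook form of the clamped-plate eigenproblem in polar coordinates, not taken verbatim from the paper:

```latex
Writing the plate-oscillation eigenproblem as $\Delta^2 u = \Lambda u$ with
\[
\Delta = \frac{\partial^2}{\partial r^2}
       + \frac{1}{r}\frac{\partial}{\partial r}
       + \frac{1}{r^2}\frac{\partial^2}{\partial \theta^2},
\]
the product ansatz $u(r,\theta) = R(r)\,(A\cos n\theta + B\sin n\theta)$
leaves the angular factor satisfying $\Theta'' = -n^2\Theta$, while the radial
factor satisfies the fourth-order equation
\[
R'''' + \frac{2}{r}R''' - \frac{1+2n^2}{r^2}R''
      + \frac{1+2n^2}{r^3}R' + \frac{n^4 - 4n^2}{r^4}R = \Lambda R ,
\]
which is singular at $r = 0$. The clamped (Dirichlet) edge conditions give
$R(a) = R'(a) = 0$ on a disc of radius $a$, and, as the abstract notes, a
third condition at the singular origin is needed to select the self-adjoint
solution.
```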
Resumo:
The operation of technical processes requires increasingly advanced supervision and fault diagnostics to improve reliability and safety. This paper gives an introduction to the field of fault detection and diagnostics and provides a short classification of methods. The growing complexity and functional importance of inertial navigation systems (INS) lead to high losses when equipment fails. The paper is devoted to the development of an INS diagnostics system that allows the cause of a malfunction to be identified. The practical realization of this system is a software package performing a set of multidimensional information analyses. The project consists of three parts: an analysis subsystem, a data collection subsystem, and a universal interface for an open-architecture realization. To improve diagnostics on small analysis samples, new approaches will be applied based on voting among pattern recognition algorithms and on taking into account correlations between target and input parameters. The system is currently at the development stage.
Resumo:
We present quasi-Monte Carlo analogs of Monte Carlo methods for some linear algebra problems: solving systems of linear equations, computing extreme eigenvalues, and matrix inversion. Reformulating the problems as solving integral equations with special kernels and domains permits us to analyze the quasi-Monte Carlo methods with bounds from numerical integration. Standard Monte Carlo methods for integration provide a convergence rate of O(N^(−1/2)) using N samples. Quasi-Monte Carlo methods use quasirandom sequences, with a resulting convergence rate for numerical integration as good as O((log N)^k N^(−1)). We have shown theoretically and through numerical tests that the use of quasirandom sequences improves both the magnitude of the error and the convergence rate of the considered Monte Carlo methods. We also analyze the complexity of the considered quasi-Monte Carlo algorithms and compare them to the complexity of the analogous Monte Carlo and deterministic algorithms.
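The Monte Carlo versus quasi-Monte Carlo contrast above is easy to demonstrate on plain numerical integration (rather than the paper's linear-algebra solvers). The sketch below uses a 2D Halton sequence, built from the van der Corput sequence in bases 2 and 3, against pseudo-random sampling; the test integrand and sample size are illustrative:

```python
import random

def van_der_corput(k, base):
    """k-th element of the van der Corput sequence in the given base
    (digit reversal of k about the radix point)."""
    x, denom = 0.0, 1.0
    while k:
        k, rem = divmod(k, base)
        denom *= base
        x += rem / denom
    return x

def halton_estimate(f, n):
    """Quasi-Monte Carlo estimate of the integral of f over the unit square
    using the 2D Halton sequence (bases 2 and 3)."""
    return sum(f(van_der_corput(k, 2), van_der_corput(k, 3))
               for k in range(1, n + 1)) / n

def mc_estimate(f, n, rng):
    """Plain Monte Carlo estimate with pseudo-random points."""
    return sum(f(rng.random(), rng.random()) for _ in range(n)) / n

f = lambda x, y: x * y          # exact integral over [0,1]^2 is 1/4
qmc = halton_estimate(f, 1024)
mc = mc_estimate(f, 1024, random.Random(0))
```

At N = 1024 the Monte Carlo error is of order N^(−1/2) (here roughly 10^(−2)), while the low-discrepancy estimate is typically an order of magnitude closer to 1/4, consistent with the O((log N)^k N^(−1)) rate cited above.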
Resumo:
We investigate the theoretical and numerical computation of rare transitions in simple geophysical turbulent models. We consider the barotropic quasi-geostrophic and two-dimensional Navier–Stokes equations in regimes where bistability between two coexisting large-scale attractors exists. By means of large deviations and instanton theory, using an Onsager–Machlup path integral formalism for the transition probability, we show how one can directly compute the most probable transition path between two coexisting attractors, analytically in an equilibrium (Langevin) framework and numerically otherwise. We adapt a class of numerical optimization algorithms known as minimum action methods to simple geophysical turbulent models. We show that by numerically minimizing an appropriate action functional in a large deviation limit, one can predict the most likely transition path for a rare transition between two states. By considering examples where theoretical predictions can be made, we show that the minimum action method successfully predicts the most likely transition path. Finally, we discuss the application and extension of such numerical optimization schemes to the computation of rare transitions observed in direct numerical simulations and experiments, and to other, more complex, turbulent systems.
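The core of a minimum action method can be sketched on a one-dimensional gradient (Langevin) toy system instead of a turbulent model: discretize the Freidlin–Wentzell action along a path between the two attractors and descend its gradient with the endpoints pinned. The double-well drift, discretization, and plain gradient descent below are illustrative choices, far simpler than the optimizers used in the paper:

```python
import numpy as np

def action(x, dt, b):
    """Discretized Freidlin-Wentzell action 0.5 * sum (xdot - b(x))^2 * dt."""
    e = (x[1:] - x[:-1]) / dt - b(x[:-1])
    return 0.5 * dt * np.sum(e ** 2)

def minimize_action(x, dt, b, db, lr=0.01, iters=2000):
    """Plain gradient descent on the interior path points, endpoints fixed.
    The gradient of the action w.r.t. x_j is e_{j-1} - e_j - dt*e_j*b'(x_j)."""
    x = x.copy()
    for _ in range(iters):
        e = (x[1:] - x[:-1]) / dt - b(x[:-1])
        grad = np.zeros_like(x)
        grad[1:] += e
        grad[:-1] -= e + dt * e * db(x[:-1])
        x[1:-1] -= lr * grad[1:-1]
    return x

b = lambda x: x - x ** 3        # drift of the double well V(x) = (x^2-1)^2/4
db = lambda x: 1 - 3 * x ** 2
n, T = 100, 10.0
dt = T / n
x0 = np.linspace(-1.0, 1.0, n + 1)   # initial guess: straight path between wells
s0 = action(x0, dt, b)
x_star = minimize_action(x0, dt, b, db)
s_star = action(x_star, dt, b)
```

For this equilibrium system the minimizer approaches the instanton (the time-reversed relaxation path), whose action tends to twice the barrier height as the transition time grows, matching the analytical prediction the abstract refers to.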
Resumo:
Intertemporal choice is one of the crucial questions in economic modeling; it describes decisions that require trade-offs among outcomes occurring at different points in time. In economic modeling, exponential discounting is the most widespread, even though it has weak explanatory power in empirical studies. Generalized hyperbolic discounting, which according to psychologists has the strongest descriptive validity, is in turn very complex and hard to use in economic models. In response to this challenge, quasi-hyperbolic discounting was proposed: it captures the most important properties of generalized hyperbolic discounting while remaining tractable in analytical modeling, and it is therefore common to substitute generalized hyperbolic discounting with quasi-hyperbolic discounting. 
This paper argues that substituting one of these two models for the other leads to different conclusions in long-term decisions, especially in the case of series; hence all models that use quasi-hyperbolic discounting for long-term decisions should be revised if they assume that the generalized hyperbolic discounting model would yield the same conclusions.
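The long-horizon divergence argued above can be seen directly from the standard discount functions. The parameter values below (β, δ, α, γ) are illustrative calibrations, not taken from the paper:

```python
def exponential(t, delta=0.96):
    """Standard exponential discounting."""
    return delta ** t

def quasi_hyperbolic(t, beta=0.7, delta=0.96):
    """Beta-delta (Laibson) discounting: a one-off present bias, then
    exponential decay."""
    return 1.0 if t == 0 else beta * delta ** t

def generalized_hyperbolic(t, alpha=1.0, gamma=1.0):
    """Loewenstein-Prelec generalized hyperbolic discounting
    (1 + alpha*t)^(-gamma/alpha)."""
    return (1.0 + alpha * t) ** (-gamma / alpha)

# Both capture present bias at short horizons, but the tails differ in kind:
# quasi-hyperbolic decays geometrically, generalized hyperbolic only
# polynomially, so the ratio of the two drifts without bound over time.
ratio_short = quasi_hyperbolic(1) / generalized_hyperbolic(1)
ratio_long = quasi_hyperbolic(50) / generalized_hyperbolic(50)
```

Because the ratio of the two discount functions is not even monotone (the geometric tail eventually falls below the polynomial one), valuations of long series computed under one model cannot be mapped onto the other by a simple recalibration, which is the substitution failure the paper describes.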
Resumo:
This research investigates a new structural system utilising modular construction. Five-sided boxes are cast on-site and stacked together to form a building. An analytical model of a typical building was created in each of two finite-element analysis programs (Robot Millennium and ETABS). The pros and cons of both Robot Millennium and ETABS are listed at several key stages in the development of an analytical model utilising this structural system. Robot Millennium was initially utilised but created an analytical model too large to be run successfully; the computational requirements exceeded those of conventional computers. Robot Millennium was therefore abandoned in favour of ETABS, whose simpler algorithms and assumptions permitted running this large computational model. Tips are provided, and pitfalls signalled, throughout the process of modelling complex buildings of this type. The building under high seismic loading required a new horizontal shear mechanism. This dissertation proposes a secondary floor that ties to the modular box through the use of gunwales, and roughened surfaces with epoxy coatings. In addition, vertical connections necessitated a new type of shear wall, consisting of waffled external walls tied through both reinforcement and a secondary concrete pour. This structural system generated a new building that was found to be very rigid compared to a conventional structure. The proposed modular building exhibited a period of 1.27 seconds, about one-fifth that of a conventional building. The maximum lateral drift, 6.14 inches, occurs under seismic loading and is one-quarter of a conventional building's drift. The deflected shape and pattern of the interstorey drifts are consistent with those of a coupled shear-wall building. 
In conclusion, the computer analysis indicates that this new structure exceeds current code requirements for both hurricane winds and high seismic loads, and concomitantly provides a shortened construction time with reduced cost.
Resumo:
The performance of building envelopes and roofing systems significantly depends on accurate knowledge of wind loads and the response of envelope components under realistic wind conditions. Wind tunnel testing is a well-established practice to determine wind loads on structures. For small structures, much larger model scales are needed than for large structures to maintain modeling accuracy and minimize Reynolds number effects. In these circumstances the ability to obtain a large enough turbulence integral scale is usually compromised by the limited dimensions of the wind tunnel, meaning that it is not possible to simulate the low-frequency end of the turbulence spectrum. Such flows are called flows with Partial Turbulence Simulation. In this dissertation, the test procedure and scaling requirements for tests in partial turbulence simulation are discussed. A theoretical method is proposed for including the effects of low-frequency turbulence in the post-test analysis. In this theory the turbulence spectrum is divided into two distinct statistical processes: one at high frequencies, which can be simulated in the wind tunnel, and one at low frequencies, which can be treated in a quasi-steady manner. The joint probability of load resulting from the two processes is derived, from which full-scale equivalent peak pressure coefficients can be obtained. The efficacy of the method is proved by comparing predicted data derived from tests on large-scale models of the Silsoe Cube and Texas Tech University buildings in the Wall of Wind facility at Florida International University with the available full-scale data. For multi-layer building envelopes such as rain-screen walls, roof pavers, and vented energy-efficient walls, not only peak wind loads but also their spatial gradients are important. Wind-permeable roof claddings like roof pavers are not well dealt with in many existing building codes and standards.
Large-scale experiments were carried out to investigate the wind loading on concrete pavers including wind blow-off tests and pressure measurements. Simplified guidelines were developed for design of loose-laid roof pavers against wind uplift. The guidelines are formatted so that use can be made of the existing information in codes and standards such as ASCE 7-10 on pressure coefficients on components and cladding.
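The quasi-steady treatment of the low-frequency turbulence mentioned above can be illustrated with the textbook quasi-steady relation between mean and peak pressure coefficients; the gust peak factor and turbulence intensity values below are illustrative, and this is not the dissertation's joint-probability derivation:

```python
def quasi_steady_peak_cp(cp_mean, turb_intensity, peak_factor=3.5):
    """Quasi-steady estimate of a peak pressure coefficient: surface pressure
    is assumed to follow the instantaneous velocity pressure, so the peak
    coefficient scales with the squared gust ratio
    (U_peak / U_mean)^2 = (1 + g * Iu)^2."""
    gust_ratio_sq = (1.0 + peak_factor * turb_intensity) ** 2
    return cp_mean * gust_ratio_sq

# Illustrative suction coefficient at 20% turbulence intensity.
cp_peak = quasi_steady_peak_cp(cp_mean=-1.0, turb_intensity=0.2)
```

In the partial-turbulence framework described above, only the unsimulated low-frequency part of the spectrum is treated this way, and the result is combined probabilistically with the measured high-frequency response rather than applied to the whole flow.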
Resumo:
The aim of this work is to present a methodology to develop cost-effective thermal management solutions for microelectronic devices, capable of removing the maximum amount of heat and delivering maximally uniform temperature distributions. The topological and geometrical characteristics of multiple-story three-dimensional branching networks of microchannels were developed using multi-objective optimization. A conjugate heat transfer analysis software package and an automatic 3D microchannel network generator were developed and coupled with a modified version of a particle-swarm optimization algorithm, with the goal of creating a design tool for 3D networks of optimized coolant flow passages. Numerical algorithms in the conjugate heat transfer solution package include a quasi-1D thermo-fluid solver and a steady heat diffusion solver, which were validated against results from a high-fidelity Navier–Stokes equations solver and analytical solutions for basic fluid dynamics test cases. Pareto-optimal solutions demonstrate that thermal loads of up to 500 W/cm² can be managed with 3D microchannel networks, with pumping power requirements up to 50% lower than those of currently used high-performance cooling technologies.
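The particle-swarm optimizer at the heart of the design tool can be sketched in its basic single-objective form; the thesis used a modified multi-objective variant coupled to the conjugate heat transfer solver, so the toy objective, swarm size, and coefficients below are illustrative only:

```python
import random

def pso(f, dim=2, n_particles=20, iters=200, lo=-5.0, hi=5.0, seed=1):
    """Basic particle swarm optimizer: each particle keeps an inertial
    velocity plus cognitive and social pulls toward its personal best and
    the swarm's global best positions."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(x) for x in xs]
    pval = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = list(pbest[g]), pval[g]
    initial_gval = gval
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                            + c2 * rng.random() * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            v = f(xs[i])
            if v < pval[i]:
                pbest[i], pval[i] = list(xs[i]), v
                if v < gval:
                    gbest, gval = list(xs[i]), v
    return gbest, gval, initial_gval

sphere = lambda x: sum(c * c for c in x)   # stand-in objective
best, val, val0 = pso(sphere)
```

In the design-tool setting, `f` would evaluate a generated microchannel network with the conjugate heat transfer solver, and a Pareto archive would replace the single global best.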
Resumo:
Dynamics of biomolecules over various spatial and time scales are essential for biological functions such as molecular recognition, catalysis and signaling. However, reconstruction of biomolecular dynamics from experimental observables requires the determination of a conformational probability distribution. Unfortunately, these distributions cannot be fully constrained by the limited information from experiments, making the problem an ill-posed one in the terminology of Hadamard. The ill-posed nature of the problem comes from the fact that it has no unique solution. Multiple or even an infinite number of solutions may exist. To avoid the ill-posed nature, the problem needs to be regularized by making assumptions, which inevitably introduce biases into the result.
Here, I present two continuous probability density function approaches to solve an important inverse problem called the RDC trigonometric moment problem. By focusing on interdomain orientations, we reduced the problem to the determination of a distribution on the 3D rotational space from residual dipolar couplings (RDCs). We derived an analytical equation that relates the alignment tensors of adjacent domains, which serves as the foundation of the two methods. In the first approach, the ill-posed nature of the problem was avoided by introducing a continuous distribution model, which enjoys a smoothness assumption. To find the optimal solution for the distribution, we also designed an efficient branch-and-bound algorithm that exploits the mathematical structure of the analytical solutions. The algorithm is guaranteed to find the distribution that best satisfies the analytical relationship. We observed good performance of the method when tested under various levels of experimental noise and when applied to two protein systems. The second approach avoids the use of any model by employing maximum entropy principles. This 'model-free' approach delivers the least biased result, representing our state of knowledge. In this approach, the solution is an exponential function of Lagrange multipliers. To determine the multipliers, a convex objective function is constructed; consequently, the maximum entropy solution can be found easily by gradient descent methods. Both algorithms can be applied to biomolecular RDC data in general, including data from RNA and DNA molecules.
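The maximum entropy machinery described above (solution exponential in Lagrange multipliers, multipliers found by gradient descent on a convex dual) can be sketched on a one-angle toy problem. A real application would constrain moments derived from the alignment-tensor relationship on the 3D rotational space; the single cosine constraint and discretization below are illustrative:

```python
import math

def maxent_distribution(feature, target, n_states=360, lr=0.5, iters=2000):
    """Find p(i) proportional to exp(lam * f(i)) such that E_p[f] = target,
    by gradient descent on the convex dual  log Z(lam) - lam * target,
    whose gradient is E_p[f] - target."""
    f = [feature(2 * math.pi * i / n_states) for i in range(n_states)]
    lam = 0.0
    for _ in range(iters):
        w = [math.exp(lam * fi) for fi in f]
        z = sum(w)
        mean_f = sum(wi * fi for wi, fi in zip(w, f)) / z
        lam -= lr * (mean_f - target)
    w = [math.exp(lam * fi) for fi in f]
    z = sum(w)
    return [wi / z for wi in w], lam

# Constrain the mean cosine of an orientation angle to 0.3; the maxent answer
# is a von Mises-type exponential-family distribution over the circle.
p, lam = maxent_distribution(math.cos, 0.3)
mean_cos = sum(pi * math.cos(2 * math.pi * i / 360) for i, pi in enumerate(p))
```

Because the dual is convex in the multipliers, this simple descent is globally convergent, which is what makes the 'model-free' approach computationally easy despite the ill-posedness of the original inverse problem.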
Resumo:
In deregulated power markets it is necessary to have an appropriate transmission pricing methodology that also takes congestion and reliability into account, in order to ensure economically viable, equitable, and congestion-free power transfer capability with high reliability and security. This thesis presents the results of research on the development of a Decision Making Framework (DMF) of concepts, data-analytic methods, and modelling methods for reliability-benefit-reflective optimal evaluation of transmission cost for composite power systems, using probabilistic methods. The methodology within the DMF utilises a full AC Newton–Raphson load flow and a Monte Carlo approach to determine reliability indices, which are then used in the proposed Meta-Analytical Probabilistic Approach (MAPA) for the evaluation and calculation of the Reliability-benefit-Reflective Optimal Transmission Cost (ROTC) of a transmission system. The DMF includes methods for allocating transmission-line embedded costs among transmission transactions, accounting for line capacity use, as well as congestion costing that can be used for pricing, applying the Power Transfer Distribution Factor (PTDF) method and Bialek's tracing method. The MAPA utilises bus, generator, line, reliability, and Customer Damage Function (CDF) data for congestion, transmission, and reliability costing studies, using the proposed application of PTDF and other established methods; the results are compared, analysed, and selected according to area/state requirements and then integrated to develop the ROTC. 
Case studies involving standard 7-bus, IEEE 30-bus, and 146-bus Indian utility test systems are conducted and reported in the relevant sections of the dissertation. There is close correlation between the results obtained through the proposed application of the PTDF method and those of Bialek's method and different MW-Mile methods. The novel contributions of this research are: first, the application of the PTDF method developed for the determination of transmission and congestion costing, compared against other proven methods, with the viability of the developed method explained in the methodology, discussion, and conclusion chapters; second, the development of a comprehensive DMF that helps decision makers analyse and select a costing approach according to their requirements, since the DMF integrates all the costing approaches to achieve the ROTC; and third, the implementation of the composite methodology for calculating the ROTC as suites of algorithms and MATLAB programs for each part of the DMF, described in the methodology section. Finally, the dissertation concludes with suggestions for future work.
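The PTDF quantities used throughout the costing framework can be illustrated in their simplest DC power-flow form (the thesis itself works with a full AC load flow and Monte Carlo reliability data); the 3-bus example and reactance values below are illustrative:

```python
import numpy as np

def ptdf_matrix(n_bus, lines, slack=0):
    """DC power-flow PTDFs: sensitivity of each line flow to a 1 p.u.
    injection at each bus, withdrawn at the slack bus.
    lines is a list of (from_bus, to_bus, reactance) tuples."""
    b_bus = np.zeros((n_bus, n_bus))
    for i, j, x in lines:
        b = 1.0 / x
        b_bus[i, i] += b
        b_bus[j, j] += b
        b_bus[i, j] -= b
        b_bus[j, i] -= b
    keep = [k for k in range(n_bus) if k != slack]
    x_red = np.linalg.inv(b_bus[np.ix_(keep, keep)])
    theta = np.zeros((n_bus, n_bus))   # bus angles per unit injection at each bus
    theta[np.ix_(keep, keep)] = x_red
    return np.array([[(theta[i, k] - theta[j, k]) / x for k in range(n_bus)]
                     for i, j, x in lines])

# Symmetric 3-bus triangle with equal reactances: an injection at bus 1
# (slack at bus 0) splits 2/3 over the direct line and 1/3 over the
# two-line path, the classic sanity check for PTDF code.
lines = [(0, 1, 0.1), (1, 2, 0.1), (0, 2, 0.1)]
H = ptdf_matrix(3, lines)
```

In a costing application, each transaction's injections and withdrawals are multiplied through such a matrix to attribute line usage (and hence embedded and congestion costs) to that transaction.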
Resumo:
This thesis studies mobile robotic manipulators, where one or more robot manipulator arms are integrated with a mobile robotic base. The base could be a wheeled or tracked vehicle, or it might be a multi-limbed locomotor. As robots are increasingly deployed in complex and unstructured environments, the need for mobile manipulation increases. Mobile robotic assistants have the potential to revolutionize human lives in a large variety of settings including home, industrial and outdoor environments.
Mobile Manipulation is the use or study of such mobile robots as they interact with physical objects in their environment. As compared to fixed base manipulators, mobile manipulators can take advantage of the base mechanism’s added degrees of freedom in the task planning and execution process. But their use also poses new problems in the analysis and control of base system stability, and the planning of coordinated base and arm motions. For mobile manipulators to be successfully and efficiently used, a thorough understanding of their kinematics, stability, and capabilities is required. Moreover, because mobile manipulators typically possess a large number of actuators, new and efficient methods to coordinate their large numbers of degrees of freedom are needed to make them practically deployable. This thesis develops new kinematic and stability analyses of mobile manipulation, and new algorithms to efficiently plan their motions.
I first develop detailed and novel descriptions of the kinematics governing the operation of multi-limbed legged robots working in the presence of gravity, whose limbs may also be simultaneously used for manipulation. The fundamental stance constraint that arises from simple assumptions about friction, ground contact, and feasible motions is derived. Thereafter, a local relationship between joint motions and motions of the robot abdomen and reaching limbs is developed. Based on these relationships, one can define and analyze local kinematic qualities including limberness, wrench resistance, and local dexterity. While previous researchers have noted the similarity between multi-fingered grasping and quasi-static manipulation, this thesis makes explicit connections between these two problems.
The kinematic expressions form the basis for a local motion planning problem that determines the joint motions needed to achieve several simultaneous objectives while maintaining stance stability in the presence of gravity. This problem is translated into a convex quadratic program, entitled the balanced priority solution, whose existence and uniqueness properties are developed; it is related in spirit to the classical redundancy resolution and task-priority approaches. With some simple modifications, this local planning and optimization problem can be extended to handle a large variety of goals and constraints that arise in mobile manipulation. The local planning problem applies readily to other mobile bases, including wheeled and articulated bases. This thesis describes the use of the local planning techniques to generate global plans, as well as their use within a feedback loop. The work in this thesis is motivated in part by many practical tasks involving the Surrogate and RoboSimian robots at NASA/JPL, and a large number of examples involving the two robots, both real and simulated, are provided.
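The balanced priority QP itself is not reproduced here, but the classical redundancy-resolution scheme it is related to has a closed-form convex-QP core that can be sketched directly. The Jacobian, task velocity, and damping value below are illustrative assumptions:

```python
import numpy as np

def damped_ls_velocities(J, v_task, W=None, damping=1e-6):
    """Joint velocities minimizing ||J qdot - v||_W^2 + damping * ||qdot||^2,
    a strictly convex quadratic program solved via its normal equations
    (damped weighted least squares, a classical redundancy-resolution
    scheme)."""
    m, n = J.shape
    W = np.eye(m) if W is None else W
    A = J.T @ W @ J + damping * np.eye(n)
    return np.linalg.solve(A, J.T @ W @ v_task)

# A 2-task, 3-joint example: with small damping, the solution approaches the
# minimum-norm joint velocity that achieves the task exactly.
J = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
qdot = damped_ls_velocities(J, np.array([1.0, 1.0]))
```

A balanced-priority formulation generalizes this by weighting several simultaneous objectives (task tracking, posture, stability margins) inside one strictly convex cost, which is what guarantees the existence and uniqueness properties mentioned above.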
Finally, this thesis provides an analysis of simultaneous force and motion control for multi-limbed legged robots. Starting with a classical linear stiffness relationship, an analysis of this problem for multiple point contacts is described. The local velocity planning problem is extended to include the generation of forces, as well as to maintain stability using force feedback. This thesis also provides a concise, novel definition of static stability, and proves some conditions under which it is satisfied.