989 results for fixed path methods
Abstract:
2000 Mathematics Subject Classification: 65H10.
Abstract:
2000 Mathematics Subject Classification: 65G99, 65K10, 47H04.
Abstract:
The paper reviews some additive and multiplicative properties of ranking procedures used for generalized tournaments with missing values and multiple comparisons. The methods analysed are the score, generalised row sum, and least squares methods, as well as fair bets and its variants. It is argued that generalised row sum should be applied not with a fixed parameter but with a variable one, proportional to the number of known comparisons. It is shown that a natural additive property has strong links to independence of irrelevant matches, an axiom judged unfavourable when players have different opponents.
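The least squares rating mentioned above can be sketched on a hypothetical four-player tournament with multiple comparisons and one missing pairing (the data below are illustrative, not from the paper):

```python
import numpy as np

# Hypothetical 4-player generalized tournament: R[i, j] = wins of i over j,
# with multiple comparisons and one missing pairing (players 0 and 3 never met).
R = np.array([
    [0, 2, 1, 0],
    [1, 0, 2, 1],
    [0, 1, 0, 2],
    [0, 1, 1, 0],
], dtype=float)

M = R + R.T                        # matches played per pair
s = (R - R.T).sum(axis=1)          # score: wins minus losses
L = np.diag(M.sum(axis=1)) - M     # Laplacian of the comparison multigraph

# Least squares rating: solve L r = s with the ratings summing to zero
# (the extra row pins down the Laplacian's null space).
A = np.vstack([L, np.ones(4)])
b = np.append(s, 0.0)
r = np.linalg.lstsq(A, b, rcond=None)[0]
```

Because the Laplacian encodes which opponents each player actually faced, missing pairings are handled naturally, which is exactly the setting the paper considers.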
Abstract:
Recent discussion regarding whether the noise that limits 2AFC discrimination performance is fixed or variable has focused either on describing experimental methods that presumably dissociate the effects of response mean and variance or on reanalyzing a published data set with the aim of determining how to solve the question through goodness-of-fit statistics. This paper illustrates that the question cannot be solved by fitting models to data and assessing goodness-of-fit because data on detection and discrimination performance can be indistinguishably fitted by models that assume either type of noise when each is coupled with a convenient form for the transducer function. Thus, success or failure at fitting a transducer model merely illustrates the capability (or lack thereof) of some particular combination of transducer function and variance function to account for the data, but it cannot disclose the nature of the noise. We also comment on some of the issues that have been raised in the recent exchange on the topic, namely, the existence of additional constraints for the models, the presence of asymmetric asymptotes, the likelihood of history-dependent noise, and the potential of certain experimental methods to dissociate the effects of response mean and variance.
Abstract:
Fixed-step-size (FSS) and Bayesian staircases are widely used methods to estimate sensory thresholds in 2AFC tasks, although a direct comparison of both types of procedure under identical conditions has not previously been reported. A simulation study and an empirical test were conducted to compare the performance of optimized Bayesian staircases with that of four optimized variants of the FSS staircase differing in their up-down rule. The ultimate goal was to determine whether FSS or Bayesian staircases are the best choice in experimental psychophysics. The comparison considered the properties of the estimates (i.e. bias and standard errors) in relation to their cost (i.e. the number of trials to completion). The simulation study showed that mean estimates of Bayesian and FSS staircases are dependable when sufficient trials are given and that, in both cases, the standard deviation (SD) of the estimates decreases with the number of trials, although the SD of Bayesian estimates is always lower than that of FSS estimates (and thus, Bayesian staircases are more efficient). The empirical test did not support these conclusions, as (1) neither procedure rendered estimates converging on some value, (2) standard deviations did not follow the expected pattern of decrease with number of trials, and (3) both procedures appeared to be equally efficient. Potential factors explaining the discrepancies between simulation and empirical results are commented upon and, all things considered, a sensible recommendation is for psychophysicists to run no fewer than 18 and no more than 30 reversals of an FSS staircase implementing the 1-up/3-down rule.
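The recommended 1-up/3-down FSS staircase can be sketched as follows; the logistic observer model, starting level, step size, and stopping rule are illustrative assumptions, not the paper's exact settings:

```python
import math
import random

def run_staircase(threshold, slope=1.0, start=2.0, step=0.2,
                  max_reversals=30, seed=0):
    """1-up/3-down fixed-step-size staircase on a simulated 2AFC task.

    The observer follows a logistic psychometric function with a 50%
    guessing rate (an illustrative model, not the paper's observer).
    """
    rng = random.Random(seed)
    level, direction = start, 0
    run, reversals = 0, []
    while len(reversals) < max_reversals:
        # 2AFC: P(correct) rises from 0.5 (guessing) toward 1.0
        p = 0.5 + 0.5 / (1.0 + math.exp(-(level - threshold) / slope))
        if rng.random() < p:            # correct response
            run += 1
            if run == 3:                # 3 correct in a row -> step down
                run = 0
                if direction == +1:
                    reversals.append(level)
                direction = -1
                level -= step
        else:                           # incorrect -> step up
            run = 0
            if direction == -1:
                reversals.append(level)
            direction = +1
            level += step
    # Discard early reversals, average the rest; 1-up/3-down converges
    # near the ~79.4%-correct point of the psychometric function.
    return sum(reversals[4:]) / (len(reversals) - 4)

estimate = run_staircase(threshold=0.0)
```

With a threshold of 0 and unit slope, the staircase should settle slightly above the threshold, at the level yielding about 79.4% correct.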
Abstract:
Continuous variables are one of the major data types collected by survey organizations. They can be incomplete, so that the data collectors need to fill in the missing values, or they can contain sensitive information that needs protection from re-identification. One approach to protecting continuous microdata is to sum the values within cells defined by different features. In this thesis, I present novel methods of multiple imputation (MI) that can be applied to impute missing values and to synthesize confidential values for continuous and magnitude data.
The first method is for limiting the disclosure risk of the continuous microdata whose marginal sums are fixed. The motivation for developing such a method comes from the magnitude tables of non-negative integer values in economic surveys. I present approaches based on a mixture of Poisson distributions to describe the multivariate distribution so that the marginals of the synthetic data are guaranteed to sum to the original totals. At the same time, I present methods for assessing disclosure risks in releasing such synthetic magnitude microdata. The illustration on a survey of manufacturing establishments shows that the disclosure risks are low while the information loss is acceptable.
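The marginal-sum guarantee rests on a standard fact: independent Poisson counts conditioned on their sum follow a multinomial distribution. A minimal sketch, where the data and the smoothed rates are hypothetical stand-ins for the fitted mixture-of-Poissons model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Original confidential magnitudes for one cell variable (hypothetical data).
original = np.array([12, 0, 7, 31, 5, 45])
total = int(original.sum())

# Smoothed Poisson rates standing in for the fitted mixture-of-Poissons model.
lam = np.maximum(original.astype(float), 0.5)

# Independent Poisson(lam_i) counts conditioned on their sum are
# Multinomial(total, lam / lam.sum()), so sampling from the conditional
# guarantees the synthetic values reproduce the original total exactly.
synthetic = rng.multinomial(total, lam / lam.sum())
```

Every draw of `synthetic` sums to the original total, so the released magnitude table preserves the published marginal while the individual cells are perturbed.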
The second method is for releasing synthetic continuous microdata by a nonstandard MI method. Traditionally, MI fits a model on the confidential values and then generates multiple synthetic datasets from this model. Its disclosure risk tends to be high, especially when the original data contain extreme values. I present a nonstandard MI approach conditioned on protective intervals: the basic idea is to estimate the model parameters from these intervals rather than from the confidential values. The encouraging results of simple simulation studies suggest the potential of this new approach for limiting the posterior disclosure risk.
The third method is for imputing missing values in continuous and categorical variables. It extends a hierarchically coupled mixture model with local dependence, but separates the variables into non-focused (e.g., almost fully observed) and focused (e.g., largely missing) ones. The sub-model structure of the focused variables is more complex than that of the non-focused ones; their cluster indicators are linked together by tensor factorization, and the focused continuous variables depend locally on non-focused values. The model properties suggest that moving strongly associated non-focused variables to the focused side can help to improve estimation accuracy, which is examined in several simulation studies. The method is applied to data from the American Community Survey.
Abstract:
Free energy calculations are a computational method for determining thermodynamic quantities, such as free energies of binding, via simulation.
Currently, due to computational and algorithmic limitations, free energy calculations are limited in scope.
In this work, we propose two methods for improving the efficiency of free energy calculations.
First, we expand the state space of alchemical intermediates, and show that this expansion enables us to calculate free energies along lower variance paths.
We use Q-learning, a reinforcement learning technique, to discover and optimize paths at low computational cost.
Second, we reduce the cost of sampling along a given path by using sequential Monte Carlo samplers.
We develop a new free energy estimator, pCrooks (pairwise Crooks), a variant on the Crooks fluctuation theorem (CFT), which enables decomposition of the variance of the free energy estimate for discrete paths, while retaining beneficial characteristics of CFT.
Combining these two advancements, we show that for some test models, optimal expanded-space paths have a nearly 80% reduction in variance relative to the standard path.
Additionally, our free energy estimator converges at a more consistent rate and on average 1.8 times faster when we enable path searching, even when the cost of path discovery and refinement is considered.
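The path-search idea can be sketched with tabular Q-learning on a toy graph of alchemical intermediates, where each edge carries an assumed, precomputed variance contribution and the agent learns the minimum-variance route (all states, edges, and costs below are hypothetical, not the thesis's models):

```python
import random

# Toy expanded state space: nodes are alchemical intermediates, edges carry
# an assumed per-step variance contribution (all values hypothetical).
edges = {
    0: {1: 1.0, 3: 0.3},   # 0 = decoupled state
    1: {0: 1.0, 2: 1.0},
    2: {1: 1.0, 5: 1.0},
    3: {0: 0.3, 4: 0.3},
    4: {3: 0.3, 5: 0.3},
    5: {},                 # 5 = fully coupled target state
}

def q_learn_path(edges, start=0, goal=5, episodes=2000,
                 alpha=0.5, eps=0.2, seed=0):
    """Tabular Q-learning where Q[s][a] estimates the variance remaining
    after stepping to neighbour a; the greedy policy then follows the
    minimum-variance path from start to goal."""
    rng = random.Random(seed)
    Q = {s: {a: 0.0 for a in nbrs} for s, nbrs in edges.items()}
    for _ in range(episodes):
        s = start
        while s != goal:
            nbrs = list(edges[s])
            a = rng.choice(nbrs) if rng.random() < eps else min(Q[s], key=Q[s].get)
            future = 0.0 if a == goal else min(Q[a].values())
            Q[s][a] += alpha * (edges[s][a] + future - Q[s][a])
            s = a
    path, s = [start], start
    while s != goal and len(path) < len(edges):
        s = min(Q[s], key=Q[s].get)
        path.append(s)
    return path

path = q_learn_path(edges)   # the learned low-variance route
```

On this graph the direct chain 0-1-2-5 accumulates variance 3.0, while the expanded-space detour 0-3-4-5 accumulates only 0.9, so the learned path takes the detour, mirroring the variance reductions reported above.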
Abstract:
Background: For tibial fractures, the decision to fix a concomitant fibular fracture is undertaken on a case-by-case basis. To aid in this clinical decision-making process, we investigated whether loss of integrity of the fibula significantly destabilises midshaft tibial fractures, whether fixation of the fibula restores stability to the tibia, and whether removal of the fibula and interosseous membrane for expediency in biomechanical testing significantly influences tibial interfragmentary mechanics. Methods: Tibia/fibula pairs were harvested from six cadaveric donors with the interosseous membrane intact. A tibial osteotomy fracture was fixed by reamed intramedullary (IM) nailing. Axial, torsion, bending, and shear tests were completed for four models of fibular involvement: intact fibula, osteotomy fracture, fibular plating, and resected fibula and interosseous membrane. Findings: Overall construct stiffness decreased slightly with fibular osteotomy compared to intact bone, but this change was not statistically significant. Under low loads, the influence of the fibula on construct stability was only statistically significant in torsion (large effect size). Fibular plating stiffened the construct slightly, but this change was not statistically significant compared to the fibular osteotomy case. Complete resection of the fibula and interosseous membrane significantly decreased construct torsional stiffness only (large effect size). Interpretation: These results suggest that fixation of the fibula may not contribute significantly to the stability of diaphyseal tibial fractures and should not be undertaken unless otherwise clinically indicated. For testing purposes, load-sharing through the interosseous membrane contributes significantly to overall construct mechanics, especially in torsion, and we recommend preservation of these structures when possible.
Abstract:
In settings of intergroup conflict, identifying contextually-relevant risk factors for youth development is an important task. In Vukovar, Croatia, a city devastated during the war in the former Yugoslavia, ethno-political tensions remain. The current study utilized a mixed-method approach to identify two salient community-level risk factors (ethnic tension and general antisocial behavior) and related emotional insecurity responses (ethnic and non-ethnic insecurity) among youth in Vukovar. In Study 1, focus group discussions (N=66) with mothers, fathers, and adolescents 11 to 15 years old were analyzed using the Constant Comparative Method, revealing two types of risk and insecurity responses. In Study 2, youth (N=227, 58% male, M=15.88, SD=1.12 years old) responded to quantitative scales developed from the focus groups; discriminant validity was demonstrated and path analyses established predictive validity between each type of risk and insecurity. First, community ethnic tension (i.e., threats related to war/ethnic identity) significantly predicted ethnic insecurity for all youth (β=.41, p<.001). Second, experience with community antisocial behavior (i.e., general crime found in any context) predicted non-ethnic community insecurity for girls (β=.32, p<.05), but not for boys. These findings are the first to show multiple forms of emotional insecurity at the community level; implications for future research are discussed.
Abstract:
During the epoch when the first collapsed structures formed (6<z<50) our Universe went through an extended period of changes. Some of the radiation from the first stars and accreting black holes in those structures escaped and changed the state of the Intergalactic Medium (IGM). The era of this global phase change, in which the state of the IGM was transformed from cold and neutral to warm and ionized, is called the Epoch of Reionization. In this thesis we focus on numerical methods to calculate the effects of this escaping radiation. We start by considering the performance of the cosmological radiative transfer code C2-Ray. We find that although this code efficiently and accurately solves for the changes in the ionized fractions, it can yield inaccurate results for the temperature changes. We introduce two new elements to improve the code. The first element, an adaptive time step algorithm, quickly determines an optimal time step by only considering the computational cells relevant for this determination. The second element, asynchronous evolution, allows different cells to evolve with different time steps. An important constituent of methods to calculate the effects of ionizing radiation is the transport of photons through the computational domain, or "ray-tracing". We devise a novel ray-tracing method called PYRAMID which uses a new geometry: the pyramidal geometry. This geometry shares properties with both the standard Cartesian and spherical geometries. This makes it on the one hand easy to use in conjunction with a Cartesian grid and on the other hand ideally suited to trace radiation from a radially emitting source. A time-dependent photoionization calculation not only requires tracing the path of photons but also solving the coupled set of photoionization and thermal equations. Several different solvers for these equations are in use in cosmological radiative transfer codes.
We conduct a detailed and quantitative comparison of four different standard solvers in which we evaluate how their accuracy depends on the choice of the time step. This comparison shows that their performance can be characterized by two simple parameters and that C2-Ray generally performs best.
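The dependence of solver accuracy on the time step can be illustrated on a toy one-zone ionization balance; the rates and the explicit Euler solver below are illustrative, not the solvers compared in the thesis:

```python
import math

# Toy one-zone ionization balance: dx/dt = Gamma*(1 - x) - alpha*x,
# with photoionization rate Gamma and recombination rate alpha
# (illustrative values, not physical ones).
Gamma, alpha = 3.0, 1.0
rate = Gamma + alpha
x_eq = Gamma / rate                      # equilibrium ionized fraction

def exact(x0, t):
    """Analytic solution of the linear balance equation."""
    return x_eq + (x0 - x_eq) * math.exp(-rate * t)

def euler(x0, t, dt):
    """Explicit Euler integration with a fixed time step."""
    x = x0
    for _ in range(int(round(t / dt))):
        x += dt * (Gamma * (1.0 - x) - alpha * x)
    return x

x0, t = 0.0, 1.0
err_coarse = abs(euler(x0, t, 0.2) - exact(x0, t))
err_fine = abs(euler(x0, t, 0.01) - exact(x0, t))
```

Shrinking the time step by a factor of 20 reduces the error by roughly the same factor for this first-order scheme, which is the kind of time-step sensitivity the solver comparison quantifies.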
Abstract:
Recent years have witnessed increased development of small, autonomous fixed-wing Unmanned Aerial Vehicles (UAVs). To unlock widespread applicability of these platforms, they need to be capable of operating under a variety of environmental conditions. Due to their small size, low weight, and low speeds, they must be able to cope with wind speeds that approach or even exceed the nominal airspeed. In this thesis, a nonlinear-geometric guidance strategy is presented that addresses this problem. More broadly, a methodology is proposed for the high-level control of non-holonomic unicycle-like vehicles in the presence of strong flowfields (e.g. winds, underwater currents) which may exceed the maximum vehicle speed. The proposed strategy guarantees convergence to a safe and stable vehicle configuration with respect to the flowfield, while preserving some tracking performance with respect to the target path. As an alternative approach, an algorithm based on Model Predictive Control (MPC) is developed, and a comparison of the advantages and disadvantages of both approaches is drawn. Evaluations in simulations and a challenging real-world flight experiment in very windy conditions confirm the feasibility of the proposed guidance approach.
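The kinematic constraint at the heart of this problem can be sketched as follows: with airspeed fixed, the achievable ground velocities form a circle centred on the wind vector, so some courses become infeasible once the wind exceeds the airspeed. The helper below is a hypothetical illustration of that constraint only, not the thesis's guidance law:

```python
import math

def max_ground_speed(course_deg, wind, airspeed):
    """Largest achievable ground speed along a desired course, or None if
    the course is infeasible.  `wind` is an (east, north) vector in m/s and
    `course_deg` is measured clockwise from north (hypothetical helper)."""
    th = math.radians(course_deg)
    d = (math.sin(th), math.cos(th))          # unit vector along the course
    along = wind[0] * d[0] + wind[1] * d[1]   # wind component along course
    cross = wind[0] * d[1] - wind[1] * d[0]   # wind component across course
    if abs(cross) > airspeed:
        return None        # airspeed too low to cancel the cross-wind
    s = along + math.sqrt(airspeed ** 2 - cross ** 2)
    return s if s > 0 else None
```

With a 15 m/s wind blowing toward the south and only 10 m/s of airspeed, flying south yields 25 m/s over ground, while the northbound and eastbound courses are infeasible, which is why a guidance strategy must fall back to a safe configuration relative to the flowfield.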
Abstract:
Background: Medial UKA performed in England and Wales represents 7 to 11% of all knee arthroplasty procedures and is most commonly performed using mobile-bearing designs. A fixed bearing eliminates the risk of bearing dislocation; however, some studies have shown higher revision rates for all-polyethylene tibial components compared to those that utilize metal-backed implants. The aim of this study is to analyse survivorship and clinical outcomes at up to 8 years of medial fixed-bearing Uniglide unicompartmental knee arthroplasty performed with an all-polyethylene tibial component through a minimally invasive approach. Methods: Between 2002 and 2009, 270 medial fixed UKAs were performed in our unit. Patients were reviewed pre-operatively and 5 and 8 years post-operatively. Clinical and radiographic reviews were carried out. Patients' outcome scores (Oxford, WOMAC and American Knee Score) were documented in our database and analysed. Results: Survival and clinical outcome data of 236 knees with a mean 7.3 years of follow-up are reported. Every patient with less than 4.93 years of follow-up had undergone a revision. The patients' average age at the time of surgery was 69.5 years. The American Knee Society Pain and Function scores, the Oxford Knee Score and the WOMAC score all improved significantly. The 5-year survival rate was 94.1% with implant revision surgery as the end point; the estimated 10-year survival rate is 91.3%. Fourteen patients were revised before the 5-year follow-up. Conclusion: Fixed-bearing Uniglide UKA with an all-polyethylene tibial component is a valuable tool in the management of medial compartment osteoarthritis, affording good short-term survivorship.
Abstract:
Reliability and dependability modeling can be employed during many stages of analysis of a computing system to gain insights into its critical behaviors. To provide useful results, realistic models of systems are often necessarily large and complex. Numerical analysis of these models presents a formidable challenge because the sizes of their state-space descriptions grow exponentially in proportion to the sizes of the models. On the other hand, simulation of the models requires analysis of many trajectories in order to compute statistically correct solutions. This dissertation presents a novel framework for performing both numerical analysis and simulation. The new numerical approach computes bounds on the solutions of transient measures in large continuous-time Markov chains (CTMCs). It extends existing path-based and uniformization-based methods by identifying sets of paths that are equivalent with respect to a reward measure and related to one another via a simple structural relationship. This relationship makes it possible for the approach to explore multiple paths at the same time, thus significantly increasing the number of paths that can be explored in a given amount of time. Furthermore, the use of a structured representation for the state space and the direct computation of the desired reward measure (without ever storing the solution vector) allow it to analyze very large models using a very small amount of storage. Often, path-based techniques must compute many paths to obtain tight bounds. In addition to presenting the basic path-based approach, we also present algorithms for computing more paths and tighter bounds quickly. One resulting approach is based on the concept of path composition, whereby precomputed subpaths are composed to compute the whole paths efficiently. Another approach is based on selecting important paths (among a set of many paths) for evaluation. Many path-based techniques suffer from having to evaluate many (unimportant) paths; evaluating only the important ones helps to compute tight bounds efficiently and quickly.
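The uniformization idea that the framework extends can be sketched on a toy CTMC; the generator and its rates below are hypothetical, and the sketch computes the full transient distribution rather than bounds on a reward measure:

```python
import math
import numpy as np

# Toy 3-state CTMC (hypothetical availability model: up, degraded, down).
Q = np.array([
    [-0.2,  0.2,  0.0],
    [ 1.0, -1.3,  0.3],
    [ 0.0,  2.0, -2.0],
])
t = 1.0

# Uniformization: pick Lambda >= max exit rate and build the embedded DTMC P.
Lam = max(-Q[i, i] for i in range(3))
P = np.eye(3) + Q / Lam

# Transient distribution pi(t) = sum_k Poisson(Lam*t; k) * pi0 @ P^k,
# truncating the Poisson series once its remaining mass is negligible.
pi0 = np.array([1.0, 0.0, 0.0])
pi_t = np.zeros(3)
v = pi0.copy()
for k in range(60):
    w = math.exp(-Lam * t) * (Lam * t) ** k / math.factorial(k)
    pi_t += w * v
    v = v @ P
```

Each term of the series corresponds to paths of length k in the uniformized chain, which is the structure the path-based bounding approach exploits by grouping reward-equivalent paths instead of enumerating them one by one.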
Abstract:
Offshore structures have numerous applications in different environments throughout the world and are used at different depths. With the expansion of marine industries, including Iran's offshore oil industry in the Persian Gulf region, more accurate modelling of these structures is essential in order to prevent incidents such as platform overturning and the serious damage sustained by the South Pars Phase 13 platforms, and this calls for new construction techniques. One of the methods used in the construction of offshore wind turbines is pre-piling. In this method, a template is fabricated in the workshop to specified dimensions, carried by special vessels to the desired location, and placed on the seabed. A vibration hammer then drives 3 or 4 piles through the template into the seabed. After piling, the template is removed from the site and the jacket is placed on the piles; the deck is then installed, so that the deck loads are transferred to the piles. A major advantage of this system is that the pile diameter can be chosen independently of the diameter of the jacket legs. This thesis examines a template fixed platform in the Soroush oil field, comparing the pre-piling and conventional piling systems in the Persian Gulf and evaluating the effect of the different designs on pre-piled platforms. The results suggest that, compared with conventional piling in the Persian Gulf, the pre-piling system offers more appropriate structural behaviour and better economic efficiency. All calculations and analyses were performed using the Abaqus software.