941 results for Nonlinear Decision Functions
Abstract:
Background: Despite being the stiffest airway of the bronchial tree, the trachea undergoes significant deformation due to intrathoracic pressure during breathing. The mechanical properties of the trachea affect the flow in the airway and may contribute to the biological function of the lung. Method: A Fung-type strain energy density function was used to investigate the nonlinear mechanical behavior of tracheal cartilage. A bending test on pig tracheal cartilage was performed, and a mathematical model for analyzing the deformation of tracheal cartilage was developed. The constants in the strain energy density function were determined by fitting the experimental data. Result: The experimental data show that tracheal cartilage is a nonlinear material displaying higher strength in compression than in tension. When the compression forces varied from -0.02 to -0.03 N and from -0.03 to -0.04 N, the deformation ratios were 11.03±2.18% and 7.27±1.59%, respectively. Both were much smaller than the deformation ratios (20.01±4.49%) under tension forces of 0.02 to 0.01 N. The Fung-type strain energy density function can capture this nonlinear behavior very well, whereas a linear stress-strain relation cannot: it underestimates the stability of the trachea by exaggerating the displacement in compression. This study may improve our understanding of the nonlinear behavior of tracheal cartilage and may be useful for future studies of tracheal collapse behavior under physiological and pathological conditions.
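To make the constitutive model concrete, here is a minimal sketch of fitting a one-dimensional Fung-type exponential stress-strain law, σ(E) = b(exp(aE) − 1), to synthetic data. This is only an illustration: the paper's full strain energy density function, bending geometry, and measured data are not reproduced here, so all names and numbers below are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

# One-dimensional Fung-type law: sigma(E) = b * (exp(a*E) - 1), the
# classic solution of d(sigma)/dE = a * (sigma + b). With a, b < 0 the
# response is stiffer in compression than in tension, matching the
# qualitative behavior reported for tracheal cartilage.
def fung_stress(strain, a, b):
    return b * (np.exp(a * strain) - 1.0)

# Illustrative strain/stress pairs (NOT the paper's measurements).
strain = np.array([-0.10, -0.05, -0.02, 0.02, 0.05, 0.10, 0.15, 0.20])
stress = np.array([-0.86, -0.32, -0.11, 0.09, 0.20, 0.32, 0.39, 0.43])

(a_fit, b_fit), _ = curve_fit(fung_stress, strain, stress, p0=(-5.0, -0.5))
print(f"fitted a = {a_fit:.2f}, b = {b_fit:.2f}")

# A linear stress-strain fit through the same data is symmetric in
# tension and compression, so it exaggerates compressive displacement.
slope, intercept = np.polyfit(strain, stress, 1)
print(f"linear modulus = {slope:.2f}")
```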
Abstract:
Background Tarsal tunnel syndrome is classified as a focal compressive neuropathy of the posterior tibial nerve or one of its associated branches, individually or collectively. The tunnel courses deep to the fascia, the flexor retinaculum and the abductor hallucis muscle of the foot and ankle. The condition is rare and regularly under-diagnosed, leading to a range of symptoms affecting the plantar margins of the foot. There are many intervention strategies for treating tarsal tunnel syndrome, with limited robust evidence to guide the clinical management of this condition. The role of conservative versus surgical interventions at various stages of the disease process remains unclear, and there is a need for a structured, step-wise approach to treating patients with this syndrome based on empirical evidence. This narrative review scrutinizes the literature to date, clarifying initial presentation, investigations and definitive treatment for the purpose of assisting future informed clinical decisions and prospective research endeavours. Process The literature searches incorporated in compiling this review included: the Cochrane Neuromuscular Group's Specialized Register (Cochrane Library 2013) and the databases EMBASE, AMED, MEDLINE, CINAHL, the Physiotherapy Evidence Database (PEDro), BioMed Central, Science Direct and the Trip Database (1972 to the present). Reference listings of located articles were also searched and scrutinized. Authors and experts within the field of lower-limb orthopaedics were contacted to discuss applicable data. Subject-specific searches using the following key terms were performed across all databases: tarsal tunnel syndrome, tibial neuralgia, compression neuropathy syndromes, tibial nerve impingement, tarsal tunnel neuropathy, entrapment tibial nerve, posterior tibial neuropathy. These search strategies were modified for the differing databases, adopting the sensitivity-searching tools and functions unique to each. This search strategy identified 88 journal articles of relevance for this narrative literature review. Findings This literature review has appraised the clinical significance of tarsal tunnel syndrome while assessing varied management interventions (non-surgical and surgical) for the treatment of this condition in both adults and children. According to our review, there is limited high-level robust evidence to guide and refine the clinical management of tarsal tunnel syndrome. Small-scale randomized controlled trials in groups with homogeneous aetiology are needed to analyse the effectiveness of specific treatment modalities. Conclusions Further research is needed to improve the clinical understanding, assessment and treatment of tarsal tunnel syndrome. Accordingly, a structured approach to managing patients who have been correctly diagnosed with this condition should be formulated on the basis of empirical evidence where possible.
Abstract:
The appealing concept of optimal harvesting is often used in fisheries to derive new management strategies. However, optimality depends on the objective function, which often varies, reflecting the interests of different groups of people. The aim of maximum sustainable yield is to extract the greatest amount of food from replenishable resources in a sustainable way. Maximum sustainable yield, however, may not be desirable from an economic point of view. Maximum economic yield maximizes the profit of fishing fleets (the harvesting sector) but ignores socio-economic benefits such as employment and other positive externalities. It may be more appropriate to use a maximum economic yield that is based on the value chain of the overall fishing sector, to better reflect society's interests. How to make more efficient use of a fishery for society rather than for fishing operators depends critically on the gain-function parameters, including multiplier effects and the inclusion or exclusion of certain costs. In particular, the optimal effort level based on the overall value chain moves closer to the optimal effort for maximum sustainable yield because of the multiplier effect. These issues are illustrated using the Australian Northern Prawn Fishery.
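To see how the choice of objective function and a value-chain multiplier shift the optimal effort, here is a minimal sketch using the textbook Gordon-Schaefer surplus-production model. This is an assumption: the abstract does not specify its bio-economic model, and the parameters below are illustrative, not calibrated to the Northern Prawn Fishery.

```python
# Gordon-Schaefer surplus-production sketch (illustrative parameters).
r, K, q = 0.8, 100.0, 0.01   # intrinsic growth, carrying capacity, catchability
p, c = 5.0, 2.0              # price per unit catch, cost per unit effort

def sustainable_yield(E):
    # Equilibrium yield at effort E: Y(E) = qEK(1 - qE/r).
    return q * E * K * (1.0 - q * E / r)

E_msy = r / (2.0 * q)  # effort maximizing sustainable yield

def E_mey(multiplier=1.0):
    # Effort maximizing m*p*Y(E) - c*E, where the multiplier m scales
    # the value of the catch to mimic value-chain (multiplier) effects:
    # E = (r / 2q) * (1 - c / (m p q K)).
    return E_msy * (1.0 - c / (multiplier * p * q * K))

for m in (1.0, 2.0, 5.0):
    print(f"multiplier {m}: E_MEY = {E_mey(m):.1f}  vs  E_MSY = {E_msy:.1f}")
# As the multiplier grows, the value-chain optimum approaches the MSY
# effort, which is the effect the abstract describes.
```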
Abstract:
A flexible and simple Bayesian decision-theoretic design for dose-finding trials is proposed in this paper. To reduce the computational burden, we adopt a working model with conjugate priors, which is flexible enough to fit all monotonic dose-toxicity curves and produces analytic posterior distributions. We also discuss how to use a proper utility function to reflect the interests of the trial. Patients are allocated based not only on the utility function but also on the chosen dose selection rule. The most popular dose selection rule is the one-step-look-ahead (OSLA), which selects the best-so-far dose. A more complicated rule, such as the two-step-look-ahead, is theoretically more efficient than the OSLA only when the required distributional assumptions are met, which, however, is often not the case in practice. We carried out extensive simulation studies to evaluate these two dose selection rules and found that OSLA was often more efficient than the two-step-look-ahead under the proposed Bayesian structure. Moreover, our simulation results show that the proposed Bayesian method's performance is superior to that of several popular Bayesian methods and that the negative impact of prior misspecification can be managed at the design stage.
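Below is a minimal sketch of a one-step-look-ahead rule under independent Beta-Binomial conjugate models, one per dose. This is only illustrative: the paper's working model enforces monotonic dose-toxicity curves, and its utility function is chosen to reflect the trial's interests, whereas the quadratic-loss utility and target below are assumptions.

```python
import numpy as np

TARGET = 0.30  # assumed target toxicity probability

# Independent Beta(alpha, beta) posteriors per dose, updated
# conjugately from binary toxicity outcomes (the paper's working model
# additionally links the doses through a monotonic curve).
alphas = np.ones(5)  # 1 + observed toxicities
betas = np.ones(5)   # 1 + observed non-toxicities

def utility(a, b):
    # Assumed utility: negative squared distance of the posterior mean
    # toxicity from the target.
    return -(a / (a + b) - TARGET) ** 2

def osla_dose(alphas, betas):
    # One-step-look-ahead: for each dose, average the utility over the
    # two possible next outcomes, weighted by the posterior predictive
    # probability of toxicity, and allocate the best dose.
    scores = []
    for a, b in zip(alphas, betas):
        p_tox = a / (a + b)
        scores.append(p_tox * utility(a + 1, b) + (1 - p_tox) * utility(a, b + 1))
    return int(np.argmax(scores))

alphas[2] += 1  # e.g. one toxicity observed at dose index 2
print("next dose:", osla_dose(alphas, betas))
```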
Abstract:
To detect errors in decision tables, one needs to decide whether a given set of constraints is feasible or not. This paper describes an algorithm to do so when the constraints are linear in variables that take only integer values. Decision tables with such constraints occur frequently in business data processing and in nonnumeric applications. The aim of the algorithm is to exploit the abundance of very simple constraints that occur in typical decision table contexts. Essentially, the algorithm is a backtrack procedure in which the solution space is pruned using the set of simple constraints. After some simplifications, the simple constraints are captured in an acyclic directed graph with weighted edges. Further, only those partial vectors are considered for extension which can be extended to assignments that at least satisfy the simple constraints. This is how pruning of the solution space is achieved. For every partial assignment considered, the graph representation of the simple constraints provides a lower bound for each variable that is not yet assigned a value. These lower bounds play a vital role in the algorithm, and they are obtained efficiently by updating older lower bounds. The present algorithm also incorporates an idea by which it can be checked whether or not an (m - 2)-ary vector can be extended to a solution vector of m components, thereby reducing backtracking by one component.
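The following is an illustrative sketch, not the paper's exact algorithm: "simple" difference constraints x_j ≥ x_i + w are held as edges of an acyclic weighted digraph (assumed listed in topological order) and used to propagate lower bounds that prune a backtracking search, while the remaining general linear constraints are checked on complete assignments.

```python
def propagate_lower_bounds(lb, edges, assignment):
    # Edges (i, j, w) encode x_j >= x_i + w; propagate lower bounds
    # from assigned values (the paper updates old bounds incrementally).
    lb = lb[:]
    for i, j, w in edges:
        base = assignment[i] if assignment[i] is not None else lb[i]
        lb[j] = max(lb[j], base + w)
    return lb

def feasible(n, lb, ub, edges, general_ok, assignment=None):
    # Depth-first search over integer assignments; each branch starts
    # from the propagated lower bound, and the general (non-simple)
    # constraints are checked on complete assignments.
    assignment = assignment or [None] * n
    if None not in assignment:
        return list(assignment) if general_ok(assignment) else None
    k = assignment.index(None)
    bounds = propagate_lower_bounds(lb, edges, assignment)
    for v in range(bounds[k], ub[k] + 1):
        assignment[k] = v
        result = feasible(n, lb, ub, edges, general_ok, assignment)
        if result is not None:
            return result
        assignment[k] = None
    return None

# Example: x0, x1, x2 in [0, 5] with x1 >= x0 + 2 and x2 >= x1 + 1,
# plus the general constraint 3*x0 + x1 + x2 <= 9.
print(feasible(3, [0, 0, 0], [5, 5, 5],
               [(0, 1, 2), (1, 2, 1)],
               lambda x: 3 * x[0] + x[1] + x[2] <= 9))  # -> [0, 2, 3]
```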
Abstract:
Following Ioffe's method of QCD sum rules, the structure functions F2(x) for deep inelastic ep and en scattering are calculated. Valence u-quark and d-quark distributions are obtained in the range 0.1 ≲ x ≲ 0.4 and compared with data. In the case of polarized targets, the structure function g1(x) and the corresponding asymmetry are calculated. The latter is in satisfactory agreement in sign and magnitude with experiments for x in the range 0.1 < x < 0.4.
Abstract:
Robust methods are useful for making reliable statistical inferences when there are small deviations from the model assumptions. The widely used method of generalized estimating equations can be "robustified" by replacing the standardized residuals with M-residuals. Although the Pearson residuals have mean zero, parameter estimators from the robust approach are asymptotically biased when the error distributions are not symmetric. We propose a distribution-free method for correcting this bias. Our extensive numerical studies show that the proposed method can reduce the bias substantially. Examples are given for illustration.
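The paper's setting is generalized estimating equations with clustered data; the toy sketch below shows only the core phenomenon in a plain regression. With asymmetric errors, the expected Huber ψ-residual is nonzero, biasing the naive robust estimator; recentring the ψ-residuals by their sample mean (a distribution-free correction in the same spirit as, but not identical to, the paper's proposal) removes most of the bias. All data and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def huber_psi(r, k=1.345):
    # Huber's psi: identity for small residuals, clipped beyond k.
    return np.clip(r, -k, k)

# Skewed mean-zero errors: shifted exponential, so E[psi(e)] != 0.
n = 5000
x = rng.uniform(0.0, 2.0, n)
e = rng.exponential(1.0, n) - 1.0
y = 2.0 * x + e  # true slope 2.0

def fit_slope(correct_bias, iters=300, step=0.1):
    b = 0.0
    for _ in range(iters):
        psi = huber_psi(y - b * x)
        if correct_bias:
            psi = psi - psi.mean()  # recentre the psi-residuals
        b += step * (psi * x).mean()  # estimating-equation update
    return b

print("naive robust slope  :", round(fit_slope(False), 3))  # drifts below 2.0
print("bias-corrected slope:", round(fit_slope(True), 3))   # close to 2.0
```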
Abstract:
A decision-theoretic framework is proposed for designing sequential dose-finding trials with multiple outcomes. The optimal strategy is solvable in principle via backward induction. However, for dose-finding studies involving k doses, the computational complexity is the same as that of a bandit problem with k dependent arms, which is computationally prohibitive. We therefore provide two computationally compromised strategies, which are of practical interest because the computational complexity is greatly reduced: one is closely related to the continual reassessment method (CRM), and the other improves on CRM and approximates the optimal strategy more closely. In particular, we present the framework for phase I/II trials with multiple outcomes. Applications to a pediatric HIV trial and a cancer chemotherapy trial are given to illustrate the proposed approach. Simulation results for the two trials show that the computationally compromised strategies can perform well and appear to be ethical for allocating patients. The proposed framework can provide a better approximation to the optimal strategy if more extensive computing is available.
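To illustrate why exact backward induction is prohibitive, here is a minimal sketch for a two-armed Bernoulli bandit with Beta(1,1) priors over a short horizon, a stand-in for the dose-finding problem (the horizon, priors, and success objective are assumptions). The state space grows combinatorially with the number of arms/doses, which is what motivates the CRM-like look-ahead approximations.

```python
from functools import lru_cache

# Exact backward induction for a two-armed Bernoulli bandit with
# Beta(1,1) priors; maximizes the expected number of successes.
HORIZON = 10

@lru_cache(maxsize=None)
def value(t, s1, f1, s2, f2):
    # State: pulls used t and (successes, failures) for each arm.
    if t == HORIZON:
        return 0.0
    p1 = (s1 + 1) / (s1 + f1 + 2)  # posterior mean of arm 1
    p2 = (s2 + 1) / (s2 + f2 + 2)  # posterior mean of arm 2
    v1 = p1 * (1 + value(t + 1, s1 + 1, f1, s2, f2)) + \
         (1 - p1) * value(t + 1, s1, f1 + 1, s2, f2)
    v2 = p2 * (1 + value(t + 1, s1, f1, s2 + 1, f2)) + \
         (1 - p2) * value(t + 1, s1, f1, s2, f2 + 1)
    return max(v1, v2)

print("optimal expected successes:", round(value(0, 0, 0, 0, 0), 3))
# With k arms the state holds k pairs of counts, so the table (and the
# computation) blows up combinatorially -- hence the approximations.
```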
Abstract:
The primary goal of a phase I trial is to find the maximally tolerated dose (MTD) of a treatment. The MTD is usually defined in terms of a tolerable probability, q*, of toxicity. Our objective is to find the highest dose with toxicity risk that does not exceed q*, a criterion that is often desired in designing phase I trials. This criterion differs from that of finding the dose with toxicity risk closest to q*, which is used in methods such as the continual reassessment method. We use the theory of decision processes to find optimal sequential designs that maximize the expected number of patients within the trial allocated to the highest dose with toxicity not exceeding q*, among the doses under consideration. The proposed method is very general in the sense that criteria other than the one considered here can be optimized and that optimal dose assignment can be defined in terms of patients within or outside the trial. It includes the continual reassessment method as an important special case. A numerical study indicates that the strategy compares favourably with other phase I designs.
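Here is a small sketch of the selection criterion itself, not the optimal sequential design (which requires the decision-process machinery): under independent Beta posteriors per dose, pick the highest dose whose posterior probability of exceeding q* is small. The posteriors, the 0.5 risk cut-off, and the dose data below are assumptions for illustration.

```python
from scipy.stats import beta

Q_STAR = 0.30  # tolerable toxicity probability q*
# Hypothetical Beta posterior parameters (toxicities+1, non-toxicities+1)
# for four increasing doses.
post = [(1, 9), (2, 8), (3, 7), (6, 4)]

def target_dose(post, q_star, risk=0.5):
    # Highest dose whose posterior probability of toxicity exceeding
    # q* stays below 'risk' (an assumed admissibility cut-off).
    best = None
    for d, (a, b) in enumerate(post):
        if beta.sf(q_star, a, b) < risk:
            best = d
    return best

print("recommended dose index:", target_dose(post, Q_STAR))  # -> 2
```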
Abstract:
We report the results of two studies of aspects of the consistency of theories of freezing based on truncated nonlinear integral equations: (i) We show that the self-consistent solutions to these nonlinear equations are unfortunately sensitive to the level of truncation. For the hard-sphere system, if the Wertheim–Thiele representation of the pair direct correlation function is used, the inclusion of part but not all of the triplet direct correlation function contribution, as has been common, worsens the predictions considerably. We also show that the convergence of the solutions found, with respect to the number of reciprocal-lattice vectors kept in the Fourier expansion of the crystal singlet density, is slow. These conclusions imply great sensitivity to the quality of the pair direct correlation function employed in the theory. (ii) We show that the direct correlation function based and the pair correlation function based theories of freezing can be cast into a form which requires the solution of isomorphous nonlinear integral equations. However, in the pair correlation function theory, the usual neglect of the influence of inhomogeneity of the density distribution on the pair correlation function is shown to be inconsistent to the lowest order in the change of density on freezing, and to lead to erroneous predictions.
Abstract:
C19H24N2O3·xH2O, Mr = 347.5, monoclinic, C2, a = 15.473 (3), b = 6.963 (2), c = 20.708 (4) Å, β = 108.2 (2)°, V = 2119 (2) Å3, Z = 4, Dx = 1.089 Mg m−3, λ(Cu Kα) = 1.5418 Å, μ = 0.523 mm−1, F(000) = 760.0, T = 293 K, R = 0.068 for 1967 unique reflections. The C=C bond length is 1.447 (6) Å, significantly longer than in ethylene, 1.336 (2) Å. The crystal structure is stabilized by O—H⋯O hydrogen bonding. An explanation for the observed low second-harmonic-generation efficiency (0.5 times that of urea) is provided.
Abstract:
Stallard (1998, Biometrics 54, 279-294) recently used Bayesian decision theory for sample-size determination in phase II trials. His design maximizes the expected financial gains in the development of a new treatment. However, it results in a very high probability (0.65) of recommending an ineffective treatment for phase III testing. On the other hand, the expected gain using his design is more than 10 times that of a design that tightly controls the false-positive error (Thall and Simon, 1994, Biometrics 50, 337-349). Stallard's design maximizes the expected gain per phase II trial, but it does not maximize the rate of gain or the total gain over a fixed length of time, because the rate of gain depends on the proportion of treatments forwarded to the phase III study. We suggest maximizing the rate of gain instead, and the resulting optimal one-stage design is twice as efficient as Stallard's one-stage design. Furthermore, the new design has a probability of only 0.12 of passing an ineffective treatment to phase III study.
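A toy renewal-reward comparison of "gain per trial" versus "rate of gain" is sketched below: the forwarding probabilities 0.65 and 0.12 come from the abstract, but the gains and trial durations are invented for illustration only.

```python
# Renewal-reward comparison: expected gain per unit time, where a
# treatment forwarded to phase III also consumes phase III time.
def gain_rate(p_forward, gain_per_trial, t_phase2=1.0, t_phase3=4.0):
    expected_time = t_phase2 + p_forward * t_phase3
    return gain_per_trial / expected_time

# Design A: larger per-trial gain, but forwards 65% of treatments.
# Design B: smaller per-trial gain, forwards only 12%.
print("rate of gain, design A:", gain_rate(0.65, 10.0))  # ~2.78
print("rate of gain, design B:", gain_rate(0.12, 6.0))   # ~4.05
```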
Abstract:
The minimum-cost classifier, when general cost functions are associated with the tasks of feature measurement and classification, is formulated as a decision graph which does not reject class labels at intermediate stages. Noting its complexity, a heuristic procedure to simplify this scheme to a binary decision tree is presented. The optimization of the binary tree in this context is carried out using dynamic programming. This technique is applied to voiced-unvoiced-silence classification in speech processing.
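A minimal dynamic-programming sketch for a cost-optimal binary decision tree follows: each internal node pays a feature-measurement cost, and each leaf pays a misclassification cost. The features, costs, and four-example dataset are hypothetical, standing in for the voiced-unvoiced-silence problem.

```python
from functools import lru_cache

# Toy cost-optimal binary decision tree via dynamic programming.
FEATURES = {"zero_cross": 1.0, "energy": 2.0}  # measurement costs
MISS_COST = 3.0                                # cost per misclassified example
DATA = [  # (binary feature values, class label) -- hypothetical
    ({"zero_cross": 0, "energy": 0}, "silence"),
    ({"zero_cross": 0, "energy": 1}, "voiced"),
    ({"zero_cross": 1, "energy": 1}, "unvoiced"),
    ({"zero_cross": 1, "energy": 0}, "silence"),
]

@lru_cache(maxsize=None)
def best_cost(examples, avail):
    # Minimum cost for this example set: either stop (a leaf labelled
    # with the majority class) or pay to measure a feature and solve
    # the two induced subproblems.
    labels = [DATA[i][1] for i in examples]
    leaf = MISS_COST * (len(labels) - max(labels.count(c) for c in set(labels)))
    best = leaf
    for f in sorted(avail):
        zero = frozenset(i for i in examples if DATA[i][0][f] == 0)
        one = examples - zero
        if not zero or not one:
            continue  # measurement would not split the examples
        split = FEATURES[f] + best_cost(zero, avail - {f}) + best_cost(one, avail - {f})
        best = min(best, split)
    return best

print(best_cost(frozenset(range(len(DATA))), frozenset(FEATURES)))  # -> 3.0
```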
Abstract:
We study the renormalization group flows of the two-terminal conductance of a superconducting junction of two Luttinger liquid wires. We compute the power laws associated with the renormalization group flow around the various fixed points of this system, using the generators of the SU(4) group to generate the appropriate parametrization of an S matrix representing small deviations from a given fixed-point S matrix [obtained earlier in S. Das, S. Rao, and A. Saha, Phys. Rev. B 77, 155418 (2008)], and we then perform a comprehensive stability analysis. In particular, for the nontrivial fixed point, which has intermediate values of transmission, reflection, Andreev reflection, and crossed Andreev reflection, we show that there are eleven independent directions in which the system can be perturbed that are relevant or irrelevant, and five directions that are marginal. We obtain power laws associated with these relevant and irrelevant perturbations. Unlike the case of the two-wire charge-conserving junction, here we show that there are power laws which are nonlinear functions of V(0) and V(2kF) [where V(k) represents the Fourier transform of the interelectron interaction potential at momentum k]. We also obtain the power-law dependence of the linear response conductance on voltage bias or temperature around this fixed point.