875 results for Parametrized interval
Abstract:
This paper presents an off-line (finite time interval) and on-line learning direct adaptive neural controller for an unstable helicopter. The neural controller is designed to track a pitch rate command signal generated using a reference model. A helicopter having a soft inplane four-bladed hingeless main rotor and a four-bladed tail rotor with conventional mechanical controls is used for the simulation studies. For the simulation study, a linearized helicopter model at different straight and level flight conditions is considered. A neural network with a linear filter architecture, trained using backpropagation through time, is used to approximate the control law. The controller network parameters are adapted using update rules derived through Lyapunov synthesis. The off-line trained (for a finite time interval) network provides the necessary stability and tracking performance. On-line learning is used to adapt the network under varying flight conditions, and this ability is demonstrated through parameter uncertainties. The performance of the proposed direct adaptive neural controller (DANC) is compared with that of a feedback error learning neural controller (FENC).
Abstract:
A direct method of solution is presented for singular integral equations of the first kind, involving the combination of a logarithmic and a Cauchy-type singularity. Two typical cases are considered: in one, the range of integration is a single finite interval and, in the other, the range of integration is a union of disjoint finite intervals. More general equations of this kind, associated with a finite number (greater than two) of finite, disjoint intervals, can also be handled by the technique employed here.
Abstract:
Objectives Funding for early career researchers in Australia's largest medical research funding scheme is determined by a competitive peer-review process using a panel of four reviewers. The purpose of this experiment was to appraise the reliability of funding by duplicating applications that were considered by separate grant review panels. Study Design and Methods Sixty duplicate applications were considered by two independent grant review panels that were awarding funding for Australia's National Health and Medical Research Council. Panel members were blinded to which applications were included in the experiment and to whether it was the original or duplicate application. Scores were compared across panels using Bland–Altman plots to determine measures of agreement, including whether agreement would have impacted on actual funding. Results Twenty-three percent of the applicants were funded by both panels and 60 percent were not funded by both, giving an overall agreement of 83 percent [95% confidence interval (CI): 73%, 92%]. The chance-adjusted agreement was 0.75 (95% CI: 0.58, 0.92). Conclusion There was a comparatively high level of agreement when compared with other types of funding schemes. Further experimental research could be used to determine if this higher agreement is due to the nature of the application, the composition of the assessment panel, or the characteristics of the applicants.
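The agreement statistics above can be sketched in a few lines. This is a minimal illustration, not the study's analysis: the split of the discordant decisions is a made-up assumption (the abstract gives only the concordant percentages), and the kappa computed here is Cohen's kappa, which may differ from the chance-adjusted measure the authors actually used.

```python
def agreement_and_kappa(both_yes, both_no, yes_no, no_yes):
    """Observed agreement and Cohen's kappa from a 2x2 decision table.

    both_yes: funded by both panels; both_no: funded by neither;
    yes_no / no_yes: discordant decisions (panel 1 vs. panel 2).
    """
    n = both_yes + both_no + yes_no + no_yes
    p_obs = (both_yes + both_no) / n                 # observed agreement
    p1_yes = (both_yes + yes_no) / n                 # panel 1 funding rate
    p2_yes = (both_yes + no_yes) / n                 # panel 2 funding rate
    p_exp = p1_yes * p2_yes + (1 - p1_yes) * (1 - p2_yes)  # chance agreement
    kappa = (p_obs - p_exp) / (1 - p_exp)
    return p_obs, kappa

# Hypothetical counts for 60 duplicated applications:
# 14 funded by both (~23%), 36 by neither (60%), discordant split assumed 5/5.
p_obs, kappa = agreement_and_kappa(14, 36, 5, 5)
```

With these assumed counts the observed agreement is 50/60 ≈ 83%, matching the abstract; the kappa value depends on the assumed discordant split.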
Abstract:
A unit cube in k dimensions (or a k-cube) is defined as the Cartesian product R₁ × R₂ × ... × R_k, where each R_i is a closed interval on the real line of the form [a_i, a_i + 1]. The cubicity of G, denoted cub(G), is the minimum k such that G is the intersection graph of a collection of k-cubes. Many NP-complete graph problems can be solved efficiently or have good approximation ratios in graphs of low cubicity. In most of these cases the first step is to get a low-dimensional cube representation of the given graph. It is known that for a graph G, cub(G) ≤ ⌊2n/3⌋. Recently it has been shown that for a graph G, cub(G) ≤ 4(Δ + 1) ln n, where n and Δ are the number of vertices and maximum degree of G, respectively. In this paper, we show that for a bipartite graph G = (A ∪ B, E) with |A| = n₁, |B| = n₂, n₁ ≤ n₂, and Δ′ = min{Δ_A, Δ_B}, where Δ_A = max_{a ∈ A} d(a) and Δ_B = max_{b ∈ B} d(b), d(a) and d(b) being the degrees of a and b in G, respectively, cub(G) ≤ 2(Δ′ + 2)⌈ln n₂⌉. We also give an efficient randomized algorithm to construct the cube representation of G in 3(Δ′ + 2)⌈ln n₂⌉ dimensions. The reader may note that in general Δ′ can be much smaller than Δ.
Abstract:
An axis-parallel b-dimensional box is a Cartesian product R₁ × R₂ × ... × R_b where each R_i (for 1 ≤ i ≤ b) is a closed interval of the form [a_i, b_i] on the real line. The boxicity of a graph G, box(G), is the minimum positive integer b such that G can be represented as the intersection graph of axis-parallel b-dimensional boxes. A b-dimensional cube is a Cartesian product R₁ × R₂ × ... × R_b, where each R_i (for 1 ≤ i ≤ b) is a closed interval of the form [a_i, a_i + 1] on the real line. When the boxes are restricted to be axis-parallel cubes in b dimensions, the minimum dimension b required to represent the graph is called the cubicity of the graph, denoted cub(G). In this paper we prove that cub(G) ≤ ⌈log₂ n⌉ box(G), where n is the number of vertices in the graph. We also show that this upper bound is tight. Some immediate consequences of the above result are listed below: 1. Planar graphs have cubicity at most 3⌈log₂ n⌉. 2. Outerplanar graphs have cubicity at most 2⌈log₂ n⌉. 3. Any graph of treewidth tw has cubicity at most (tw + 2)⌈log₂ n⌉. Thus, chordal graphs have cubicity at most (ω + 1)⌈log₂ n⌉ and circular-arc graphs have cubicity at most (2ω + 1)⌈log₂ n⌉, where ω is the clique number.
Abstract:
Background: High-resolution magnetic resonance (MR) imaging has been used for MR imaging-based structural stress analysis of atherosclerotic plaques. The biomechanical stress profile of stable plaques has been observed to differ from that of unstable plaques; however, the role that structural stresses play in determining plaque vulnerability remains speculative. Methods: A total of 61 patients with a previous history of symptomatic carotid artery disease underwent carotid plaque MR imaging. Plaque components of the index artery such as fibrous tissue, lipid content and plaque haemorrhage (PH) were delineated and used for finite element analysis-based maximum structural stress (M-C Stress) quantification. These patients were followed up for 2 years. The clinical end point was occurrence of an ischaemic cerebrovascular event. The association of the time to the clinical end point with plaque morphology and M-C Stress was analysed. Results: During a median follow-up duration of 514 days, 20% of patients (n = 12) experienced an ischaemic event in the territory of the index carotid artery. Cox regression analysis indicated that M-C Stress (hazard ratio (HR): 12.98 (95% confidence interval (CI): 1.32–26.67), p = 0.02), fibrous cap (FC) disruption (HR: 7.39 (95% CI: 1.61–33.82), p = 0.009) and PH (HR: 5.85 (95% CI: 1.27–26.77), p = 0.02) are associated with the development of subsequent cerebrovascular events. Plaques associated with future events had higher M-C Stress than those which had remained asymptomatic (median (interquartile range, IQR): 330 kPa (229–494) vs. 254 kPa (166–290), p = 0.04). Conclusions: High biomechanical structural stresses, in addition to FC rupture and PH, are associated with subsequent cerebrovascular events.
Abstract:
In analysis of longitudinal data, the variance matrix of the parameter estimates is usually estimated by the 'sandwich' method, in which the variance for each subject is estimated by its residual products. We propose smooth bootstrap methods by perturbing the estimating functions to obtain 'bootstrapped' realizations of the parameter estimates for statistical inference. Our extensive simulation studies indicate that the variance estimators by our proposed methods can not only correct the bias of the sandwich estimator but also improve the confidence interval coverage. We applied the proposed method to a data set from a clinical trial of antibiotics for leprosy.
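The perturbation idea can be sketched in a toy setting, far simpler than the paper's longitudinal one: the estimating equation for a sample mean, Σᵢ wᵢ(xᵢ − μ) = 0, is perturbed with i.i.d. positive random weights, and the spread of the resulting solutions estimates the standard error. The function name and the Exp(1) weight choice are illustrative assumptions, not the authors' specification.

```python
import random
import statistics

def perturbation_bootstrap_se(data, n_boot=2000, seed=0):
    """Estimate the SE of the sample mean by perturbing its estimating
    equation sum_i w_i * (x_i - mu) = 0 with i.i.d. Exp(1) weights
    (mean 1, variance 1). Each replicate solves for mu in closed form
    as a weighted mean; the SD over replicates estimates the SE.
    """
    rng = random.Random(seed)
    reps = []
    for _ in range(n_boot):
        w = [rng.expovariate(1.0) for _ in data]
        reps.append(sum(wi * xi for wi, xi in zip(w, data)) / sum(w))
    return statistics.stdev(reps)
```

For data with standard deviation s and n observations, the result should be close to the usual s/√n; in the paper's setting the perturbed solutions come from the full estimating functions rather than a closed-form weighted mean.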
Abstract:
Following the method of Ioffe and Smilga, the propagation of the baryon current in an external constant axial-vector field is considered. The close similarity of the operator-product expansion with and without an external field is shown to arise from the chiral invariance of gauge interactions in perturbation theory. Several sum rules corresponding to various invariants both for the nucleon and the hyperons are derived. The analysis of the sum rules is carried out by two independent methods, one called the ratio method and the other called the continuum method, paying special attention to the nondiagonal transitions induced by the external field between the ground state and excited states. Up to operators of dimension six, two new external-field-induced vacuum expectation values enter the calculations. Previous work determining these expectation values from PCAC (partial conservation of axial-vector current) is utilized. Our determination from the sum rules of the nucleon axial-vector renormalization constant GA, as well as the Cabibbo coupling constants in the SU3-symmetric limit (ms = 0), is in reasonable accord with the experimental values. Uncertainties in the analysis are pointed out. The case of broken flavor SU3 symmetry is also considered. While in the ratio method the results are stable under variation of the fiducial interval of the Borel mass parameter over which the left-hand side and the right-hand side of the sum rules are matched, in the continuum method the results are less stable. Another set of sum rules determines the value of the linear combination 7F − 5D to be ≈ 0, or D/(F + D) ≈ 7/12.
Abstract:
Spin-state equilibria in the whole set of LCoO3 (where L stands for a rare-earth metal or Y) have been investigated with the use of 59Co NMR as a probe for the polycrystalline samples (except Ce) in the temperature interval 110–550 K and frequency range 3–11.6 MHz. Besides confirming the coexistence of the high-spin—low-spin state in this temperature range, a quadrupolar interaction of ∼0.1–0.5 MHz has been detected for the first time from 59Co NMR. The NMR line shape is found to depend strongly on the relative magnitude of the magnetic and quadrupolar interactions present. Analysis of the powder pattern reveals two basically different types of transferred hyperfine interaction between the lighter and heavier members of the rare-earth series. The first three members of the lighter rare-earth metals, La, Pr (rhombohedral), and Nd (tetragonal), exhibit second-order quadrupolar interaction with a zero asymmetry parameter at lower temperatures. Above a critical temperature TS (dependent on the size of the rare-earth ion), the quadrupolar interaction becomes temperature dependent and eventually gives rise to a first-order interaction, thus indicating a possible second-order phase change. Sm and Eu (orthorhombic) also exhibit a second-order quadrupolar interaction, with a nonzero asymmetry parameter (η ∼ 0.47) at 300 K, while the orthorhombic second-half members (Dy,..., Lu and Y) exhibit first-order quadrupolar interaction at all temperatures. Normal paramagnetic behavior, i.e., a linear variation of Kiso with T⁻¹, has been observed in the heavier rare-earth cobaltites (Er,..., Lu and Y), whereas an anomalous variation has been observed in (La,..., Nd)CoO3. Thus, Kiso increases with increasing temperature in PrCoO3 and NdCoO3. These observations corroborate the model of the spin-state equilibria in LCoO3 originally proposed by Raccah and Goodenough.
A high-spin—low-spin ratio, r=1, can be stabilized in the perovskite structure by a cooperative displacement of the oxygen atoms from the high-spin towards the low-spin cation. Where this ordering into high- and low-spin sublattices occurs at r=1, one can anticipate equivalent displacement of all near-neighbor oxygen atoms towards a low-spin cobalt ion. Thus the heavier LCoO3 exhibits a small temperature-independent first-order quadrupolar interaction. Where r<1, the high- and low-spin states are disordered, giving rise to a temperature-dependent second-order quadrupolar interaction with an anomalous Kiso for the lighter LCoO3.
Abstract:
A k-dimensional box is the Cartesian product R₁ × R₂ × ... × R_k where each R_i is a closed interval on the real line. The boxicity of a graph G, denoted box(G), is the minimum integer k such that G is the intersection graph of a collection of k-dimensional boxes. A unit cube in k-dimensional space, or a k-cube, is defined as the Cartesian product R₁ × R₂ × ... × R_k where each R_i is a closed interval on the real line of the form [a_i, a_i + 1]. The cubicity of G, denoted cub(G), is the minimum k such that G is the intersection graph of a collection of k-cubes. In this paper we show that cub(G) ≤ t + ⌈log(n − t)⌉ − 1 and box(G) ≤ ⌊t/2⌋ + 1, where t is the cardinality of a minimum vertex cover of G and n is the number of vertices of G. We also show the tightness of these upper bounds. F. S. Roberts, in his pioneering paper on boxicity and cubicity, showed that for a graph G, box(G) ≤ ⌊n/2⌋ and cub(G) ≤ ⌈2n/3⌉, where n is the number of vertices of G, and these bounds are tight. We show that if G is a bipartite graph then box(G) ≤ ⌈n/4⌉ and this bound is tight. We also show that if G is a bipartite graph then cub(G) ≤ n/2 + ⌈log n⌉ − 1. We point out that there exist graphs of very high boxicity but with very low chromatic number. For example, there exist bipartite (i.e., 2-colorable) graphs with boxicity equal to n/4. Interestingly, if the boxicity is very close to n/2, then the chromatic number also has to be very high. In particular, we show that if box(G) = n/2 − s, s ≥ 0, then χ(G) ≥ n/(2s + 2), where χ(G) is the chromatic number of G.
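The cube-representation idea underlying these abstracts is easy to state concretely: two unit k-cubes with left endpoints a and b intersect exactly when |a_i − b_i| ≤ 1 in every dimension. The sketch below recovers a graph from a given k-cube representation; it is an illustration of the definition, not any of the papers' construction algorithms, and the function name is an assumption.

```python
from itertools import combinations

def kcube_intersection_graph(cubes):
    """Edges of the intersection graph of unit k-cubes.

    cubes: list of k-tuples of left endpoints; cube i is the product
    of intervals [a_1, a_1 + 1] x ... x [a_k, a_k + 1].
    Two closed unit intervals [a, a+1] and [b, b+1] intersect
    iff |a - b| <= 1, and k-cubes intersect iff this holds in
    every dimension.
    """
    edges = set()
    for (u, cu), (v, cv) in combinations(enumerate(cubes), 2):
        if all(abs(x - y) <= 1 for x, y in zip(cu, cv)):
            edges.add((u, v))
    return edges
```

For example, cubes anchored at (0, 0), (0.5, 0.5) and (3, 3) in 2 dimensions yield a single edge between the first two vertices, so this 2-cube representation realizes a path plus an isolated vertex.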
Abstract:
Background. In several studies the sudden infant death syndrome (SIDS) has been significantly associated with sleeping in the prone position. It is not known how the prone position increases the risk of SIDS. Methods. We analyzed data from a case-control study (58 infants with SIDS and 120 control infants) and a prospective cohort study (22 infants with SIDS and 213 control infants) in Tasmania. Interactions were examined in matched analyses with a multiplicative model of interaction. Results. In the case-control study, SIDS was significantly associated with sleeping in the prone position, as compared with other positions (unadjusted odds ratio, 4.5; 95 percent confidence interval, 2.1 to 9.6). The strength of this association was increased among infants who slept on natural-fiber mattresses (P = 0.05), infants who were swaddled (P = 0.09), infants who slept in heated rooms (P = 0.006), and infants who had had a recent illness (P = 0.02). These variables had no significant effect on infants who did not sleep in the prone position. A history of recent illness was significantly associated with SIDS among infants who slept prone (odds ratio, 5.7; 95 percent confidence interval, 1.8 to 19) but not among infants who slept in other positions (odds ratio, 0.83). In the cohort study, the risk of SIDS was greater among infants who slept prone on natural-fiber mattresses (odds ratio, 6.6; 95 percent confidence interval, 1.3 to 33) than among infants who slept prone on other types of mattresses (odds ratio, 1.8). Conclusions. When infants sleep prone, the elevated risk of SIDS is increased by each of four factors: the use of natural-fiber mattresses, swaddling, recent illness, and the use of heating in bedrooms.
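An unadjusted odds ratio like those quoted above can be computed from a 2×2 table; the sketch below uses the standard Woolf logit-interval method for an unmatched table, which is not necessarily the matched analysis the study performed, and the example counts are made up.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Woolf 95% CI from a 2x2 table:
    a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls.
    The CI is exp(log(OR) +/- z * SE), with
    SE = sqrt(1/a + 1/b + 1/c + 1/d) on the log scale.
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: 10 prone-sleeping cases, 5 prone controls,
# 20 non-prone cases, 40 non-prone controls -> OR = 4.0.
or_, lo, hi = odds_ratio_ci(10, 5, 20, 40)
```

The interval is asymmetric around the point estimate because it is symmetric on the log scale, which is why published CIs such as 2.1 to 9.6 are wider above the estimate than below it.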
Abstract:
The ultraviolet bands of mercury bromide have been excited in an uncondensed discharge and photographed with a quartz Littrow spectrograph. The class II system, lying between λ2900 Å and λ2700 Å, suggested by Wieland to be due to the triatomic molecule, has been studied in detail and ascribed to the diatomic molecule. The bands in the region λ2900 Å to λ2770 Å have been analysed into two systems which may form the two components of a ²Π–²Σ electronic transition with a ²Π interval equal to 969.4 cm⁻¹. Another system, most probably due to a ²Σ–²Σ transition, has been observed in the region λ2770 Å to λ2720 Å.
Abstract:
This study investigates the significance of art in Jean-Luc Nancy's philosophy. I argue that the notion of art contributes to some of Nancy's central ontological ideas. Therefore, I consider art's importance in its own right: whether art does have ontological significance, and if so, how one should describe this with respect to the theme of presentation. According to my central argument, with his thinking on art Nancy attempts to give one viewpoint on what is called the metaphysics of presence and on its deconstruction. On which grounds, as I propose, may one say that art is not reducible to philosophy? The thesis is divided into two main parts. The first part, Presentation as a Philosophical Theme, is a historical genesis of the central concepts associated with the birth of presentation in Nancy's philosophy. I examine this from the viewpoint of the differentiation between the ontological notions of presentation and representation, concentrating on the influence of Martin Heidegger and Jacques Derrida, as well as of Hegel and Kant. I give an overview of the way in which being or sense for Nancy is to be described as a coming-into-presence or 'presentation'. Therefore, being takes place in its singular plurality. I argue that Nancy redevelops Heidegger's account of being in two principal ways: first, in rethinking the ontico-ontological difference, and secondly, by striving to radicalize the Heideggerian concept of Mitsein, 'being-with'. I equally wish to show the importance of Derrida's notion of différance and its inherence in Nancy's questioning of being that rests on the unfoundedness of existence. The second part, From Ontology to Art, draws on the importance of art and the aesthetic. If, in Nancy, the question of art touches upon its own limit as the limit of nothingness, how is art able to open its own strangeness and our exposure to this strangeness?
My aim is to investigate how Nancy's thinking on art finds its place within the conceptual realm of its inherent difference and interval. My central concern is the thought of originary ungroundedness and the plurality of art and of the arts. As for the question of the difference between art and philosophy, I wish to show that what differentiates art from thought is the fact that art exposes what is obvious but not apparent, if apparent is understood in the sense of givenness. As for art's ability to deconstruct Nancy's ontological notions, I suggest that what is in question in art is its original heterogeneity and diversity. Art is a matter of differing: art occurs singularly, as a local difference. With this in mind, I point out that in reflecting on art in terms of spacing and interval, as a thinker of difference Nancy comes closer to Derrida and his idea of différance than to the structure of Heidegger's ontological difference.
Abstract:
Consumer risk assessment is a crucial step in the regulatory approval of pesticide use on food crops. Recently, an additional hurdle has been added to the formal consumer risk assessment process with the introduction of short-term intake or exposure assessment and a comparable short-term toxicity reference, the acute reference dose. Exposure to residues during one meal or over one day is important for short-term or acute intake. Exposure in the short term can be substantially higher than average because the consumption of a food on a single occasion can be very large compared with typical long-term or mean consumption and the food may have a much larger residue than average. Furthermore, the residue level in a single unit of a fruit or vegetable may be higher by a factor (defined as the variability factor, which we have shown to be typically ×3 for the 97.5th percentile unit) than the average residue in the lot. Available marketplace data and supervised residue trial data are examined in an investigation of the variability of residues in units of fruit and vegetables. A method is described for estimating the 97.5th percentile value from sets of unit residue data. Variability appears to be generally independent of the pesticide, the crop, crop unit size and the residue level. The deposition of pesticide on the individual unit during application is probably the most significant factor. The diets used in the calculations ideally come from individual and household surveys with enough consumers of each specific food to determine large portion sizes. The diets should distinguish the different forms of a food consumed, eg canned, frozen or fresh, because the residue levels associated with the different forms may be quite different. Dietary intakes may be calculated by a deterministic method or a probabilistic method. 
In the deterministic method the intake is estimated with the assumptions of large portion consumption of a ‘high residue’ food (high residue in the sense that the pesticide was used at the highest recommended label rate, the crop was harvested at the smallest interval after treatment and the residue in the edible portion was the highest found in any of the supervised trials in line with these use conditions). The deterministic calculation also includes a variability factor for those foods consumed as units (eg apples, carrots) to allow for the elevated residue in some single units which may not be seen in composited samples. In the probabilistic method the distribution of dietary consumption and the distribution of possible residues are combined in repeated probabilistic calculations to yield a distribution of possible residue intakes. Additional information such as percentage commodity treated and combination of residues from multiple commodities may be incorporated into probabilistic calculations. The IUPAC Advisory Committee on Crop Protection Chemistry has made 11 recommendations relating to acute dietary exposure.
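The deterministic calculation described above multiplies a large portion by the highest trial residue and, for unit-consumed foods, a variability factor, then scales by body weight. The sketch below is a simplified illustration of that arithmetic under assumed inputs; the function name is hypothetical, and real short-term intake assessments (e.g. the IESTI equations) distinguish several cases by unit weight and portion size that are omitted here.

```python
def deterministic_acute_intake(large_portion_g, highest_residue_mg_per_kg,
                               variability_factor, body_weight_kg):
    """Simplified deterministic short-term intake for a unit-consumed food.

    intake (mg/kg bw) = LP * HR * v / bw, where
      LP = large portion (converted from g to kg),
      HR = highest residue from supervised trials (mg/kg),
      v  = variability factor for single units (typically ~3
           for the 97.5th percentile unit, per the text),
      bw = consumer body weight (kg).
    The result is compared against the acute reference dose.
    """
    lp_kg = large_portion_g / 1000.0
    return lp_kg * highest_residue_mg_per_kg * variability_factor / body_weight_kg

# Assumed example: 200 g portion, 0.5 mg/kg highest residue,
# variability factor 3, 60 kg adult -> 0.005 mg/kg bw.
intake = deterministic_acute_intake(200, 0.5, 3, 60)
```

A probabilistic assessment would instead draw portion sizes and residues from their distributions and repeat the same arithmetic many times to obtain a distribution of intakes.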
Abstract:
Analogue and digital techniques for linearization of the non-linear input-output relationship of transducers are briefly reviewed. The condition required for linearizing a non-linear function y = f(x) using a non-linear analogue-to-digital converter is explained. A simple technique to construct a non-linear digital-to-analogue converter, based on 'segments of equal digital interval', is described. The technique was used to build an N-DAC which can be employed in a successive approximation or counter-ramp type ADC to linearize the non-linear transfer function of a thermistor-resistor combination. The possibility of achieving an order of magnitude higher accuracy in the measurement of temperature is shown.
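The 'segments of equal digital interval' scheme can be sketched as a piecewise-linear lookup: the digital code range is split into equal-width segments, each with its own analogue breakpoint, and the output is interpolated within a segment. This is a behavioural model under assumed parameters (8 segments over an 8-bit code, made-up breakpoint values), not the paper's circuit.

```python
def nonlinear_dac(code, breakpoints, full_scale_codes=256, segments=8):
    """Piecewise-linear N-DAC model using segments of equal digital interval.

    The digital axis [0, full_scale_codes) is divided into `segments`
    equal intervals; `breakpoints` (length segments + 1) gives the
    analogue output at each segment boundary, and outputs inside a
    segment are linearly interpolated. Choosing breakpoints that track
    the inverse of a transducer's transfer function linearizes the
    overall code-to-quantity relationship.
    """
    seg_len = full_scale_codes // segments
    seg = min(code // seg_len, segments - 1)      # clamp top code to last segment
    frac = (code - seg * seg_len) / seg_len       # position within the segment
    return breakpoints[seg] + frac * (breakpoints[seg + 1] - breakpoints[seg])

# Hypothetical breakpoints approximating a convex (thermistor-like) curve:
bps = [0, 10, 30, 60, 100, 150, 210, 280, 360]
```

Used inside a successive-approximation or counter-ramp ADC loop, this DAC makes the converter's output code linear in the measured quantity even though the transducer itself is not.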