915 results for geometric reasoning
Abstract:
Serial reduction in scar thickness has been shown in animal models. We sought to determine whether this reduction in scar thickness may be a result of dilatation of the left ventricle (LV), with stretching and thinning of the wall. Contrast-enhanced magnetic resonance imaging (CMRI) was performed to delineate radial scar thickness in 25 patients (age 63±10, 21 men) after myocardial infarction. The LV was divided into 16 segments, and the absolute radial scar thickness (ST) and percentage of scar to total wall thickness (%ST) were measured. Regional end-diastolic (EDV) and end-systolic volumes (ESV) of corresponding segments were measured on CMRI. All patients underwent revascularization, and serial changes in ST, %ST, and regional volumes were assessed over a mean follow-up of 15±5 months. CMRI identified a total of 93 scar segments. An increase in EDV or ESV was associated with a serial reduction in ST (versus EDV, r = −0.3, p = 0.01; versus ESV, r = −0.3, p = 0.005) and %ST (versus EDV, r = −0.2, p = 0.04; versus ESV, r = −0.3, p = 0.001). For segments with an increase in EDV (group I) or ESV (group II) there was a significant decrease in ST and %ST, but in segments with stable EDV (group III) or ESV (group IV) there were no significant changes in ST and %ST (Table).
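As an illustration of the kind of per-segment analysis described above, here is a minimal sketch; the data values are made up and the helper names are ours, with scipy.stats.pearsonr the only library call assumed:

```python
# Minimal sketch of the per-segment analysis (hypothetical data).
from scipy.stats import pearsonr

def percent_st(scar_mm: float, wall_mm: float) -> float:
    """Percentage of scar to total wall thickness (%ST)."""
    return 100.0 * scar_mm / wall_mm

# Illustrative serial changes per scar segment (baseline -> follow-up).
st_change  = [-0.8, -0.5, 0.1, -1.2, -0.3]   # change in ST, mm
edv_change = [ 4.0,  2.5, -0.5,  6.0,  1.0]  # change in regional EDV, mL

# A negative correlation would mirror the reported finding that volume
# increases accompany serial reductions in scar thickness.
r, p = pearsonr(edv_change, st_change)
print(f"ST change vs EDV change: r = {r:.2f}, p = {p:.3f}")
print(f"%ST for a 3.2 mm scar in a 9.5 mm wall: {percent_st(3.2, 9.5):.1f}%")
```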
Abstract:
Back and von Wright have developed algebraic laws for reasoning about loops in the refinement calculus. We extend their work to reasoning about probabilistic loops in the probabilistic refinement calculus. We apply our algebraic reasoning to derive transformation rules for probabilistic action systems; in particular, we focus on developing data refinement rules for probabilistic action systems. Our extension is interesting because some well-known transformation rules that are applicable to standard programs are not applicable to probabilistic ones: we identify some of these important differences and develop alternative rules where possible. In particular, our probabilistic action system data refinement rules are new.
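For background, a sketch of the shape of algebraic law at stake: the classical loop unfolding law of the standard refinement calculus, which laws like the paper's probabilistic rules generalise (this particular law is textbook material, not one of the new rules):

```latex
% Classical loop unfolding law of the refinement calculus
% (background illustration; not one of the paper's new probabilistic rules):
\[
  \mathbf{do}\; g \rightarrow S\; \mathbf{od}
  \;=\;
  \mathbf{if}\; g \;\mathbf{then}\; \bigl( S \,;\, \mathbf{do}\; g \rightarrow S\; \mathbf{od} \bigr)
  \;\mathbf{else}\; \mathbf{skip}\; \mathbf{fi}
\]
```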
Abstract:
This paper incorporates hierarchical structure into the neoclassical theory of the firm. Firms are hierarchical in two respects: the organization of workers in production and the wage structure. The firm's hierarchy is represented as a sector of a circle, where the radius represents the hierarchy's height, the width of the sector represents the breadth of the hierarchy at a given height, and the angle of the sector represents the span of control of any given supervisor. A perfectly competitive firm then chooses height and width, as well as capital inputs, in order to maximize profit. We analyze the short-run and long-run impact of changes in scale economies, input substitutability, and input and output prices on the firm's hierarchical structure. We find that the firm unambiguously becomes more hierarchical as the specialization of its workers increases or as its output price rises relative to input prices. The effect of changes in scale economies is contingent on the output price. The model also yields an analysis of wage inequality within the firm, which is found to be independent of technological considerations and to depend only on the firm's wage schedule.
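A minimal sketch of the sector geometry just described, assuming breadth at a given height is the arc length and total workforce is the sector area; the aggregation rule and function names are our assumptions for illustration, not the paper's exact specification:

```python
# Circle-sector hierarchy: radius = height, angle theta = span of control.
# Assumption for illustration: breadth at height r is the arc length theta*r,
# and total labour is the sector area (theta/2) * height^2.
import math

def breadth(theta: float, r: float) -> float:
    """Breadth of the hierarchy at height r (arc length of the sector)."""
    return theta * r

def total_labour(theta: float, height: float) -> float:
    """Total workforce as the area of the sector: (theta/2) * height^2."""
    return 0.5 * theta * height**2

theta = math.pi / 6              # span of control (radians)
print(breadth(theta, 4.0))       # breadth four levels above the apex
print(total_labour(theta, 4.0))  # workforce of the whole hierarchy
```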
Abstract:
In real-time programming a timeout mechanism allows exceptional behaviour, such as a lack of response, to be handled effectively, while not overly affecting the programming of the normal case. For example, in a pump controller, if the water level has gone below the minimum level and the pump is on, and hence pumping in more water, then the water level should rise above the minimum level within a specified time. If it does not, there is a fault in the system and it should be shut down and an alarm raised. Such a situation can be handled by normal-case code that determines when the level has risen above the minimum, plus a timeout case handling the situation in which the specified time to reach the minimum has passed. In this paper we introduce a timeout mechanism, give it a formal definition in terms of more basic real-time commands, develop a refinement law for introducing a timeout clause to implement a specification, and give an example of using the law to introduce a timeout. The framework used is a machine-independent real-time programming language, which makes use of a deadline command to represent timing constraints in a machine-independent fashion. This allows a more abstract approach to handling timeouts.
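A hedged sketch of the pump-controller example in ordinary Python, rather than the paper's machine-independent real-time language with its deadline command; all names (read_level, MIN_LEVEL, TIMEOUT_S, shutdown_and_alarm) and the polling loop are illustrative assumptions:

```python
# Normal case plus timeout case for the pump controller (illustrative only).
import time

MIN_LEVEL = 1.0    # minimum safe water level (assumed units)
TIMEOUT_S = 30.0   # specified time for the level to recover

def read_level() -> float:
    return 0.0     # stub: would poll the water-level sensor

def shutdown_and_alarm() -> None:
    print("fault: shutting down pump and raising alarm")  # stub

def await_recovery() -> None:
    deadline = time.monotonic() + TIMEOUT_S
    # Normal case: the level rises above the minimum within the deadline.
    while time.monotonic() < deadline:
        if read_level() > MIN_LEVEL:
            return                 # level recovered in time
        time.sleep(0.1)            # poll interval
    # Timeout case: the specified time has passed without recovery.
    shutdown_and_alarm()
```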
Abstract:
We propose a method for the timing analysis of concurrent real-time programs with hard deadlines. We divide the analysis into a machine-independent and a machine-dependent task. The latter takes into account the execution times of the program on a particular machine. Therefore, our goal is to make the machine-dependent phase of the analysis as simple as possible. We succeed in the sense that the machine-dependent phase remains the same as in the analysis of sequential programs. We shift the complexity introduced by concurrency completely to the machine-independent phase.
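A minimal sketch of that split, under the assumption that the machine-independent phase produces worst-case command counts per program path and the machine-dependent phase merely substitutes per-command execution times; all command names and times are made up:

```python
# Two-phase timing analysis: symbolic counts first, machine times second.
from collections import Counter

# Machine-independent result: worst-case command counts along each path.
paths = {
    "normal":  Counter({"read_sensor": 1, "compute": 3, "write_actuator": 1}),
    "timeout": Counter({"read_sensor": 1, "raise_alarm": 1}),
}

# Machine-dependent input: per-command execution times on one target (us).
times_us = {"read_sensor": 40, "compute": 25, "write_actuator": 30,
            "raise_alarm": 15}

def path_wcet(counts: Counter, times: dict) -> int:
    """Substitute machine times into the symbolic cost expression."""
    return sum(n * times[cmd] for cmd, n in counts.items())

worst = max(path_wcet(p, times_us) for p in paths.values())
print(f"WCET over all paths: {worst} us")  # compare against the hard deadline
```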
Abstract:
Proceedings of the 11th Australasian Remote Sensing and Photogrammetry Conference
Abstract:
Neural networks can be regarded as statistical models and can be analysed in a Bayesian framework. Generalisation is measured by performance on independent test data drawn from the same distribution as the training data. Such performance can be quantified by the posterior average of the information divergence between the true and the model distributions. Averaging over the Bayesian posterior guarantees internal coherence; using information divergence guarantees invariance with respect to representation. The theory generalises the least mean squares theory for linear Gaussian models to general problems of statistical estimation. The main results are: (1) the ideal optimal estimate is always given by the average over the posterior; (2) the optimal estimate within a computational model is given by the projection of the ideal estimate onto the model. This incidentally shows that some currently popular methods dealing with hyperpriors are in general unnecessary and misleading. The extension of information divergence to positive normalisable measures reveals a remarkable relation between the δ-dual affine geometry of statistical manifolds and the geometry of the dual pair of Banach spaces L_δ and L_δ*. It therefore offers a conceptual simplification to information geometry. The general conclusion on the issue of evaluating neural network learning rules and other statistical inference methods is that such evaluations are only meaningful under three assumptions: the prior P(p), describing the environment of all the problems; the divergence D_δ, specifying the requirement of the task; and the model Q, specifying the available computing resources.
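In symbols, a paraphrase of the two main results, writing D_δ for the information divergence and Q for the computational model (the notation below is our rendering, not a quotation):

```latex
% (1) ideal optimal estimate = posterior average;
% (2) optimal estimate within the model Q = divergence projection of (1).
\[
  \hat{p}_{\mathrm{ideal}}
    = \bigl\langle p \bigr\rangle_{P(p \mid \mathrm{data})},
  \qquad
  \hat{q}^{\ast}
    = \operatorname*{arg\,min}_{q \in Q}
      D_{\delta}\bigl(\hat{p}_{\mathrm{ideal}} \,\|\, q\bigr).
\]
```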
Abstract:
Purpose: The aim of this study was to compare a developmental optical coherence tomography (OCT)-based contact lens inspection instrument to a widely used geometric inspection instrument (Optimec JCF), to establish the capability of a market-focused OCT system. Methods: Measurements of 27 soft spherical contact lenses were made using the Optimec JCF and a new OCT-based instrument, the Optimec is830. Twelve of the lenses analysed were specially commissioned from a traditional hydrogel (Contamac GM Advance 49%) and twelve from a silicone hydrogel (Contamac Definitive 65), each set with a range of back optic zone radius (BOZR) and centre thickness (CT) values. Three commercial lenses were also measured: CooperVision MyDay (Stenfilcon A) in −10D, −3D and +6D powers. Two measurements of BOZR, CT and total diameter were made for each lens in temperature-controlled saline on both instruments. Results: The results showed that the is830 and JCF measurements were comparable, but that the is830 had a better repeatability coefficient for BOZR (0.065 mm compared to 0.151 mm) and CT (0.008 mm compared to 0.027 mm). Both instruments had similar results for total diameter (0.041 mm compared to 0.044 mm). Conclusions: The OCT-based instrument assessed in this study matches or improves on the JCF instrument for the measurement of total diameter, back optic zone radius and centre thickness for soft contact lenses in temperature-controlled saline.
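For reference, a common way to compute a repeatability coefficient from paired repeat measurements is RC = 1.96·√2·s_w, with s_w the within-lens standard deviation; the sketch below uses made-up BOZR values, not the study's data:

```python
# Repeatability coefficient from duplicate measurements (illustrative data).
import math
import statistics

def repeatability_coefficient(run1: list[float], run2: list[float]) -> float:
    diffs = [a - b for a, b in zip(run1, run2)]
    # Within-subject variance from paired duplicates: sum(d^2) / (2n).
    s_w = math.sqrt(statistics.fmean(d * d for d in diffs) / 2)
    return 1.96 * math.sqrt(2) * s_w

bozr_run1 = [8.40, 8.62, 8.55, 8.71]  # mm, first measurement per lens
bozr_run2 = [8.43, 8.60, 8.57, 8.69]  # mm, repeat measurement per lens
print(f"RC = {repeatability_coefficient(bozr_run1, bozr_run2):.3f} mm")
```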
Abstract:
In a series of studies, I investigated the developmental changes in children’s inductive reasoning strategy, the methodological manipulations affecting the trajectory, and the driving mechanisms behind the development of category induction. I systematically controlled the nature of the stimuli used, and employed a triad paradigm in which perceptual cues were directly pitted against category membership, to explore under which circumstances children used perceptual or category induction. My induction tasks were designed for children aged 3–9 years, using biologically plausible novel items. In Study 1, I tested 264 children. Using a wide age range allowed me to systematically investigate the developmental trajectory of induction. I also created two degrees of perceptual distractor – high and low – and explored whether the degree of perceptual similarity between target and test items altered children’s strategy preference. A further 52 children were tested in Study 2, to examine whether children showing a perceptual bias were in fact basing their choice on maturation categories. A gradual transition was observed from perceptual to category induction. However, this transition could not be due to an inability to inhibit high perceptual distractors, as children of all ages were equally distracted. Children were also not basing their strategy choices on maturation categories. In Study 3, I investigated the effects of category structure (featural vs. relational category rules) and domain (natural vs. artefact) on inductive preference. I tested 403 children. Each child was assigned to either the featural or the relational condition, and completed both a natural kind and an artefact task. A further 98 children were tested in Study 4, on the effect of using stimulus labels during the tasks. I observed the same gradual transition from perceptual to category induction preference in Studies 3 and 4. This pattern was stable across domains, but children developed a category bias one year later for relational categories, arguably due to the greater demands on executive function (EF) posed by these stimuli. Children who received labels during the task made significantly more category choices than those who did not receive labels, possibly due to priming effects. Having investigated influences affecting the developmental trajectory, I continued by exploring the driving mechanism behind the development of category induction. In Study 5, I tested 60 children on a battery of EF tasks as well as my induction task. None of the EF tasks was able to predict inductive variance; therefore, EF development is unlikely to be the driving factor behind the transition. Finally, in Study 6, I divided 252 children into either a comparison group or an intervention group. The intervention group took part in an interactive educational session at Twycross Zoo about animal adaptations. Both groups took part in four induction tasks, two before and two a week after the zoo visits. There was a significant increase in the number of category choices made in the intervention condition after the zoo visit, a result not observed in the comparison condition. This highlights the role of knowledge in supporting the transition from perceptual to category induction. I suggest that EF development may support induction development, but the driving mechanism behind the transition is an accumulation of knowledge and an appreciation of the importance of category membership.
Abstract:
Traditional approaches to calculating total factor productivity change through Malmquist indexes rely on distance functions. In this paper we show that the use of distance functions as a means to calculate total factor productivity change may introduce bias into the analysis, and we therefore propose a procedure that calculates total factor productivity change through observed values only. Our total factor productivity change is then decomposed into efficiency change, technological change, and a residual effect. This decomposition makes use of a non-oriented measure in order to avoid problems associated with the traditional use of radial oriented measures, especially when variable returns to scale technologies are to be compared.
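For context, the standard distance-function form of the Malmquist index that the paper departs from, with its usual decomposition into efficiency change (EC) and technological change (TC); D^t denotes the distance function relative to the period-t technology:

```latex
% Standard distance-function Malmquist index and its usual decomposition:
\[
  M^{t,t+1}
  = \left[
      \frac{D^{t}(x^{t+1}, y^{t+1})}{D^{t}(x^{t}, y^{t})}
      \cdot
      \frac{D^{t+1}(x^{t+1}, y^{t+1})}{D^{t+1}(x^{t}, y^{t})}
    \right]^{1/2}
  = \underbrace{\frac{D^{t+1}(x^{t+1}, y^{t+1})}{D^{t}(x^{t}, y^{t})}}_{\mathrm{EC}}
    \cdot
    \underbrace{\left[
      \frac{D^{t}(x^{t+1}, y^{t+1})}{D^{t+1}(x^{t+1}, y^{t+1})}
      \cdot
      \frac{D^{t}(x^{t}, y^{t})}{D^{t+1}(x^{t}, y^{t})}
    \right]^{1/2}}_{\mathrm{TC}}
\]
```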