952 results for Practical reasoning
Abstract:
Template matching is concerned with measuring the similarity between the patterns of two objects. This paper proposes a memory-based reasoning approach for pattern recognition of binary images with a large template set. Memory-based reasoning intrinsically requires a large database, and some binary image recognition problems inherently need large template sets; the recognition of Chinese characters, for example, needs thousands of templates. The proposed algorithm is based on the Connection Machine, the most massively parallel machine to date, and uses a multiresolution method to search for the matching template. The approach uses the pyramid data structure for the multiresolution representation of the templates and the input image pattern. For a given binary image, it scans the template pyramid searching for the match. A binary image of N × N pixels can be matched by our algorithm in O(log N) time, independent of the number of templates. Implementation of the proposed scheme is described in detail.
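To make the coarse-to-fine idea concrete, here is a minimal serial sketch of pyramid-based template matching in Python. The 2×2 max-pooling, Hamming-distance scoring, and candidate-pruning schedule are illustrative assumptions, not the paper's exact algorithm, which exploits the Connection Machine's parallelism to score all templates at once.

```python
import numpy as np

def build_pyramid(img, levels):
    """Multiresolution pyramid: each level halves the resolution by 2x2 max-pooling."""
    pyramid = [img.astype(np.uint8)]
    for _ in range(levels - 1):
        prev = pyramid[-1]
        h, w = prev.shape
        pooled = prev[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))
        pyramid.append(pooled)
    return pyramid  # pyramid[0] is full resolution, pyramid[-1] is coarsest

def match_template(image, templates, levels=4, keep=8):
    """Coarse-to-fine search: score all templates at the coarsest level,
    then rescore only the surviving candidates at progressively finer levels.
    Assumes each template has the same shape as the input image."""
    img_pyr = build_pyramid(image, levels)
    tpl_pyrs = [build_pyramid(t, levels) for t in templates]
    candidates = list(range(len(templates)))
    for level in range(levels - 1, -1, -1):          # coarsest -> finest
        scores = [(np.count_nonzero(img_pyr[level] != tpl_pyrs[i][level]), i)
                  for i in candidates]               # Hamming distance
        scores.sort()
        candidates = [i for _, i in scores[:max(1, keep >> (levels - 1 - level))]]
    return candidates[0]                             # index of best-matching template
```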
Abstract:
We present an interactive map-based technique for designing single-input-single-output compliant mechanisms that meet the requirements of practical applications. Our map juxtaposes user-specifications with the attributes of real compliant mechanisms stored in a database, so that not only can the practical feasibility of the specifications be discerned quickly, but modifications can also be made interactively to the existing compliant mechanisms. The practical utility of the method presented here exceeds that of shape and size optimizations because it accounts for manufacturing considerations, stress limits, and material selection. The premise for the method is the spring-leverage (SL) model, which characterizes the kinematic and elastostatic behavior of compliant mechanisms with only three SL constants. The user-specifications are met interactively using the beam-based 2D models of compliant mechanisms by changing their attributes, such as: (i) overall size in two planar orthogonal directions, separately and together, (ii) uniform resizing of the in-plane widths of all the beam elements, (iii) uniform resizing of the out-of-plane thicknesses of the beam elements, and (iv) the material. We present a design software program with a graphical user interface for interactive design. A case-study that describes the design procedure in detail is also presented, while additional case-studies are posted on a website. DOI: 10.1115/1.4001877.
Abstract:
This study in EU law analyses the reasoning of the Court of Justice (the Court of Justice of the European Union) in a set of its preliminary rulings. Preliminary rulings are answers to national courts' questions on the interpretation (and validity) of EU law, called preliminary references. These questions concern specific legal issues that have arisen in legal disputes before the national courts. The Court of Justice alone has the ultimate authority to interpret EU law. The preliminary rulings bind the national courts in the cases giving rise to the preliminary reference, and the interpretations of EU law offered in the preliminary rulings are considered generally binding on all instances applying EU law. EU law is often described as a dynamic legal order, and the Court of Justice as at the vanguard of developing it. It is generally assumed that the Court of Justice is striving to realise the EU's meta-level purpose (telos): integration. Against this backdrop one can understand the criticism the Court of Justice often faces in certain fields of EU law that can be described as developing. This criticism concerns the Court's (negatively) activist way of not just stating the law but developing or even making law. It is difficult to analyse this accusation, or prove it wrong, as it is not clearly established in methodological terms what constitutes judicial activism, or more exactly where the threshold of negative activism lies. Moreover, one popular approach to assessing the role of the Court of Justice, described as integration through law, has become fairly political, neglecting the special nature of law as both facilitating and constraining action, not merely a medium for furthering integration. This study offers a legal reasoning approach of a more legalist nature, in order to balance the existing mix of approaches to explaining what the Court of Justice does and how. Reliance on legal reasoning is found to offer a working framework for analysis, whereas the tools for an analysis based on activism are found lacking. The legal reasoning approach enables one to assess whether or not the Court of Justice is adhering to its own established criteria of interpretation of EU law; if it is not, one should look in more detail at how the interpretation fits with earlier case-law and doctrines of EU law. This study examines the reasoning of the Court of Justice in a set of objectively chosen cases. The emphasis of the study is on analysing how the Court of Justice applies the established criteria of interpretation it has assumed for itself. Moreover, the judgments are assessed not only in terms of reasoning but also for the meaningful silences they contain. The analysis is further contextualised by taking into consideration how the cases were commented on by legal scholars, their substantive EU law context, and their larger politico-historical context. The analysis largely shows that the Court of Justice interprets EU law in accordance with its previous practice. Its reasoning retains a connection with the linguistic or semiotic criteria of interpretation, while the emphasis lies on systemic reasoning. Moreover, although there are a few judgments in which the Court of Justice offers clearly dynamic reasoning, or what can be considered substantive reasoning stemming from, for example, common sense or reasonableness, such reasons are most often given in addition to systemic ones.
In this sense and even when considered in its broader context, the case-law analysed in this study does not portray a specifically activist image of the Court of Justice. The legal reasoning approach is a valid alternative for explaining how and why the Court of Justice interprets EU law as it does.
Abstract:
In this article we introduce and evaluate testing procedures for specifying the number k of nearest neighbours in the weights matrix of spatial econometric models. The spatial J-test is used for specification search. Two testing procedures are suggested: an increasing neighbours testing procedure and a decreasing neighbours testing procedure. Simulations show that the increasing neighbours testing procedure can be used in large samples to determine k. The decreasing neighbours testing procedure is found to have low power and is not recommended for use in practice. An empirical example involving house price data shows how to use the testing procedures with real data.
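As a rough illustration of the setup, the sketch below builds a row-standardized k-nearest-neighbour weights matrix and wires it into an increasing-neighbours search skeleton. The stopping rule and the `j_test` callable (which should return a p-value from a spatial J-test of the k-model against the k+1 alternative) are placeholders standing in for the paper's procedure, not a reproduction of it.

```python
import numpy as np

def knn_weights(coords, k):
    """Row-standardized k-nearest-neighbour spatial weights matrix W."""
    n = len(coords)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)                 # a point is not its own neighbour
    W = np.zeros((n, n))
    for i in range(n):
        W[i, np.argsort(d[i])[:k]] = 1.0 / k    # equal weight to the k nearest
    return W

def increasing_k_search(coords, j_test, k_max=20, alpha=0.05):
    """Increasing-neighbours procedure (sketch): start from k=1 and keep the
    first k whose null model is not rejected against the k+1 alternative.
    `j_test(W_null, W_alt)` is a hypothetical placeholder returning a p-value."""
    for k in range(1, k_max):
        p = j_test(knn_weights(coords, k), knn_weights(coords, k + 1))
        if p > alpha:       # null (current k) not rejected: stop here
            return k
    return k_max
```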
Abstract:
Hardware constraints, which motivate receive antenna selection (AS), also require that the various antenna elements at the receiver be sounded sequentially to obtain the estimates required for selecting the 'best' antenna and for coherently demodulating data thereafter. Consequently, the channel state information at different antennas is outdated by different amounts and corrupted by noise. We show that, for this reason, simply selecting the antenna with the highest estimated channel gain is not optimum. Rather, a preferable strategy is to linearly weight the channel estimates of different antennas differently, depending on the training scheme. We derive closed-form expressions for the symbol error probability (SEP) of AS for MPSK and MQAM in time-varying Rayleigh fading channels for arbitrary selection weights, and validate them with simulations. We then explicitly characterize the optimal selection weights that minimize the SEP. We also consider packet reception, in which multiple symbols of a packet are received by the same antenna. New suboptimal but computationally efficient weighted selection schemes are proposed for reducing the packet error rate. The benefits of weighted selection are also demonstrated using a practical channel code used in third-generation cellular systems. Our results show that optimal weighted selection yields a significant performance gain over conventional unweighted selection.
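The core idea of weighted selection is small enough to sketch. In the toy Python example below, the weights that discount older, noisier estimates are illustrative placeholders; the paper derives the SEP-optimal weights in closed form.

```python
import numpy as np

rng = np.random.default_rng(0)

def select_antenna(h_est, weights=None):
    """Pick the receive antenna with the largest (weighted) estimated gain.
    Unweighted selection uses |h| directly; weighted selection scales each
    estimate to reflect how outdated/noisy it is (older -> smaller weight)."""
    w = np.ones(len(h_est)) if weights is None else np.asarray(weights)
    return int(np.argmax(w * np.abs(h_est)))

# Toy example: 4 antennas sounded sequentially, so estimate 0 is the most
# outdated. The weights below are illustrative, not the paper's optimal ones.
h_est = rng.normal(size=4) + 1j * rng.normal(size=4)   # noisy channel estimates
weights = np.array([0.7, 0.8, 0.9, 1.0])               # fresher estimate, larger weight
print(select_antenna(h_est), select_antenna(h_est, weights))
```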
Abstract:
This paper is concerned with modifications of the Extended Bellmouth Weir (EBM weir) earlier designed by Keshava Murthy. It is shown that by providing inclined sides (equivalent to providing an inward-trapezoidal weir) over a sector of a circle of radius R, separated by a distance 2t and of depth d, the measurable range of the EBM weir can be considerably enhanced (by over 375%). Simultaneously, the other parameters of the weir are optimized so that the reference plane of the weir coincides with its crest, making it a constant-accuracy linear weir. Discharge through the weir is proportional to the depth of flow measured above the crest for all heads in the range 0.5R ≤ h ≤ 7.9R, within a maximum deviation of ±1% from the theoretical discharge. Experiments with two typical weirs show excellent agreement with the theory, giving a constant average coefficient of discharge of 0.619.
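Since the weir is linear (discharge proportional to head), a hedged sketch of the flow computation can be written as follows; the proportionality constant `m` is a hypothetical stand-in for the value fixed by the paper's geometry (R, t, d), which the paper derives.

```python
def discharge(h, R, m, cd=0.619):
    """Linear (proportional) weir: discharge grows linearly with the head h
    above the crest, Q = cd * m * h, valid for 0.5R <= h <= 7.9R per the
    abstract. `m` is an assumed geometry-dependent constant, not a value
    taken from the paper."""
    if not (0.5 * R <= h <= 7.9 * R):
        raise ValueError("head outside the weir's linear range")
    return cd * m * h
```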
Abstract:
In this paper, we give a method for probabilistic assignment in the Realistic Abductive Reasoning Model. The knowledge is assumed to be represented in the form of causal chaining, namely a hyper-bipartite network. The hyper-bipartite network is the most generalized form of knowledge representation, and so far there has been no way of assigning probability to its explanations. First, the inference mechanism using the realistic abductive reasoning model is briefly described, and then probability is assigned to each of the explanations so as to rank the explanations in decreasing order of plausibility.
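A minimal sketch of the final ranking step might look like the Python fragment below, assuming independent hypothesis priors and set-cover style explanations; the paper's hyper-bipartite causal chaining is considerably richer, so the names and scoring here are illustrative only.

```python
from itertools import combinations

def rank_explanations(hypotheses, prior, explains, observations, max_size=3):
    """Abductive ranking (sketch): score each candidate explanation (a set of
    hypotheses) that covers all observations, assuming independent hypotheses
    so P(E) is the product of the members' priors; return explanations in
    decreasing order of plausibility."""
    ranked = []
    for r in range(1, max_size + 1):
        for combo in combinations(hypotheses, r):
            covered = set().union(*(explains[h] for h in combo))
            if observations <= covered:                      # covers everything
                p = 1.0
                for h in combo:
                    p *= prior[h]
                ranked.append((p, combo))
    return sorted(ranked, reverse=True)

# Toy causal knowledge: three disorders, three findings (all hypothetical).
explains = {"flu": {"fever", "cough"}, "cold": {"cough"}, "strep": {"fever", "sore_throat"}}
prior = {"flu": 0.05, "cold": 0.2, "strep": 0.02}
print(rank_explanations(list(explains), prior, explains, {"fever", "cough"}))
```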
Abstract:
This paper presents the development of a neural-network-based power system stabilizer (PSS) designed to enhance the damping characteristics of a practical power system network representing a part of the Electricity Generating Authority of Thailand (EGAT) system. The proposed PSS consists of a neuro-identifier and a neuro-controller, both developed on the functional link network (FLN) model. A recursive on-line training algorithm is used to train the two neural networks. Simulation results obtained under various operating conditions and severe disturbance cases show that the proposed neuro-PSS provides better damping of the local as well as interarea modes of oscillation than a conventional PSS.
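For readers unfamiliar with FLNs, the sketch below shows the general shape of such a model: a fixed functional expansion of the inputs followed by a single linear layer trained sample-by-sample. The trigonometric expansion and the LMS-style update are common FLN choices, assumed here rather than taken from the paper.

```python
import numpy as np

def fln_features(x):
    """Functional expansion: augment the raw inputs with trigonometric terms
    so a single linear layer can capture nonlinearities (a common FLN choice)."""
    x = np.atleast_1d(x)
    return np.concatenate([[1.0], x, np.sin(np.pi * x), np.cos(np.pi * x)])

class FLN:
    """Single-layer functional link network with recursive on-line training."""
    def __init__(self, n_inputs, lr=0.05):
        self.w = np.zeros(1 + 3 * n_inputs)   # one weight per expanded feature
        self.lr = lr

    def predict(self, x):
        return self.w @ fln_features(x)

    def train_step(self, x, target):
        phi = fln_features(x)
        err = target - self.w @ phi
        self.w += self.lr * err * phi         # LMS update, one sample at a time
        return err
```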
Abstract:
We have developed an efficient fully three-dimensional (3D) reconstruction algorithm for diffuse optical tomography (DOT). The 3D DOT, a severely ill-posed problem, is tackled through a pseudodynamic (PD) approach wherein an ordinary differential equation representing the evolution of the solution in pseudotime is integrated, bypassing an explicit inversion of the associated ill-conditioned system matrix. One of the most computationally expensive parts of the iterative DOT algorithm, the re-evaluation of the Jacobian in each iteration, is avoided by using the adjoint-Broyden update formula to provide low-rank updates to the Jacobian. In addition, wherever feasible, we have made the algorithm more efficient by integrating along the quadratic path provided by the perturbation equation containing the Hessian. These algorithms are then validated by reconstruction using simulated and experimental data, verifying the PD results against those from the popular Gauss-Newton scheme. The major findings of this work are as follows: (i) the PD reconstructions are comparatively artifact-free, providing superior absorption coefficient maps in terms of quantitative accuracy and contrast recovery; (ii) the scaling of computation time with the dimension of the measurement set is much less steep with the Jacobian update formula in place than without it; and (iii) an increase in the data dimension, even though it renders the reconstruction problem less ill-conditioned and thus provides relatively artifact-free reconstructions, does not necessarily provide better contrast recovery. For the latter, one should also take care to distribute the measurement points uniformly, avoiding regions close to the source, so that the relative strength of the derivatives for measurements away from the source does not become insignificant. (c) 2012 Optical Society of America
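The rank-one Jacobian update at the heart of the cost saving is easy to state. Below is the classic "good" Broyden secant update in Python for illustration; the paper uses an adjoint-Broyden variant of this idea rather than this exact form.

```python
import numpy as np

def broyden_update(J, dx, df):
    """Rank-one Broyden update: correct J so that J_new @ dx == df (the secant
    condition), avoiding a full Jacobian re-evaluation at each iteration."""
    dx = np.asarray(dx, float).ravel()   # change in parameters
    df = np.asarray(df, float).ravel()   # change in forward-model output
    return J + np.outer(df - J @ dx, dx) / (dx @ dx)
```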
Abstract:
The rapid emergence of infectious diseases calls for immediate attention to determine practical solutions for intervention strategies. To this end, it becomes necessary to obtain a holistic view of the complex host–pathogen interactome. Advances in omics and related technology have resulted in the massive generation of data for the interacting systems at unprecedented levels of detail. Systems-level studies with the aid of mathematical tools contribute to a deeper understanding of biological systems, where intuitive reasoning alone does not suffice. In this review, we discuss different aspects of host–pathogen interactions (HPIs) and the available data resources and tools used to study them. We discuss in detail models of HPIs at various levels of abstraction, along with their applications and limitations. We also present a few case studies, which incorporate different modeling approaches, providing significant insights into disease. (c) 2013 Wiley Periodicals, Inc.
Abstract:
The goal of optimization in vehicle design is often blurred by the myriad requirements belonging to attributes that may not be closely related. If solutions are sought by separately optimizing attribute-performance-related objectives, starting from a common baseline design configuration as in a traditional design environment, it becomes an arduous task to integrate the potentially conflicting solutions into one satisfactory design. It may thus be more desirable to carry out a combined multi-disciplinary design optimization (MDO) with vehicle weight as the objective function and cross-functional attribute performance targets as constraints. For the particular case of vehicle body structure design, the initial design is likely to be arrived at by taking into account styling, packaging, and market-driven requirements. The problem with performing a combined cross-functional optimization is the time required to run CAE algorithms that can provide a single optimal solution for heterogeneous areas such as NVH and crash safety. In the present paper, a practical MDO methodology is suggested that can be applied to weight optimization of automotive body structures by specifying constraints on frequency and crash performance. Because of the reduced number of cases to be analyzed for crash safety in comparison with other MDO approaches, the present methodology can generate a single size-optimized solution without recourse to empirical techniques such as response-surface-based prediction of crash performance and the associated successive response surface updating for convergence. An example of weight optimization of the spaceframe-based body-in-white (BIW) of an aluminum-intensive vehicle is given to illustrate the steps involved in the optimization process.
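As an illustration of the problem structure (not the paper's method), the sketch below poses size optimization with SciPy: minimize weight over member gauges subject to a frequency floor and a crash-intrusion ceiling. The response functions are hypothetical stand-ins for the modal and crash CAE solvers the methodology couples to the optimizer.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical responses over member gauges t (mm): mass grows with gauge,
# a stiffer body raises the first mode and reduces crash intrusion.
def weight(t):            return float(np.sum(t))
def first_mode_hz(t):     return 20.0 + 5.0 * np.min(t)
def crash_intrusion(t):   return 100.0 / (1.0 + np.sum(t))

res = minimize(
    weight,
    x0=np.full(5, 2.0),                                  # initial gauges (mm)
    bounds=[(1.0, 4.0)] * 5,                             # manufacturable range
    constraints=[
        {"type": "ineq", "fun": lambda t: first_mode_hz(t) - 25.0},   # mode >= 25 Hz
        {"type": "ineq", "fun": lambda t: 60.0 - crash_intrusion(t)}, # intrusion <= 60 mm
    ],
)
print(res.x, weight(res.x))   # size-optimized gauges and resulting weight
```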
Abstract:
FreeRTOS is an open-source real-time microkernel that has a wide community of users. We present the formal specification of the behaviour of the task part of FreeRTOS that deals with the creation, management, and scheduling of tasks using priority-based preemption. Our model is written in the Z notation, and we verify its consistency using the Z/Eves theorem prover. This includes a precise statement of the preconditions for all API commands. This task model forms the basis for three dimensions of further work: (a) the modelling of the rest of the behaviour of queues, time, mutexes, and interrupts in FreeRTOS; (b) refinement of the models to code to produce a verified implementation; and (c) extension of the behaviour of FreeRTOS to multi-core architectures. We propose all three dimensions as benchmark challenge problems for Hoare's Verified Software Initiative.
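To give a flavour of the behaviour the Z model pins down, here is a tiny executable model of priority-based preemptive task creation and dispatch in Python; it is a sketch of the scheduling discipline only, not FreeRTOS's actual API or implementation.

```python
import heapq

class TaskModel:
    """FreeRTOS-style priority-preemptive scheduling in miniature: the ready
    task with the highest priority always runs, and creating a higher-priority
    task preempts the currently running one."""
    def __init__(self):
        self.ready = []          # max-heap via negated priority
        self.running = None      # (priority, name)

    def create(self, name, priority):
        heapq.heappush(self.ready, (-priority, name))
        self._schedule()

    def _schedule(self):
        # Dispatch precondition: some task is ready and it outranks the runner.
        if self.ready and (self.running is None or -self.ready[0][0] > self.running[0]):
            if self.running is not None:     # preempted task goes back to ready
                heapq.heappush(self.ready, (-self.running[0], self.running[1]))
            p, name = heapq.heappop(self.ready)
            self.running = (-p, name)

sched = TaskModel()
sched.create("idle", 0); sched.create("sensor", 2); sched.create("control", 3)
print(sched.running)   # (3, 'control'): the highest-priority task preempts
```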