845 results for Computer aided analysis, Machine vision, Video surveillance
Abstract:
The constraint paradigm is a model of computation in which values are deduced whenever possible, under the limitation that deductions be local in a certain sense. One may visualize a constraint 'program' as a network of devices connected by wires. Data values may flow along the wires, and computation is performed by the devices. A device computes using only locally available information (with a few exceptions), and places newly derived values on other, locally attached wires. In this way computed values are propagated. An advantage of the constraint paradigm (not unique to it) is that a single relationship can be used in more than one direction. The connections to a device are not labelled as inputs and outputs; a device will compute with whatever values are available, and produce as many new values as it can. General theorem provers are capable of such behavior, but tend to suffer from combinatorial explosion; it is not usually useful to derive all the possible consequences of a set of hypotheses. The constraint paradigm places a certain kind of limitation on the deduction process. The limitations imposed by the constraint paradigm are not the only ones possible. It is argued, however, that they are restrictive enough to forestall combinatorial explosion in many interesting computational situations, yet permissive enough to allow useful computations in practical situations. Moreover, the paradigm is intuitive: it is easy to visualize the computational effects of these particular limitations, and the paradigm is a natural way of expressing programs for certain applications, in particular relationships arising in computer-aided design. A number of implementations of constraint-based programming languages are presented. A progression of ever more powerful languages is described; complete implementations are presented, and design difficulties and alternatives are discussed. The goal approached, though not quite reached, is a complete programming system which will implicitly support the constraint paradigm to the same extent that LISP, say, supports automatic storage management.
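The wires-and-devices picture lends itself to a short sketch. The following is a minimal illustrative constraint network in Python (the Wire/Adder names and structure are ours, not the thesis's actual system); note how the single a + b = c relationship runs in any direction:

```python
# A minimal sketch of the wires-and-devices picture described above
# (illustrative only, not the thesis's actual system).

class Wire:
    """Holds at most one value; notifies attached devices when set."""
    def __init__(self, name):
        self.name, self.value, self.devices = name, None, []

    def set(self, value):
        if self.value is None:
            self.value = value
            for d in self.devices:   # propagate locally
                d.wake()
        elif self.value != value:
            raise ValueError(f"contradiction on {self.name}")

class Adder:
    """Constraint a + b = c, usable in any direction."""
    def __init__(self, a, b, c):
        self.a, self.b, self.c = a, b, c
        for w in (a, b, c):
            w.devices.append(self)

    def wake(self):
        a, b, c = self.a.value, self.b.value, self.c.value
        if a is not None and b is not None:
            self.c.set(a + b)
        elif a is not None and c is not None:
            self.b.set(c - a)
        elif b is not None and c is not None:
            self.a.set(c - b)

# The same constraint runs "backwards": setting c and a deduces b.
x, y, z = Wire("x"), Wire("y"), Wire("z")
Adder(x, y, z)
z.set(10); x.set(3)
print(y.value)  # 7
```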
Abstract:
P-glycoprotein (P-gp), an ATP-binding cassette (ABC) transporter, functions as a biological barrier by extruding cytotoxic agents out of cells, creating an obstacle to chemotherapeutic treatment of cancer. In order to aid the development of potential P-gp inhibitors, we constructed a quantitative structure-activity relationship (QSAR) model of flavonoids as P-gp inhibitors based on a Bayesian-regularized neural network (BRNN). A dataset of 57 flavonoids that bind to the C-terminal nucleotide-binding domain of mouse P-gp was compiled from the literature. The predictive ability of the model was assessed using a test set independent of the training set, which showed a standard error of prediction of 0.146 +/- 0.006 (data scaled from 0 to 1). Meanwhile, two other mathematical tools, the back-propagation neural network (BPNN) and partial least squares (PLS), were also used to build QSAR models. The BRNN provided slightly better results for the test set than the BPNN, but the difference was not significant according to the F-statistic at p = 0.05. PLS failed to build a reliable model in the present study. Our study indicates that the BRNN-based in silico model has good potential to facilitate the prediction of P-gp flavonoid inhibitors and might be applied in further drug design.
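As a rough illustration of the QSAR workflow, here is a hedged Python sketch: scikit-learn has no Bayesian-regularized network, so an L2-penalized MLP stands in for the BRNN, and the descriptor matrix and activities are random placeholders, not the paper's data:

```python
# Hedged sketch of the QSAR workflow described above. An L2-penalized MLP
# stands in for the BRNN; X (flavonoid descriptors) and y (activities,
# scaled to [0, 1]) are random placeholders, not the paper's dataset.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.random((57, 8))   # 57 flavonoids x 8 hypothetical descriptors
y = rng.random(57)        # activity, scaled 0-1 as in the paper

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(4,), alpha=1.0,  # alpha = L2 penalty,
                     max_iter=5000, random_state=0)       # a crude analogue of
model.fit(X_tr, y_tr)                                     # Bayesian regularization

sep = np.sqrt(mean_squared_error(y_te, model.predict(X_te)))
print(f"standard error of prediction on the independent test set: {sep:.3f}")
```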
Abstract:
Liu, Yonghuai. Improving ICP with Easy Implementation for Free Form Surface Matching. Pattern Recognition, vol. 37, no. 2, pp. 211-226, 2004.
Abstract:
Liu, Yonghuai. Automatic 3D Free Form Shape Matching Using the Graduated Assignment Algorithm. Pattern Recognition, vol. 38, no. 10, pp. 1615-1631, 2005.
Abstract:
The article deals with the use of case studies for the professional preparation of future teachers. One of the suitable ways to develop professional teaching competences is to apply the method of a case study. A case study means a complex and creative solution to a given teaching situation in simulated teaching conditions. It is based on interactive and situational education and decision making. A case study improves not only the professional and teaching competences of future teachers but also develops students' self-evaluation and self-reflection skills. To increase professional competences, it is essential to carry out a complex analysis of the video record of the implemented case study. Such a complex analysis is the subject of a student grant agency research project at the University of West Bohemia in Pilsen.
Abstract:
Partial occlusions are commonplace in a variety of real-world computer vision applications: surveillance, intelligent environments, assistive robotics, autonomous navigation, etc. While occlusion-handling methods have been proposed, most tend to break down when confronted with numerous occluders in a scene. In this paper, a layered image-plane representation for tracking people through substantial occlusions is proposed. An image-plane representation of motion around an object is associated with a pre-computed graphical model, which can be instantiated efficiently during online tracking. A global state and observation space is obtained by linking transitions between layers. A Reversible Jump Markov Chain Monte Carlo approach is used to infer the number of people and track them online. The method outperforms two state-of-the-art methods for tracking over extended occlusions on videos of a parking lot with numerous vehicles and a laboratory with many desks and workstations.
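The birth/death flavor of the Reversible Jump MCMC inference can be sketched in a toy setting. The following Python example infers how many "people" produced a 1-D signal of Gaussian bumps; the observation model and all constants are illustrative stand-ins for the paper's layered image-plane likelihood:

```python
# Toy reversible-jump MCMC: birth/death moves change how many people are
# present, update moves perturb positions. Everything here is illustrative.
import numpy as np

rng = np.random.default_rng(1)
L, LAM, SIGMA, NOISE = 10.0, 2.0, 0.5, 0.1   # scene size, Poisson prior, bump width, noise
grid = np.linspace(0.0, L, 200)
truth = np.array([3.0, 7.0])                 # two "people" at these positions
signal = lambda xs: sum(np.exp(-0.5 * ((grid - x) / SIGMA) ** 2) for x in xs)
obs = signal(truth) + rng.normal(0.0, NOISE, grid.size)

def loglik(xs):
    r = obs - signal(xs)
    return -0.5 * np.sum(r * r) / NOISE ** 2

xs = [rng.uniform(0, L)]                     # initial state: one person
ll = loglik(xs)
for _ in range(20000):
    u = rng.random()
    if u < 1 / 3:                            # birth: propose a new person
        prop = xs + [rng.uniform(0, L)]
        a = np.exp(loglik(prop) - ll) * LAM / len(prop)
    elif u < 2 / 3 and len(xs) > 0:          # death: remove one uniformly
        prop = list(xs); prop.pop(rng.integers(len(prop)))
        a = np.exp(loglik(prop) - ll) * len(xs) / LAM
    elif xs:                                 # random-walk position update
        prop = list(xs)
        i = rng.integers(len(prop))
        prop[i] = np.clip(prop[i] + rng.normal(0, 0.3), 0, L)
        a = np.exp(loglik(prop) - ll)
    else:
        continue
    if rng.random() < a:
        xs, ll = prop, loglik(prop)

print("estimated count:", len(xs), "positions:", sorted(round(x, 1) for x in xs))
```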
Abstract:
Traditionally, attacks on cryptographic algorithms looked for mathematical weaknesses in the underlying structure of a cipher. Side-channel attacks, however, look to extract secret key information based on the leakage from the device on which the cipher is implemented, be it a smart card, microprocessor, dedicated hardware or personal computer. Attacks based on power consumption, electromagnetic emanations and execution time have all been practically demonstrated on a range of devices to reveal partial secret-key information from which the full key can be reconstructed. The focus of this thesis is power analysis, more specifically a class of attacks known as profiling attacks. These attacks assume a potential attacker has access to, or can control, a device identical to the one under attack, which allows him to profile the power consumption of operations or data flow during encryption. This assumes a stronger adversary than traditional non-profiling attacks such as differential or correlation power analysis; however, the ability to model a device allows templates to be used post-profiling to extract key information from many different target devices using the power consumption of very few encryptions. This allows an adversary to overcome protocols intended to prevent secret key recovery by restricting the number of available traces. In this thesis a detailed investigation of template attacks is conducted, examining how the selection of various attack parameters affects the efficiency of secret key recovery in practice, as well as the underlying assumption of profiling attacks: that the power consumption of one device can be used to extract secret keys from another. Trace-only attacks, where the corresponding plaintext or ciphertext data is unavailable, are then investigated against both symmetric and asymmetric algorithms with the goal of key recovery from a single trace. This allows an adversary to bypass many of the currently proposed countermeasures, particularly in the asymmetric domain. Machine-learning methods for side-channel analysis are also investigated as an alternative to template or stochastic methods, with support vector machines, logistic regression and neural networks examined from a side-channel viewpoint. Both binary and multi-class classification attack scenarios are examined in order to explore the relative strengths of each algorithm. Finally, these machine-learning-based alternatives are empirically compared with template attacks, and their respective merits are examined with regard to attack efficiency.
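The profiling-attack idea lends itself to a compact sketch: build per-class Gaussian templates on a device you control, then classify traces from the target by likelihood. The synthetic Hamming-weight leakage model and all sizes below are illustrative assumptions, not the thesis's experimental setup:

```python
# Simplified template attack on synthetic traces: profile per-class Gaussian
# templates, then score attack traces with the multivariate normal likelihood.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(2)
N_POINTS, N_PROF, N_ATT = 5, 200, 20
CLASSES = range(9)   # Hamming weight of an 8-bit intermediate value

def trace(weight):
    """Toy power trace: leakage proportional to the Hamming weight at every
    sample point, plus Gaussian noise."""
    return weight * np.linspace(0.8, 1.2, N_POINTS) + rng.normal(0, 0.5, N_POINTS)

# --- profiling phase, on the attacker-controlled device ---
templates, pooled = {}, np.zeros((N_POINTS, N_POINTS))
for c in CLASSES:
    T = np.array([trace(c) for _ in range(N_PROF)])
    templates[c] = T.mean(axis=0)        # per-class mean vector
    pooled += np.cov(T, rowvar=False)    # accumulate for pooled covariance
pooled /= len(CLASSES)

# --- attack phase: traces from the target device, true class unknown ---
true_class = 5
attack = [trace(true_class) for _ in range(N_ATT)]
scores = {c: sum(multivariate_normal.logpdf(t, templates[c], pooled)
                 for t in attack)
          for c in CLASSES}
print("recovered class:", max(scores, key=scores.get))   # expect 5
```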
Abstract:
We introduce a new concept for stimulated-Brillouin-scattering-based slow light in optical fibers that is applicable to broadly tunable frequency-swept sources. It allows slow light to be achieved, in principle, over the entire transparency window of the optical fiber. We demonstrate a slow-light delay of 10 ns at 1.55 μm using a 10-m-long photonic crystal fiber with a source sweep rate of 400 MHz/μs and a pump power of 200 mW. We also show that there exists a maximal delay obtainable by this method, which is set by the SBS threshold, independent of sweep rate. For our fiber with optimum length, this maximum delay is ~38 ns, obtained for a pump power of 760 mW.
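For background, the delay ceiling noted in this abstract is usually understood through the standard narrowband SBS slow-light relations; the following is a sketch of those textbook formulas, not necessarily the exact expressions used in this work:

```latex
% Standard narrowband SBS slow-light relations (background only):
\[
  \Delta T_d \simeq \frac{G}{\Gamma_B},
  \qquad
  G = \frac{g_0 \, P_{\mathrm{pump}} \, L_{\mathrm{eff}}}{A_{\mathrm{eff}}},
\]
% where $G$ is the exponential gain parameter, $\Gamma_B$ the Brillouin
% linewidth, $g_0$ the line-centre gain coefficient, and $L_{\mathrm{eff}}$,
% $A_{\mathrm{eff}}$ the fibre's effective length and area. The ceiling on
% the delay follows because $G$ saturates at the SBS threshold
% ($G \approx 21$ for typical fibres).
```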
Abstract:
We demonstrate a 5-GHz-broadband tunable slow-light device based on stimulated Brillouin scattering in a standard highly-nonlinear optical fiber pumped by a noise-current-modulated laser beam. The noise-modulation waveform uses an optimized pseudo-random distribution of the laser drive voltage to obtain an optimal flat-topped gain profile, which minimizes pulse distortion and maximizes pulse delay for a given pump power. In comparison with a previous slow-modulation method, eye-diagram and signal-to-noise ratio (SNR) analysis shows that this broadband slow-light technique significantly increases the fidelity of a delayed data sequence while maintaining the delay performance. A fractional delay of 0.81 with an SNR of 5.2 is achieved at a pump power of 350 mW using a 2-km-long highly nonlinear fiber with the fast noise-modulation method, a 50% increase in eye opening and a 36% increase in SNR over the slow-modulation method.
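The flat-topped gain profile mentioned above can be illustrated numerically: in broadband SBS, the effective gain spectrum is, to good approximation, the intrinsic Lorentzian Brillouin gain convolved with the pump power spectrum. A minimal sketch with illustrative numbers (an idealized flat 5-GHz pump; the paper's actual pseudo-random drive waveform is not reproduced):

```python
# Broadened SBS gain = intrinsic Lorentzian gain convolved with the pump
# power spectrum. All numbers here are illustrative.
import numpy as np

f = np.arange(-5000, 5000, 2.0)               # frequency grid, MHz
dnu_b = 30.0                                  # intrinsic Brillouin linewidth (MHz)
lorentzian = (dnu_b / 2) ** 2 / (f ** 2 + (dnu_b / 2) ** 2)

pump = np.where(np.abs(f) <= 2500, 1.0, 0.0)  # idealized flat 5-GHz pump spectrum
pump /= pump.sum()                            # normalize pump power

gain = np.convolve(pump, lorentzian, mode="same")
# The result is flat over ~5 GHz with soft edges, which is what minimizes
# distortion of a delayed data pulse for a given pump power.
band = np.abs(f) < 2000
print(f"gain ripple over the passband: {gain[band].std() / gain[band].mean():.4f}")
```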
Abstract:
Diffuse reflectance spectroscopy with a fiber optic probe is a powerful tool for quantitative tissue characterization and disease diagnosis. Significant systematic errors can arise in the measured reflectance spectra, and thus in the derived tissue physiological and morphological parameters, due to real-time instrument fluctuations. We demonstrate a novel fiber optic probe with real-time self-calibration capability that can be used for UV-visible diffuse reflectance spectroscopy in biological tissue in clinical settings. The probe is tested in a number of synthetic liquid phantoms over a wide range of tissue optical properties, with significant variations in source intensity caused by instrument warm-up and day-to-day drift. While the accuracy for extraction of absorber concentrations is comparable to that achieved with traditional calibration (with a reflectance standard), the accuracy for extraction of reduced scattering coefficients is significantly improved with the self-calibration probe. This technology could be used to achieve instrument-independent diffuse reflectance spectroscopy in vivo and obviate the need for instrument warm-up and pre-/post-measurement calibration, thus saving up to an hour of precious clinical time.
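A minimal sketch of the self-calibration idea described above, with entirely synthetic spectra: a calibration channel records the source alongside each tissue measurement, so source drift cancels in the ratio. Names and numbers are illustrative assumptions, not the paper's design:

```python
# Self-calibration by ratioing: both channels see the same (drifting) lamp,
# so dividing them recovers a drift-free tissue spectrum. Synthetic data only.
import numpy as np

wl = np.linspace(350, 600, 251)   # wavelength grid, nm
lamp = lambda t: (1.0 + 0.2 * np.sin(0.01 * t)) * np.exp(-((wl - 480) / 120) ** 2)
tissue_reflectance = 0.3 + 0.2 * np.exp(-((wl - 420) / 20) ** 2)  # toy spectrum

for t in (0.0, 200.0):                           # two instrument states (drift)
    raw_tissue = lamp(t) * tissue_reflectance    # measurement channel
    raw_calib = lamp(t) * 0.9                    # calibration channel (fixed standard)
    corrected = raw_tissue / raw_calib           # lamp drift cancels in the ratio
    print(f"t={t:5.1f}s  drift removed: "
          f"{np.allclose(corrected, tissue_reflectance / 0.9)}")
```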
Abstract:
User-supplied knowledge and interaction is a vital component of a toolkit for producing high-quality parallel implementations of scalar FORTRAN numerical code. In this paper we consider the components such a parallelisation toolkit should possess to provide an effective environment to identify, extract and embed relevant user knowledge. We also examine to what extent these facilities are available in leading parallelisation tools; in particular, we discuss how these issues have been addressed in the development of the user interface of the Computer Aided Parallelisation Tools (CAPTools). The CAPTools environment has been designed to enable user exploration, interaction and insertion of user knowledge to facilitate the automatic generation of very efficient parallel code. A key issue in the user's interaction is control of the volume of information so that the user is focused on only what is needed. User control over the level and extent of information revealed at any phase is supplied through a wide variety of filters. Another issue is the way in which information is communicated. Dependence analysis and its resulting graphs involve many sophisticated, rather abstract concepts unlikely to be familiar to most users of parallelising tools. As such, considerable effort has been made to communicate with the user in terms that they will understand. These features, amongst others, and their use in the parallelisation process are described and their effectiveness discussed.
Abstract:
Computer-based analysis of evacuation can be performed using one of three different approaches, namely optimisation, simulation or risk assessment. Furthermore, within each approach, different means of representing the enclosure, the population, and the behaviour of the population are possible. The myriad of available approaches has led to the development of some 22 different evacuation models. This article attempts to describe each of the modelling approaches adopted and critically review the inherent capabilities of each. The review is based on available published literature.
Abstract:
The Computer Aided Parallelisation Tools (CAPTools) [Ierotheou C, Johnson SP, Cross M, Leggett PF. Computer aided parallelisation tools (CAPTools): conceptual overview and performance on the parallelisation of structured mesh codes. Parallel Computing 1996;22:163-195] is a set of interactive tools aimed at providing automatic parallelisation of serial FORTRAN Computational Mechanics (CM) programs. CAPTools analyses the user's serial code and then, through stages of array partitioning, mask and communication calculation, generates parallel SPMD (Single Program Multiple Data) message-passing FORTRAN. The parallel code generated by CAPTools contains calls to a collection of routines that form the CAPTools Communications Library (CAPLib). The library provides a portable layer and user-friendly abstraction over the underlying parallel environment. CAPLib contains optimised message-passing routines for data exchange between parallel processes and other utility routines for parallel execution control, initialisation and debugging. By compiling and linking with different implementations of the library, the user is able to run on many different parallel environments. Even with today's parallel systems, the concept of a single version of a parallel application code is more of an aspiration than a reality. However, for CM codes the data-partitioning SPMD paradigm requires a relatively small set of message-passing communication calls. This set can be implemented as an intermediate 'thin layer' library of message-passing calls that enables the parallel code (especially that generated automatically by a parallelisation tool such as CAPTools) to be as generic as possible. CAPLib is just such a 'thin layer' message-passing library that supports parallel CM codes by mapping generic calls onto machine-specific libraries (such as CRAY SHMEM) and portable general-purpose libraries (such as PVM and MPI). This paper describes CAPLib together with its three perceived advantages over other routes: as a high-level abstraction, it is easy both to understand (especially when generated automatically by tools) and to implement by hand for the CM community (who are not generally parallel computing specialists); the one parallel version of the application code is truly generic and portable; and the parallel application can readily utilise whatever message-passing libraries on a given machine yield optimum performance.
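To illustrate the 'thin layer' idea, here is a hedged sketch in Python with mpi4py (CAPLib itself is a Fortran library, and the routine names below are invented, not CAPLib's actual API): generic calls hide the underlying message-passing system, so only this layer would change when moving between MPI, PVM or SHMEM.

```python
# Illustrative 'thin layer': application code calls generic routines; the
# layer maps them onto a concrete message-passing library (here, MPI).
from mpi4py import MPI
import numpy as np

_comm = MPI.COMM_WORLD

def cap_init():
    """Generic startup: returns (process id, number of processes)."""
    return _comm.Get_rank(), _comm.Get_size()

def cap_exchange(send, left, right):
    """Generic halo swap for a 1-D partitioned array: send edge values to the
    right neighbour, receive the left neighbour's edge. Maps onto MPI here,
    but could equally map onto PVM or CRAY SHMEM behind the same interface."""
    recv = np.empty_like(send)
    _comm.Sendrecv(send, dest=right, recvbuf=recv, source=left)
    return recv

if __name__ == "__main__":
    rank, size = cap_init()
    edge = np.full(4, float(rank))        # this process's boundary data
    halo = cap_exchange(edge, (rank - 1) % size, (rank + 1) % size)
    print(f"process {rank}/{size} received halo from process {int(halo[0])}")
```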
Abstract:
This paper considers the problem of sequencing n jobs in a three-machine shop with the objective of minimising the maximum completion time. The shop consists of three machines, M1, M2 and M3. A job is first processed on M1 and is then assigned either the route (M2, M3) or the route (M3, M2). Thus, for our model the processing route is given by a partial order of machines, as opposed to the linear order of machines for a job shop, or to an arbitrary sequence of machines for an open shop. The main result is an O(n log n) time heuristic, which generates a schedule with a makespan that is at most 5/3 times the optimum value.
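A small sketch of the shop model the abstract defines: each job visits M1 first and then its assigned route. The function below evaluates the makespan of one candidate permutation schedule; it is not the paper's 5/3-approximation heuristic, only the evaluation step such a heuristic needs. All job data are made up.

```python
# Makespan evaluation for the hybrid three-machine shop described above:
# every job goes to M1 first, then follows route (M2, M3) or (M3, M2).
def makespan(jobs, sequence):
    """jobs[j] = (p1, p2, p3, route) with route ("M2","M3") or ("M3","M2");
    all machines process jobs in the order given by `sequence`."""
    free = {"M1": 0.0, "M2": 0.0, "M3": 0.0}   # when each machine next frees up
    finish = 0.0
    for j in sequence:
        p1, p2, p3, route = jobs[j]
        p = {"M2": p2, "M3": p3}
        t = free["M1"] + p1                    # finish time on M1
        free["M1"] = t
        for m in route:                        # then the job's assigned route
            t = max(free[m], t) + p[m]         # wait for machine and for the job
            free[m] = t
        finish = max(finish, t)
    return finish

jobs = [(2, 3, 1, ("M2", "M3")), (1, 2, 4, ("M3", "M2")), (3, 1, 1, ("M2", "M3"))]
print(makespan(jobs, [0, 1, 2]))   # makespan of this permutation: 14
```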
Abstract:
We consider a single-machine due date assignment and scheduling problem of minimizing holding costs with no tardy jobs under series-parallel and a somewhat wider class of precedence constraints, exploiting the properties of series-parallel graphs.