932 results for A priori
Abstract:
Metacognitive illusion, or metacognitive bias, is a concept closely related to the accuracy of metacognitive monitoring. In this dissertation, metacognitive illusion refers mainly to the absolute difference between judgments of learning (JOLs) and recall that arises when individuals are misled by invalid cues or information. A JOL is a metacognitive judgment that predicts future performance on learned material; its mechanism and accuracy are the key issues in JOL research. The cue-utilization framework proposed by Koriat (1997) summarized earlier findings and provided a significant advance in understanding how people make JOLs, but it cannot explain individual differences in JOL accuracy. From the perspective of people's cognitive limitations, this study uses posterior associative word pairs, which readily produce metacognitive bias, to explore the deeper psychological mechanism of that bias. We also investigate the causes of the stronger metacognitive illusions shown by children with learning disabilities (LD), and on that basis search for methods of correcting such illusions. Finally, we integrate the findings of this study with the previous literature and propose a revised account of cue selection and utilization by children with LD, based on Koriat's cue-utilization model. The results indicated that: (1) Children showed stable metacognitive illusions for weak associative and posterior associative word pairs, but not for strong associative word pairs. Metacognitive illusions were larger for children with LD than for normal children, and there were significant grade differences. A priori associative strength exerted a weaker effect on JOL than it did on recall. (2) Children with LD relied mainly on retrieval fluency to make JOLs under both immediate and delayed conditions, whereas normal children distinguished between encoding fluency and retrieval fluency as potential cues across the immediate and delayed conditions; children with LD thus lacked flexibility in cue selection and utilization. (3) When the word pairs formed a new list, normal children in the analytic inferential group showed greater metacognitive transfer than those in the heuristic inferential group in the second block. Metacognitive relative accuracy increased for children both with and without LD across the experimental conditions, but improved significantly only for normal children in the analytic inferential group.
Abstract:
Automated assembly of mechanical devices is studied by researching methods of operating assembly equipment in a variable manner; that is, systems that can be configured to perform many different assembly operations are studied. The general parts-assembly operation involves removing alignment errors within some tolerance and without damaging the parts. Two methods for eliminating alignment errors are discussed: a priori suppression, and measurement and removal. Both methods are studied, with the more novel measurement-and-removal technique examined in greater detail. During the study of this technique, a fast and accurate six degree-of-freedom position sensor based on a light-stripe vision technique was developed. Specifications for the sensor were derived from an assembly-system error analysis. Methods for extracting accurate information from the sensor, including optimal reduction of redundant information, filtering of quantization noise, and careful calibration procedures, were studied. Prototype assembly systems for both error-elimination techniques were implemented and used to assemble several products. The assembly system based on the a priori suppression technique uses a number of mechanical assembly tools and software systems that extend the capabilities of industrial robots; the need for these tools was determined through an assembly task analysis of several consumer and automotive products. The assembly system based on the measurement-and-removal technique used the six degree-of-freedom position sensor to measure part misalignments; robot commands for aligning the parts were automatically calculated from the sensor data and executed.
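A minimal numerical sketch of the measurement-and-removal idea, assuming part poses are expressed as 4x4 homogeneous transforms in a common robot frame (the function name and example values are illustrative, not taken from the thesis):

import numpy as np

def misalignment_to_correction(T_measured_part, T_desired_part):
    # Corrective transform the robot should apply so that
    # T_correction @ T_measured_part == T_desired_part.
    return T_desired_part @ np.linalg.inv(T_measured_part)

# Hypothetical measured misalignment: 2 mm offset in x and a 1 degree rotation about z.
theta = np.deg2rad(1.0)
T_measured = np.array([[np.cos(theta), -np.sin(theta), 0.0, 0.002],
                       [np.sin(theta),  np.cos(theta), 0.0, 0.0],
                       [0.0,            0.0,           1.0, 0.0],
                       [0.0,            0.0,           0.0, 1.0]])
T_desired = np.eye(4)
print(misalignment_to_correction(T_measured, T_desired))

In the thesis's system the measured pose would come from the six degree-of-freedom light-stripe sensor and the correction would be converted into robot motion commands.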
Abstract:
The aim of this study was to conduct a systematic review to identify the randomized clinical studies that had investigated the following research question: Is the mandibular manipulation technique an effective and safe technique for the treatment of temporomandibular joint disk displacement without reduction? The systematic search was conducted in the electronic databases PubMed (Medical Publications), LILACS (Latin American and Caribbean Literature in Health Sciences), EMBASE (Excerpta Medica Database), PEDro (Physiotherapy Evidence Database), BBO (Brazilian Library of Odontology), CENTRAL (Cochrane Library), and SciELO (Scientific Electronic Library Online). Abstracts of presentations at physical therapy meetings were selected manually, and the articles of those that met the requirements were examined. No language restrictions were applied. Only randomized controlled clinical studies were included. Two studies of medium quality fulfilled all the inclusion criteria. There is not sufficient evidence to support the effectiveness of mandibular manipulation therapy, and therefore its use remains questionable. Being minimally invasive, this therapy is attractive as an initial approach, especially considering the cost of the alternative approaches. The analysis of the results suggests that additional high-quality randomized clinical trials on the topic are necessary, and that they should focus on methods for data randomization and allocation, on clearly defined outcomes, on an a priori calculated sample size, and on an adequate follow-up strategy.
Abstract:
C.H. Orgill, N.W. Hardy, M.H. Lee, and K.A.I. Sharpe. An application of a multiple agent system for flexible assembly tasks. In Knowledge based environments for industrial applications including cooperating expert systems in control. IEE, London, 1989.
Abstract:
Canals, A.; Breen, A. R.; Ofman, L.; Moran, P. J.; Fallows, R. A., Estimating random transverse velocities in the fast solar wind from EISCAT Interplanetary Scintillation measurements, Annales Geophysicae, vol. 20, Issue 9, pp. 1265-1277
Abstract:
Cook, Anthony; Gibbens, M.J. (2006) 'Constructing Visual Taxonomies by Shape', 18th International Conference on Pattern Recognition (ICPR'06), Volume 2, pp. 732-735
Abstract:
The purpose of this article is to present John Yench's a priori language as a continuation of Leibniz's idea. Before presenting the project of the Inter-Disciplinary International Reference Language, I discuss the development of Gottfried Wilhelm Leibniz's views on artificial languages. I try to show the evolution of Leibniz's universal language, from its ideal conception to a tool for formalizing the whole of human knowledge, and Leibniz's influence on later ideas for artificial languages. I then compare his projects with Yench's language, Idirl. An analysis of Idirl's main assumptions serves to show the degree to which the a priori language of John Yench continues Leibniz's ideas.
Abstract:
In this paper, a Lyapunov function candidate is introduced for multivariable systems with inner delays, without assuming a priori stability of the nondelayed subsystem. Using this Lyapunov function, a controller is derived. The controller uses an input-output description of the original system, which facilitates practical application of the proposed approach.
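The functional itself is not given in the abstract; as a generic illustration only, Lyapunov-Krasovskii candidates for a linear system with an internal state delay, $\dot{x}(t) = A_0 x(t) + A_1 x(t-\tau)$, are commonly built from terms of the form

V(x_t) = x(t)^{\top} P\, x(t) + \int_{t-\tau}^{t} x(s)^{\top} Q\, x(s)\, ds, \qquad P = P^{\top} \succ 0, \quad Q = Q^{\top} \succ 0,

with negativity of $\dot{V}$ along trajectories enforced through matrix inequalities in $P$ and $Q$. The particular construction of the paper, which avoids assuming that the nondelayed subsystem $\dot{x} = A_0 x$ is stable, is not reproduced here.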
Abstract:
The topic of this thesis is an acoustic scattering technique for determining the compressibility and density of individual particles. The particles, which have diameters on the order of 10 µm, are modeled as fluid spheres. Ultrasonic tone bursts of 2 µs duration and 30 MHz center frequency scatter from individual particles as they traverse the focal region of two confocally positioned transducers. One transducer acts as a receiver while the other both transmits and receives acoustic signals. The resulting scattered bursts are detected at 90° and at 180° (backscattered). Using either the long-wavelength (Rayleigh) or the weak-scatterer (Born) approximation, it is possible to determine the compressibility and density of the particle provided we possess a priori knowledge of the particle size and the host properties. The detected scattered signals are digitized and stored in computer memory. With this information we can compute the mean compressibility and density averaged over a population of particles (typically 1000 particles) or display histograms of scattered-amplitude statistics. An experiment was first run to assess the feasibility of using polystyrene polymer microspheres to calibrate the instrument. A second study was performed on the buffy coat harvested from whole human blood. Finally, Chinese hamster ovary cells subjected to hyperthermia treatment were studied to see whether the instrument could detect heat-induced membrane blebbing.
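The scattering relation is not reproduced in the abstract; as an illustrative sketch of the long-wavelength (Rayleigh) limit for a small fluid sphere of radius $a$, compressibility $\kappa_1$, and density $\rho_1$ suspended in a host with $\kappa_0$ and $\rho_0$, the far-field scattered pressure at scattering angle $\theta$ is approximately

p_s(r,\theta) \approx p_0\, \frac{k^2 a^3}{3 r} \left[ \frac{\kappa_1 - \kappa_0}{\kappa_0} + \frac{3(\rho_1 - \rho_0)}{2\rho_1 + \rho_0} \cos\theta \right] e^{ikr}.

The 90° signal ($\cos\theta = 0$) thus depends only on the compressibility contrast, while the 180° backscattered signal contains both the monopole (compressibility) and dipole (density) terms, so the two measurements together determine $\kappa_1$ and $\rho_1$ once the particle size and host properties are known a priori.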
Abstract:
Acousto-optic (AO) sensing and imaging (AOI) is a dual-wave modality that combines ultrasound with diffusive light to measure and/or image the optical properties of optically diffusive media, including biological tissues such as breast and brain. The light passing through a focused ultrasound beam undergoes a phase modulation at the ultrasound frequency that is detected using an adaptive interferometer scheme employing a GaAs photorefractive crystal (PRC). The PRC-based AO system operating at 1064 nm is described, along with the underlying theory, validating experiments, characterization, and optimization of this sensing and imaging apparatus. The spatial resolution of AO sensing, which is determined by the spatial dimensions of the ultrasound beam or pulse, can be sub-millimeter for megahertz-frequency sound waves. A modified approach for quantifying the optical properties of diffuse media with AO sensing employs the ratio of AO signals generated at two different ultrasound focal pressures. The resulting “pressure contrast signal” (PCS), once calibrated for a particular set of pressure pulses, yields a direct measure of the spatially averaged optical transport attenuation coefficient within the interaction volume between light and sound. This is a significant improvement over current AO sensing methods since it produces a quantitative measure of the optical properties of optically diffuse media without a priori knowledge of the background illumination. It can also be used to generate images based on spatial variations in both optical scattering and absorption. Finally, the AO sensing system is modified to monitor the irreversible optical changes associated with tissue heating from high-intensity focused ultrasound (HIFU) therapy, providing a powerful method for noninvasively sensing the onset and growth of thermal lesions in soft tissues. A single HIFU transducer is used to simultaneously generate tissue damage and pump the AO interaction. Experimental results in excised chicken breast demonstrate that AO sensing can identify the onset and growth of lesion formation in real time and, when used as feedback to guide exposure parameters, results in more predictable lesion formation.
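As a rough sketch of the ratio-based inversion described above (the exact calibration procedure is not given in the abstract; the function names and calibration values below are hypothetical placeholders):

import numpy as np

def pressure_contrast_signal(ao_signal_p1, ao_signal_p2):
    # Ratio of AO signal amplitudes acquired at two different focal pressures.
    return ao_signal_p1 / ao_signal_p2

# Hypothetical calibration curve for one fixed pair of pressure pulses:
# PCS values measured on phantoms with known transport attenuation mu_eff (1/cm).
calib_pcs = np.array([1.10, 1.25, 1.45, 1.70, 2.00])
calib_mu_eff = np.array([0.5, 1.0, 1.5, 2.0, 2.5])

def estimate_mu_eff(pcs):
    # Interpolate the calibration curve to map a measured PCS to mu_eff.
    return np.interp(pcs, calib_pcs, calib_mu_eff)

pcs = pressure_contrast_signal(ao_signal_p1=0.82, ao_signal_p2=0.51)
print(estimate_mu_eff(pcs))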
Abstract:
As distributed information services like the World Wide Web become increasingly popular on the Internet, problems of scale are clearly evident. A promising technique that addresses many of these problems is service (or document) replication. However, when a service is replicated, clients then need the additional ability to find a "good" provider of that service. In this paper we report on techniques for finding good service providers without a priori knowledge of server location or network topology. We consider the use of two principal metrics for measuring distance in the Internet: hops and round-trip latency. We show that these two metrics yield very different results in practice. Surprisingly, we show data indicating that the number of hops between two hosts in the Internet is not strongly correlated with round-trip latency. Thus, the distance in hops between two hosts is not necessarily a good predictor of the expected latency of a document transfer. Instead of using known or measured distances in hops, we show that the extra cost at runtime incurred by dynamic latency measurement is well justified by the resulting improvement in performance. In addition, we show that selection based on dynamic latency measurement performs much better in practice than any static selection scheme. Finally, the difference between the distribution of hops and latencies is fundamental enough to suggest differences in algorithms for server replication. We show that conclusions drawn about service replication based on the distribution of hops need to be revised when the distribution of latencies is considered instead.
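A minimal sketch of dynamic latency-based replica selection in the spirit described above (the probe method and mirror names are illustrative assumptions, not the paper's measurement machinery):

import socket
import time

def probe_latency(host, port=80, timeout=2.0):
    # One TCP connection round trip as a crude latency probe; None on failure.
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return None

def pick_replica(replicas):
    # Choose the replica with the smallest measured round-trip latency.
    timed = [(probe_latency(h), h) for h in replicas]
    timed = [(t, h) for t, h in timed if t is not None]
    return min(timed)[1] if timed else None

# Hypothetical mirrors of a replicated document service.
print(pick_replica(["mirror1.example.org", "mirror2.example.org", "mirror3.example.org"]))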
Abstract:
We propose and evaluate an admission control paradigm for RTDBS, in which a transaction is submitted to the system as a pair of processes: a primary task and a recovery block. The execution requirements of the primary task are not known a priori, whereas those of the recovery block are known a priori. Upon the submission of a transaction, an Admission Control Mechanism is employed to decide whether to admit or reject that transaction. Once admitted, a transaction is guaranteed to finish executing before its deadline. A transaction is considered to have finished executing if exactly one of two things occurs: either its primary task is completed (successful commitment), or its recovery block is completed (safe termination). Committed transactions bring a profit to the system, whereas terminated transactions bring none. The goal of the admission control and scheduling protocols (e.g., concurrency control, I/O scheduling, memory management) employed in the system is to maximize system profit. We describe a number of admission control strategies and contrast (through simulations) their relative performance.
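A toy sketch of such an admission test, assuming a single server that reserves the a priori known recovery-block time of every admitted transaction (class and field names are hypothetical; the paper's actual strategies are more refined):

from dataclasses import dataclass

@dataclass
class Transaction:
    deadline: float       # absolute deadline
    recovery_wcet: float  # recovery-block cost, known a priori

class AdmissionController:
    def __init__(self):
        self.reserved = 0.0   # recovery-block time already guaranteed

    def admit(self, txn: Transaction, now: float) -> bool:
        # Conservative test: the new recovery block, plus everything already
        # reserved, must fit before the transaction's deadline.
        if self.reserved + txn.recovery_wcet <= txn.deadline - now:
            self.reserved += txn.recovery_wcet
            return True
        return False          # reject up front rather than miss the deadline later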
Abstract:
We propose and evaluate admission control mechanisms for ACCORD, an Admission Control and Capacity Overload management Real-time Database framework (an architecture and a transaction model) for hard-deadline RTDB systems. The system architecture consists of admission control and scheduling components which provide early notification of failure to submitted transactions that are deemed not valuable or incapable of completing on time. In particular, our Concurrency Admission Control Manager (CACM) ensures that admitted transactions do not overburden the system by requiring a level of concurrency that is not sustainable. The transaction model consists of two components: a primary task and a compensating task. The execution requirements of the primary task are not known a priori, whereas those of the compensating task are known a priori. Upon the submission of a transaction, the Admission Control Mechanisms are employed to decide whether to admit or reject that transaction. Once admitted, a transaction is guaranteed to finish executing before its deadline. A transaction is considered to have finished executing if exactly one of two things occurs: either its primary task is completed (successful commitment), or its compensating task is completed (safe termination). Committed transactions bring a profit to the system, whereas terminated transactions bring none. The goal of the admission control and scheduling protocols (e.g., concurrency control, I/O scheduling, memory management) employed in the system is to maximize system profit. In that respect, we describe a number of concurrency admission control strategies and contrast (through simulations) their relative performance.
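A correspondingly simple sketch of a concurrency admission check in the spirit of the CACM (the sustainable-concurrency bound and names are assumptions for illustration only):

class ConcurrencyAdmissionControl:
    def __init__(self, max_sustainable_concurrency: int):
        # Concurrency level beyond which conflicts and restarts are assumed to
        # outweigh the useful work added by admitting another transaction.
        self.limit = max_sustainable_concurrency
        self.active = 0

    def try_admit(self) -> bool:
        if self.active < self.limit:
            self.active += 1
            return True
        return False      # early notification of failure to the submitter

    def finished(self):
        self.active = max(0, self.active - 1)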
Abstract:
Object detection can be challenging when the object class exhibits large variations. One commonly used strategy is to first partition the space of possible object variations and then train separate classifiers for each portion. However, with continuous spaces the partitions tend to be arbitrary, since there are no natural boundaries (consider, for example, the continuous range of human body poses). In this paper, a new formulation is proposed in which the detectors themselves are associated with continuous parameters and reside in a parameterized function space. This strategy has two advantages. First, a priori partitioning of the parameter space is not needed, since the detectors themselves live in a parameterized space. Second, the underlying parameters for object variations can be learned from training data in an unsupervised manner. In profile face detection experiments, at a fixed false alarm count of 90, our method attains a detection rate of 75% vs. 70% for the method of Viola-Jones. In hand shape detection, at a false positive rate of 0.1%, our method achieves a detection rate of 99.5% vs. 98% for partition-based methods. In pedestrian detection, our method reduces the missed detection rate by a factor of three at a false positive rate of 1%, compared with the method of Dalal-Triggs.
Abstract:
We consider a mobile sensor network monitoring a spatio-temporal field. Given limited cache sizes at the sensor nodes, the goal is to develop a distributed cache management algorithm to efficiently answer queries with a known probability distribution over the spatial dimension. First, we propose a novel distributed information-theoretic approach in which the nodes locally update their caches based on full knowledge of the space-time distribution of the monitored phenomenon. At each time instant, local decisions are made at the mobile nodes concerning which samples to keep and whether or not a new sample should be acquired at the current location. These decisions are made so as to minimize an entropic utility function that captures the average amount of uncertainty in queries given the probability distribution of query locations. Second, we propose a different correlation-based technique, which only requires knowledge of the second-order statistics, thus relaxing the stringent constraint of having a priori knowledge of the query distribution while significantly reducing the computational overhead. It is shown that the proposed approaches considerably reduce the average field estimation error by maintaining efficient cache content. It is further shown that the correlation-based technique is robust to model mismatch in the case of imperfect knowledge of the underlying generative correlation structure.
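A small sketch of the correlation-based idea, assuming a squared-exponential spatial covariance and a greedy eviction rule that minimizes the query-weighted posterior variance (the model and names are illustrative assumptions, not the paper's algorithm):

import numpy as np

def cov(x1, x2, sigma2=1.0, ell=1.0):
    # Assumed second-order statistics: squared-exponential spatial covariance.
    d = np.linalg.norm(np.asarray(x1, float) - np.asarray(x2, float))
    return sigma2 * np.exp(-0.5 * (d / ell) ** 2)

def expected_query_error(samples, query_pts, query_probs, noise=1e-6):
    # Average posterior variance at the query locations, weighted by the
    # query-location distribution, given the cached sample locations.
    if not samples:
        return sum(p * cov(q, q) for q, p in zip(query_pts, query_probs))
    K = np.array([[cov(a, b) for b in samples] for a in samples])
    K += noise * np.eye(len(samples))
    K_inv = np.linalg.inv(K)
    err = 0.0
    for q, p in zip(query_pts, query_probs):
        k = np.array([cov(q, s) for s in samples])
        err += p * (cov(q, q) - k @ K_inv @ k)
    return err

def evict_one(samples, query_pts, query_probs):
    # Drop the cached sample whose removal least degrades expected accuracy.
    best = min(range(len(samples)),
               key=lambda i: expected_query_error(samples[:i] + samples[i + 1:],
                                                  query_pts, query_probs))
    return samples[:best] + samples[best + 1:]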