937 results for pixel-stack


Relevance: 10.00%

Abstract:

The photoacoustic investigations carried out on different photonic materials are presented in this thesis. The photonic materials selected for the investigation are tape-cast ceramics, multilayer dielectric coatings, organic dye doped PVA films, and PMMA matrices doped with dye mixtures. The studies are performed by measuring the photoacoustic signal generated by modulated cw laser irradiation of the samples. The gas-microphone scheme is employed for the detection of the photoacoustic signal. The different measurements reported here reveal the adaptability and utility of the PA technique for the characterization of photonic materials.

Ceramics find applications in the microelectronics industry. Tape-cast ceramics are the building blocks of many electronic components, and certain ceramic tapes are used as thermal barriers. The thermal parameters of these tapes will not be the same as those of thin films of the same materials; the parameters are influenced by the presence of foreign bodies in the matrix and by the sample preparation technique. Measurements are done on ceramic tapes of zirconia, a zirconia-alumina combination, barium titanate, barium tin titanate, silicon carbide, lead zirconate titanate (PZT) and lead magnesium niobate titanate (PMN-PT). Various configurations of the photoacoustic technique, viz. heat-reflection geometry and heat-transmission geometry, have been used for the evaluation of different thermal parameters of the samples. The heat-reflection geometry of the PA cell has been used for the evaluation of thermal effusivity, and the heat-transmission geometry for the evaluation of thermal diffusivity. From the thermal diffusivity and thermal effusivity values, the thermal conductivity is also calculated. The calculated values are nearly the same as those reported for the pure materials, which shows the feasibility of the photoacoustic technique for the thermal characterization of ceramic tapes.

Organic dyes find applications as holographic recording media and as active media for laser operation. Knowledge of the photochemical stability of the material is essential if it is to be used for any of these applications. Mixing one dye with another can change the properties of the resulting system; through careful mixing of the dyes in appropriate proportions and incorporating them in polymer matrices, media of the required stability can be prepared. Investigations are carried out on Rhodamine 6G-Rhodamine B mixture doped PMMA samples. Addition of RhB in small amounts is found to stabilize Rh6G against photodegradation, and addition of Rh6G into RhB increases the photosensitivity of the latter. The PA technique has been successfully employed for the monitoring of the dye mixture doped PMMA samples. The same technique has also been used for monitoring the photodegradation of a laser dye, cresyl violet, doped in polyvinyl alcohol.

Another important application of the photoacoustic technique is the non-destructive evaluation of layered samples. The depth-profiling capability of the PA technique has been used for the non-destructive testing of multilayer dielectric films, which are highly reflecting in the wavelength range selected for the investigations. Even though calculation of the thickness of the film is not possible, the number of layers present in the system can be found using the PA technique. The phase plot has clear step-like discontinuities, the number of which coincides with the number of layers present in the multilayer stack. This shows the sensitivity of the PA signal phase to boundaries in a layered structure. This aspect of the PA signal can be utilized in non-destructive depth profiling of reflecting samples and for the identification of defects in layered structures.
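The thermal conductivity mentioned above follows from the two measured quantities via the standard definitions: effusivity e = sqrt(k*rho*c) and diffusivity alpha = k/(rho*c) combine to k = e*sqrt(alpha). A minimal sketch of that combination step, using illustrative order-of-magnitude inputs rather than the thesis's data:

```python
import math

def thermal_conductivity(effusivity, diffusivity):
    """Combine e = sqrt(k*rho*c) and alpha = k/(rho*c) into k = e*sqrt(alpha)."""
    return effusivity * math.sqrt(diffusivity)

# Illustrative order-of-magnitude inputs for a ceramic tape, not thesis values.
e = 2.0e3       # thermal effusivity, W*s^0.5/(m^2*K)
alpha = 8.0e-7  # thermal diffusivity, m^2/s
print(f"k = {thermal_conductivity(e, alpha):.2f} W/(m*K)")
```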

Relevance: 10.00%

Abstract:

Modern computer systems are plagued with stability and security problems: applications lose data, web servers are hacked, and systems crash under heavy load. Many of these problems or anomalies arise from rare program behavior caused by attacks or errors. A substantial percentage of web-based attacks are due to buffer overflows. Many methods have been devised to detect and prevent the anomalous situations that arise from buffer overflows. The current state of the art in anomaly detection systems is relatively primitive and depends mainly on static code checking to take care of buffer overflow attacks. For protection, stack guards and heap guards are also used in a wide variety of forms. This dissertation proposes an anomaly detection system based on the frequencies of system calls in the system call trace. System call traces represented as frequency sequences are profiled using sequence sets. A sequence set is identified by the starting sequence and the frequencies of specific system calls. The deviation of the current input sequence from the corresponding normal profile in the frequency pattern of system calls is computed and expressed as an anomaly score. A simple Bayesian model is used for accurate detection. Experimental results are reported which show that the frequency of system calls, represented using sequence sets, captures the normal behavior of programs under normal conditions of usage. This captured behavior allows the system to detect anomalies with a low rate of false positives. Data are presented which show that a Bayesian network on the frequency variations responds effectively to induced buffer overflows. It can also help administrators to detect deviations in program flow introduced by errors.
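As a rough illustration of the frequency-based profiling described above, the sketch below builds an average frequency profile from normal traces and scores a new trace by its total deviation. The deviation measure, and the omission of the sequence-set and Bayesian machinery, are simplifying assumptions, not the dissertation's method:

```python
from collections import Counter

def frequency_profile(traces):
    """Average relative frequency of each system call over normal traces."""
    profile = Counter()
    for trace in traces:
        counts = Counter(trace)
        for call, n in counts.items():
            profile[call] += n / len(trace) / len(traces)
    return profile

def anomaly_score(trace, profile):
    """Sum of absolute deviations between observed and profiled frequencies."""
    counts = Counter(trace)
    calls = set(profile) | set(counts)
    return sum(abs(counts.get(c, 0) / len(trace) - profile.get(c, 0.0))
               for c in calls)

normal = [["open", "read", "read", "close"], ["open", "read", "close", "close"]]
profile = frequency_profile(normal)
# An unexpected burst of execve calls raises the score.
print(anomaly_score(["open", "execve", "execve", "close"], profile))
```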

Relevance: 10.00%

Abstract:

The present work deals with the preparation and characterization of high-k aluminum oxide thin films grown by atomic layer deposition for gate dielectric applications. The ever-increasing demand for functionality and speed in semiconductor applications requires enhanced performance, which is achieved by the continuous miniaturization of CMOS dimensions. Because of this miniaturization, several parameters, such as the dielectric thickness, come within reach of their physical limits. As the required oxide thickness approaches the sub-1 nm range, SiO2 becomes unsuitable as a gate dielectric because its limited physical thickness results in excessive leakage current through the gate stack, affecting the long-term reliability of the device. This leakage issue is solved at the 45 nm technology node by the integration of high-k gate dielectrics, as their higher k-value allows a physically thicker layer while targeting the same capacitance and equivalent oxide thickness (EOT). Moreover, Intel announced that atomic layer deposition (ALD) would be applied to grow these materials on the Si substrate. ALD is based on the sequential use of self-limiting surface reactions of a metallic and an oxidizing precursor. This self-limiting feature allows control of material growth and properties at the atomic level, which makes ALD well suited for the deposition of highly uniform and conformal layers in CMOS devices, even if these have challenging 3D topologies with high aspect ratios. ALD has by now acquired the status of a state-of-the-art and highly preferred deposition technique for producing nanolayers of various materials of technological importance. The technique can be adapted to different situations where precision in thickness and perfection in structure are required, especially in the microelectronics arena.
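The EOT mentioned above is the SiO2 thickness that would give the same capacitance as the physically thicker high-k layer: EOT = t_high-k * (k_SiO2 / k_high-k). A small sketch, where the k-value for ALD Al2O3 is an assumed typical figure, not one measured in this work:

```python
K_SIO2 = 3.9  # relative permittivity of SiO2

def eot(thickness_nm, k_high):
    """Equivalent oxide thickness of a high-k layer of given physical thickness."""
    return thickness_nm * K_SIO2 / k_high

# ALD Al2O3 typically has k around 8-9 (assumed illustrative value).
print(f"3 nm Al2O3 -> EOT = {eot(3.0, 8.5):.2f} nm")
```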

Relevance: 10.00%

Abstract:

The present work deals with a study of morphological operators and their applications. Morphology is now a necessary tool for engineers involved with imaging applications. Morphological operations have been viewed as filters whose properties have been well studied (Heijmans, 1994). Another well-known class of non-linear filters is the class of rank-order filters (Pitas and Venetsanopoulos, 1990). Soft morphological filters are a combination of morphological and weighted rank-order filters (Koskinen et al., 1991; Kuosmanen and Astola, 1995). They were introduced to improve the behaviour of traditional morphological filters in noisy environments. The idea was to slightly relax the typical morphological definitions in such a way that a degree of robustness is achieved, while most of the desirable properties of typical morphological operations are maintained. Soft morphological filters are less sensitive to additive noise and to small variations in object shape than typical morphological filters; they can remove positive and negative impulse noise while at the same time preserving small details in images. Currently, mathematical morphology allows processing images to enhance fuzzy areas, segment objects, detect edges and analyse structures. The techniques developed for binary images are a major step forward in the application of this theory to gray-level images. One of these techniques is based on fuzzy logic and on the theory of fuzzy sets. Fuzzy sets have proved to be strongly advantageous when representing inaccuracies, not only regarding the spatial localization of objects in an image but also the membership of a certain pixel to a given class. Such inaccuracies are inherent to real images, either because of the presence of indefinite limits between the structures or objects to be segmented within the image, due to noisy acquisitions, or because they are inherent to the image formation methods.
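As an illustration of how soft morphological filters relax the standard definitions, the sketch below implements a soft erosion in which the minimum of ordinary erosion is replaced by a rank-order statistic that weights the structuring-element core by repetition. The 3x3 window, centre-only core and repetition weight k are illustrative assumptions following the cited definitions, not parameters from this work:

```python
import numpy as np

def soft_erosion(img, k=2):
    """Soft erosion with a 3x3 flat structuring element whose centre pixel is
    the core: take the k-th smallest value of the multiset in which the core
    sample is repeated k times and the 8 boundary samples appear once.
    With k=1 this reduces to standard erosion (the window minimum)."""
    padded = np.pad(img, 1, mode="edge")
    out = np.empty_like(img)
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            window = padded[i:i + 3, j:j + 3].ravel().tolist()
            centre = window.pop(4)          # core sample
            multiset = sorted(window + [centre] * k)
            out[i, j] = multiset[k - 1]     # k-th smallest
    return out

noisy = np.array([[10, 10, 10], [10, 200, 10], [10, 10, 10]], dtype=np.uint8)
print(soft_erosion(noisy, k=2))  # the positive impulse at the centre is removed
```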

Relevance: 10.00%

Abstract:

Pollutants, once they enter the earth's atmosphere, become part of the atmosphere, and hence their dispersion, dilution, direction of transport etc. are governed by the meteorological conditions. The thesis deals with a study of the atmospheric dispersion capacity, wind climatology, atmospheric stability and pollutant distribution by means of a model, together with suggestions for comprehensive planning for the industrially developing city of Cochin. The definition, sources, types and effects of air pollution are dealt with briefly. The influence of various meteorological parameters, such as the vector wind, temperature and its vertical structure, and atmospheric stability, on pollutant dispersal has been studied, and the importance of inversions, mixing heights and ventilation coefficients is brought out. The spatial variation of mixing heights, studied for the first time on a microscale region, serves to delineate the regions of good and poor dispersal capacity. A study of the wind direction fluctuation σθ and its relation to stability and mixing heights is shown to be very useful, and it is shown that the method of computing σθ needs to be re-examined. The development of the Gaussian plume model, along with its application to multiple sources, is presented. The pollutant chosen was sulphur dioxide, and industrial sources alone were considered. The percentage frequency of occurrence of inversions and isothermals is found to be low in all months of the year. The spatial variation of mixing heights revealed that a single mixing height cannot be taken as representative of the whole city, and the monsoon months showed the lowest mixing heights. The study of ventilation coefficients showed values less than the required optimum value of 6000 m²/s. However, the low values may be due to the consideration of the surface wind alone instead of the vertically averaged wind. Relatively more calm conditions and light winds during the night, and strong winds during the daytime, were observed. During most of the year, westerlies during the daytime and northeasterlies during the night are the dominant winds. Unstable conditions with high values of σθ during the daytime and stable conditions with lower values of σθ during the night are the prominent features; the monsoon months showed neutral stability most of the time. A study of σθ and the Pasquill stability categories revealed the difficulty in assigning a unique value of σθ to each stability category. For the first time, regression equations have been developed relating mixing heights and σθ. A closer examination of σθ revealed that half of the range of wind direction fluctuations should be taken, instead of one-sixth, to compute σθ. The spatial distribution of SO2 showed a more or less uniform distribution with a slight intrusion towards the south. The winter months showed low concentrations, contrary to expectations. The variation of the concentration is found to be influenced more by the mixing height and the stack height than by the wind speed. In the densely populated areas the concentration is more than the threshold limit value. However, the values reported appear to be high, because no depletion of the material through dry or wet deposition is assumed, and also because of the inclusion of calm conditions with a very light wind speed. A reduction of emissions during the night, with a consequent rise during the day, would bring down the levels of pollution.
The probable locations for new industries could be the extreme southeast parts, because the concentration towards the north falls off very quickly, resulting in low concentrations. In such a case the pollutant spread would be towards the south and west, keeping the city interior relatively free from pollution. A more detailed examination of the pollutant spread by means of models that take dry and wet deposition into account may be necessary. Nevertheless, the present model serves to give the trend of the distribution of pollutant concentration, with which one can suggest optimum locations for new industries.
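For reference, the ground-level concentration of the Gaussian plume model used above, including full ground reflection, is C(x, y, 0) = Q/(π·u·σy·σz) · exp(-y²/2σy²) · exp(-H²/2σz²), and multiple sources are handled by summing such terms. A sketch with generic Briggs-type dispersion coefficients for neutral conditions; these coefficients, and the example source parameters, are textbook assumptions, not the thesis's σθ-based calibration:

```python
import math

def sigma_y(x):  # crosswind dispersion, m (generic neutral-class power law)
    return 0.08 * x / math.sqrt(1 + 0.0001 * x)

def sigma_z(x):  # vertical dispersion, m (generic neutral-class power law)
    return 0.06 * x / math.sqrt(1 + 0.0015 * x)

def plume_glc(q, u, h, x, y):
    """Ground-level concentration (g/m^3) of a Gaussian plume with full
    ground reflection: C = Q/(pi*u*sy*sz) * exp(-y^2/2sy^2) * exp(-h^2/2sz^2)."""
    sy, sz = sigma_y(x), sigma_z(x)
    return (q / (math.pi * u * sy * sz)
            * math.exp(-y**2 / (2 * sy**2))
            * math.exp(-h**2 / (2 * sz**2)))

# Invented example: SO2 at 100 g/s, wind 3 m/s, 50 m stack, receptor 2 km downwind.
print(f"{plume_glc(100.0, 3.0, 50.0, 2000.0, 0.0):.2e} g/m^3")
```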

Relevance: 10.00%

Abstract:

This paper proposes a region-based image retrieval system using the local colour and texture features of image subregions. The regions of interest (ROIs) are roughly identified by segmenting the image into fixed partitions, finding the edge map and applying morphological dilation. The colour and texture features of the ROIs are computed from the histograms of the quantized HSV colour space and from the gray-level co-occurrence matrix (GLCM), respectively. Each ROI of the query image is compared with the same number of ROIs of the target image, arranged in descending order of white-pixel density in the regions, using the Euclidean distance measure for similarity computation. Preliminary experimental results show that the proposed method provides better retrieval results than some of the existing methods.
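A sketch of the feature pipeline described above, using scikit-image stand-ins; the quantization levels (8x4x4 HSV bins), the GLCM distance/angle settings and the property set are assumed example parameters, not the paper's:

```python
import numpy as np
from skimage.color import rgb2hsv, rgb2gray
from skimage.feature import graycomatrix, graycoprops

def roi_features(roi_rgb):
    """Concatenate a quantized HSV histogram with GLCM texture properties."""
    hsv = rgb2hsv(roi_rgb)
    hist, _ = np.histogramdd(hsv.reshape(-1, 3), bins=(8, 4, 4),
                             range=((0, 1), (0, 1), (0, 1)))
    hist = hist.ravel() / hist.sum()
    gray = (rgb2gray(roi_rgb) * 255).astype(np.uint8)
    glcm = graycomatrix(gray, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    texture = [graycoprops(glcm, p)[0, 0]
               for p in ("contrast", "correlation", "energy", "homogeneity")]
    return np.concatenate([hist, texture])

def roi_distance(f1, f2):
    """Euclidean distance used for similarity ranking."""
    return float(np.linalg.norm(f1 - f2))

# Usage: rank target ROIs by distance to a query ROI.
# dists = [roi_distance(roi_features(q), roi_features(t)) for t in target_rois]
```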

Relevance: 10.00%

Abstract:

Retrieval of similar anatomical structures of brain MR images across patients would help the expert in the diagnosis of diseases. In this paper, a modified local binary pattern with ternary encoding, called the modified local ternary pattern (MOD-LTP), is introduced; it is more discriminant and less sensitive to noise in near-uniform regions, and is used to locate slices belonging to the same level from the brain MR image database. The ternary encoding depends on a threshold, which is either user-specified or calculated locally based on the variance of the pixel intensities in each window. The variance-based local threshold makes the MOD-LTP more robust to noise and global illumination changes. The retrieval performance is shown to improve by taking region-based moment features of MOD-LTP and iteratively reweighting them based on the user's feedback. The average rank obtained using the iterated and weighted moment features of MOD-LTP with a local variance-based threshold is one to two times better than rotational-invariant LBP (Unay, D., Ekin, A. and Jasinschi, R.S. (2010) Local structure-based region-of-interest retrieval in brain MR images. IEEE Trans. Inf. Technol. Biomed., 14, 897-903.) in retrieving the first 10 relevant images.
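A rough sketch of a variance-thresholded ternary encoding in the spirit of MOD-LTP; the 8-neighbourhood, the scaling of the threshold with the local standard deviation, and the raw three-valued output (rather than the paper's exact MOD-LTP code construction) are all illustrative assumptions:

```python
import numpy as np

def local_ternary_codes(img, k=0.5):
    """Ternary codes for the 8-neighbourhood of each interior pixel, with the
    threshold derived from the local 3x3 window (assumed scaling:
    t = k * local standard deviation)."""
    img = img.astype(np.float64)
    rows, cols = img.shape
    codes = np.zeros((rows - 2, cols - 2, 8), dtype=np.int8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            t = k * img[i - 1:i + 2, j - 1:j + 2].std()  # variance-based threshold
            centre = img[i, j]
            for n, (di, dj) in enumerate(offsets):
                diff = img[i + di, j + dj] - centre
                codes[i - 1, j - 1, n] = 1 if diff > t else (-1 if diff < -t else 0)
    return codes
```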

Relevance: 10.00%

Abstract:

Detection of objects in video is a highly demanding area of research. Background subtraction algorithms can yield good results in foreground object detection. This work presents a hybrid codebook-based background subtraction to extract the foreground ROI from the background. Codebooks are used to store compressed information, demanding less memory and allowing high-speed processing. The hybrid method, which uses block-based and pixel-based codebooks, provides efficient detection results: the high-speed processing capability of block-based background subtraction and the high precision rate of pixel-based background subtraction are both exploited to yield an efficient background subtraction system. The block stage produces a coarse foreground area, which is then refined by the pixel stage. The system's performance is evaluated with different block sizes and with different block descriptors such as the 2D DCT and FFT. Experimental analysis based on statistical measurements yields a precision, recall, similarity and F-measure for the hybrid system of 88.74%, 91.09%, 81.66% and 89.90% respectively, demonstrating the efficiency of the proposed system.
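A toy version of the pixel-stage codebook idea: each pixel keeps a list of intensity-range codewords learned from background history, and a value matching no codeword is declared foreground. Real codebook models also track colour distortion, brightness bounds and access statistics, and the block stage is omitted here; all parameters are assumptions for illustration:

```python
def train_codebook(samples, tol=10):
    """Build a list of [low, high] intensity codewords for one pixel."""
    codebook = []
    for v in samples:
        for word in codebook:
            if word[0] - tol <= v <= word[1] + tol:
                word[0], word[1] = min(word[0], v), max(word[1], v)
                break
        else:
            codebook.append([v, v])
    return codebook

def is_foreground(v, codebook, tol=10):
    """A pixel is foreground if no codeword range accepts it."""
    return all(not (w[0] - tol <= v <= w[1] + tol) for w in codebook)

history = [100, 102, 99, 101, 180, 182]   # bimodal background (e.g. flicker)
cb = train_codebook(history)
print(is_foreground(101, cb), is_foreground(140, cb))  # False True
```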

Relevance: 10.00%

Abstract:

Stereoscopic 3-D display is based on the true-to-life presentation of different perspectives to the right and left eye. It is gaining ever greater importance in medicine, architecture, design, computer games and cinema, and in the future possibly also in television. 3-D displays provide the additional reproduction of spatial depth and can be roughly divided into four groups: stereoscopes and head-mounted displays, glasses-based systems, autostereoscopic displays, and true 3-D displays. Among these, the glasses-free autostereoscopic approach, in which N≥2 perspectives are used, has high potential. The best quality in this group can be achieved with the method of integral photography, which encodes both horizontal and vertical parallax. However, the method is very elaborate and is therefore rarely used. The best compromise between performance and price is offered by precisely manufactured lenticular lens sheets (LRS), which are superior to the previously known barrier masks with respect to light efficiency and optical properties. A high physical monitor resolution is required in particular for ergonomically favorable multi-perspective 3-D display; in modern TFT displays this is already quite high. A further improvement, by a theoretical factor of three, is achieved by individually addressing the adjacent subpixels in the colors red, green and blue. This is made possible because the color resolution of the human visual system is about one order of magnitude lower than its luminance resolution. A subpixel filtering can thus be implemented which, in accordance with these physiological conditions, works with the YUV color model, which separates luminance and chrominance. Furthermore, slanting the lenses at a ratio of 1:6 proves favorable: color artifacts are minimized, and image sharpness is increased through a less systematic magnification of the technologically unavoidable separating elements between the subpixels. The degree of slant is freely selectable; in this sense the filtering is to be understood as adaptive to the slant angle, although this value is an invariant for a given 3-D monitor. The quantity to be maximized is the parameter perspective-pixels, the product of the number of perspectives N and the effective resolution per perspective. The ideal case of a tripling is not reached in practice; measurements with test images as well as character-recognition tests yielded a value of just over 2. This is nevertheless a significant improvement in the quality of the 3-D display. In the future, further improvements in this target quantity can be expected from new technologies with finer resolution than TFT, such as LCoS or OLED. A combination with the proposed filtering method will of course remain possible and, where appropriate, sensible.
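The subpixel interleaving described above can be expressed as a mapping from pixel position and color channel to a view number. The sketch below is a generic van-Berkel-style mapping for a 1:6 slant (with square pixels, the lens axis shifts half a subpixel per row); the number of views and the omission of the YUV subpixel filtering are simplifying assumptions, not the thesis's implementation:

```python
def view_index(x, y, c, n_views=8):
    """Assign color subpixel c (0=R, 1=G, 2=B) of pixel (x, y) to one of
    n_views perspectives for a lenticular slanted at 1:6. The view phase
    advances with the horizontal subpixel index 3*x + c and retreats by
    half a subpixel per row (assumed generic mapping)."""
    u = 3 * x + c                  # horizontal subpixel index
    return int(round(u - 0.5 * y)) % n_views

# Interleaving an output frame from n_views rendered perspectives:
# out[y, x, c] = views[view_index(x, y, c)][y, x, c]
print(view_index(10, 0, 0), view_index(10, 6, 0))  # shifted by the slant
```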

Relevance: 10.00%

Abstract:

Distributed systems are one of the most vital components of the economy. The most prominent example is probably the internet, a constituent element of our knowledge society. In recent years, the number of novel network types has steadily increased. Amongst others, sensor networks, distributed systems composed of tiny computational devices with scarce resources, have emerged. The further development and heterogeneous connection of such systems imposes new requirements on the software development process. Mobile and wireless networks, for instance, have to organize themselves autonomously and must be able to react to changes in the environment and to failing nodes alike. Researching new approaches for the design of distributed algorithms may lead to methods with which these requirements can be met efficiently.

In this thesis, one such method is developed, tested, and discussed with respect to its practical utility. Our new design approach for distributed algorithms is based on Genetic Programming, a member of the family of evolutionary algorithms. Evolutionary algorithms are metaheuristic optimization methods which copy principles from natural evolution. They use a population of solution candidates which they try to refine step by step in order to attain optimal values for predefined objective functions. The synthesis of an algorithm with our approach starts with an analysis step in which the desired global behavior of the distributed system is specified. From this specification, objective functions are derived which steer a Genetic Programming process in which the solution candidates are distributed programs. The objective functions rate how closely these programs approximate the goal behavior in multiple randomized network simulations. The evolutionary process step by step selects the most promising solution candidates and modifies and combines them with mutation and crossover operators. In this way, a description of the global behavior of a distributed system is translated automatically into programs which, if executed locally on the nodes of the system, exhibit this behavior.

In our work, we test six different ways of representing distributed programs, comprising adaptations and extensions of well-known Genetic Programming methods (SGP, eSGP, and LGP), one bio-inspired approach (Fraglets), and two new program representations designed by us, called Rule-based Genetic Programming (RBGP, eRBGP). We breed programs in these representations for three well-known example problems in distributed systems: election algorithms, distributed mutual exclusion at a critical section, and the distributed computation of the greatest common divisor of a set of numbers. Synthesizing distributed programs the evolutionary way does not necessarily lead to the envisaged results. In a detailed analysis, we discuss the problematic features which make this form of Genetic Programming particularly hard. The two Rule-based Genetic Programming approaches were developed especially in order to mitigate these difficulties. In our experiments, at least one of them (eRBGP) turned out to be a very efficient approach and was in most cases superior to the other representations.
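The evolutionary core described above can be condensed into a generic generational loop; the representation-specific mutation and crossover operators and the simulation-based objective functions are abstracted into callables, and the bitstring demo problem merely stands in for fitness values that would really come from randomized network simulations:

```python
import random

random.seed(1)

def evolve(population, fitness, mutate, crossover, generations=50, elite=2):
    """Generic generational GP loop: rank candidates by the objective
    function, keep an elite, and refill the population with mutated
    crossover children of tournament winners."""
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        next_gen = ranked[:elite]
        while len(next_gen) < len(population):
            a, b = (max(random.sample(ranked, 3), key=fitness) for _ in range(2))
            next_gen.append(mutate(crossover(a, b)))
        population = next_gen
    return max(population, key=fitness)

# Placeholder problem: maximize the number of ones in a bitstring.
n = 16
pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(30)]
fit = lambda g: sum(g)
mut = lambda g: [b ^ (random.random() < 0.05) for b in g]
cx = lambda a, b: a[:n // 2] + b[n // 2:]
print(evolve(pop, fit, mut, cx))
```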

Relevance: 10.00%

Abstract:

Excimer lasers are pulsed gas lasers that produce laser emission in the form of line radiation, depending on the gas mixture, in the UV. The first discharge-pumped excimer laser was demonstrated by Ischenko in 1977. All commercially available excimer lasers are discharge-pumped systems. To obtain the population inversion necessary to make the laser oscillate, very strong pumping is required because of the short wavelength. This pump power must be delivered by a pulsed-power module. Thyratrons, low-pressure switching tubes, are commonly used as switching elements, but their lifetime is very limited. Since the mid-1990s, semiconductor switches with pulse-compression stages have therefore become increasingly established in this application as well. This work attempts to replace the pulse compression with a directly switching semiconductor stack, thereby reducing the losses and eliminating the effort required for the pulse compression. In addition, the maximum possible repetition rate can be increased. To calculate the stress on the components, models that are as simple as possible yet powerful were developed for all components. Since the normally available component data refer to other applications, fundamental measurements in the time domain of the later application had to be made for all components. For the nonlinear inductors, a simple test procedure was developed to determine the losses at very high magnetization rates. These measurements are the basis for the model, which essentially describes a current-dependent inductance. This model was used for the "magnetic assist", which reduces the turn-on losses in the semiconductors. The pulse capacitors were likewise measured, with a procedure developed in this work, close to the later operating parameters. It turned out that the very common Class II ceramic capacitors are not suitable for this application. Class I high-voltage multilayer capacitors, which show considerably better behavior, were therefore used as the storage bank. The semiconductor devices used were also characterized in a test procedure close to the later operating parameters. It turned out that only modern power MOSFETs are suitable for this use, and for the diodes only silicon carbide (SiC) Schottky diodes can be used in this application. In principle, different topologies are possible for the application; on closer examination, however, only the C-C transfer arrangement can deliver the desired results. This topology was realized. It consists essentially of a storage bank that is charged by the power supply; from this bank, the energy is transferred into the laser head via the switch. Because of the high voltages and currents, 24 switching elements must be connected in series, with 4 in parallel. The switches are driven via highly insulating gate transformers. It was shown that carefully designed dynamic and static voltage grading is necessary for safe operation. In this work, operation with a real laser chamber as load was realized up to 6 kHz, limited only by the maximum possible repetition rate of the laser chamber.
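For orientation, an ideal C-C transfer stage obeys simple resonant-circuit relations: with series capacitance Cs = C1·C2/(C1+C2) and loop inductance L, the energy transfer takes half a resonant period, t = π·√(L·Cs), with peak current V0·√(Cs/L). A sketch with invented example values, not the parameters of the module built in this work:

```python
import math

def cc_transfer(c1, c2, l, v0):
    """Ideal C-C transfer through loop inductance l from c1 (charged to v0)
    into c2: returns the half-period transfer time and the peak current."""
    cs = c1 * c2 / (c1 + c2)
    t_transfer = math.pi * math.sqrt(l * cs)
    i_peak = v0 * math.sqrt(cs / l)
    return t_transfer, i_peak

# Invented example values, not the module's real parameters.
t, i = cc_transfer(c1=100e-9, c2=100e-9, l=100e-9, v0=30e3)
print(f"transfer time {t*1e9:.0f} ns, peak current {i/1e3:.1f} kA")
```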

Relevance: 10.00%

Abstract:

Enhanced reality visualization is the process of enhancing an image by adding to it information which is not present in the original image. A wide variety of information can be added to an image, ranging from hidden lines or surfaces to textual or iconic data about a particular part of the image. Enhanced reality visualization is particularly well suited to neurosurgery. By rendering brain structures which are not visible, at the correct location in an image of a patient's head, the surgeon is essentially provided with X-ray vision: he can visualize the spatial relationship between brain structures before he performs a craniotomy, and during the surgery he can see what is under the next layer before he cuts through. Given a video image of the patient and a three-dimensional model of the patient's brain, the problem enhanced reality visualization faces is to render the model from the correct viewpoint and overlay it on the original image. The relationship between the coordinate frames of the patient, the patient's internal anatomy scans and the image plane of the camera observing the patient must be established. This problem is closely related to the camera calibration problem. This report presents a new approach to finding this relationship and develops a system for performing enhanced reality visualization in a surgical environment. Immediately prior to surgery, a few circular fiducials are placed near the surgical site. An initial registration of the video and internal data is performed using a laser scanner. Following this, our method is fully automatic, runs in nearly real time, is accurate to within a pixel, allows both patient and camera motion, automatically corrects for changes to the internal camera parameters (focal length, focus, aperture, etc.) and requires only a single image.
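The registration step described above can be sketched with a standard pose-from-fiducials computation; OpenCV's solvePnP here stands in for the report's own method, and the fiducial coordinates, image detections and camera matrix are placeholder data:

```python
import numpy as np
import cv2

# 3-D fiducial positions in the patient/scan coordinate frame (placeholder data).
object_points = np.array([[0, 0, 0], [80, 0, 0], [0, 60, 0],
                          [80, 60, 0], [40, 30, 20]], dtype=np.float64)
# Their detected centres in the video image, in pixels (placeholder data).
image_points = np.array([[320, 240], [480, 238], [322, 360],
                         [478, 362], [400, 300]], dtype=np.float64)
camera_matrix = np.array([[800, 0, 320],
                          [0, 800, 240],
                          [0, 0, 1]], dtype=np.float64)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, camera_matrix, None)
# rvec/tvec map model coordinates into the camera frame; the brain model can
# now be rendered from this viewpoint and overlaid on the video image.
projected, _ = cv2.projectPoints(object_points, rvec, tvec, camera_matrix, None)
```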

Relevance: 10.00%

Abstract:

All intelligence relies on search --- for example, the search for an intelligent agent's next action. Search is only likely to succeed in resource-bounded agents if they have already been biased towards finding the right answer. In artificial agents, the primary source of bias is engineering. This dissertation describes an approach, Behavior-Oriented Design (BOD), for engineering complex agents. A complex agent is one that must arbitrate between potentially conflicting goals or behaviors. Behavior-oriented design builds on work in behavior-based and hybrid architectures for agents, and on the object-oriented approach to software engineering. The primary contributions of this dissertation are:

1. The BOD architecture: a modular architecture in which each module provides specialized representations to facilitate learning. This includes one pre-specified module and representation for action selection or behavior arbitration. The specialized representation underlying BOD action selection is Parallel-rooted, Ordered, Slip-stack Hierarchical (POSH) reactive plans.

2. The BOD development process: an iterative process that alternately scales the agent's capabilities and then optimizes the agent for simplicity, exploiting tradeoffs between the component representations. This ongoing process for controlling complexity not only provides bias for the behaving agent, but also facilitates its maintenance and extendibility.

The secondary contributions of this dissertation include two implementations of POSH action selection, a procedure for identifying useful idioms in agent architectures and using them to distribute knowledge across agent paradigms, several examples of applying BOD idioms to established architectures, an analysis and comparison of the attributes and design trends of a large number of agent architectures, a comparison of biological (particularly mammalian) intelligence to artificial agent architectures, a novel model of primate transitive inference, and many other examples of BOD agents and BOD development.
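POSH action selection itself is not reproduced here; the sketch below shows only the elementary idea that its drive collections generalize, namely a priority-ordered list of (trigger, action) pairs checked each cycle, with invented example drives:

```python
from typing import Callable, Dict, List, Tuple

Drive = Tuple[Callable[[Dict[str, bool]], bool], Callable[[], str]]

def select_action(drives: List[Drive], percepts: Dict[str, bool]) -> str:
    """Fire the highest-priority drive whose trigger holds
    (drives are ordered by priority, highest first)."""
    for trigger, action in drives:
        if trigger(percepts):
            return action()
    return "idle"

drives: List[Drive] = [
    (lambda p: p["threat"], lambda: "flee"),   # invented example drives
    (lambda p: p["hungry"], lambda: "forage"),
    (lambda p: True,        lambda: "explore"),
]
print(select_action(drives, {"threat": False, "hungry": True}))  # forage
```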

Relevance: 10.00%

Abstract:

This paper describes a general, trainable architecture for object detection that has previously been applied to face and people detection, with a new application to car detection in static images. Our technique is a learning-based approach that uses a set of labeled training data from which an implicit model of an object class -- here, cars -- is learned. Instead of pixel representations, which may be noisy and therefore may not provide a compact representation for learning, our training images are transformed from pixel space to that of Haar wavelets, which respond to local, oriented, multiscale intensity differences. These feature vectors are then used to train a support vector machine classifier. The detection of cars in images is an important step in applications such as traffic monitoring, driver assistance systems and surveillance, among others. We show several examples of car detection on out-of-sample images, along with an ROC curve that highlights the performance of our system.
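A sketch of the described pipeline using common library stand-ins: PyWavelets' 2-D Haar decomposition replaces the paper's overcomplete wavelet dictionary, the quadratic-kernel SVM is an assumed choice, and the training data are random placeholders rather than labeled car patches:

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def haar_features(img, levels=2):
    """Flatten a 2-D Haar wavelet decomposition into a feature vector of
    local, oriented, multiscale intensity differences."""
    coeffs = pywt.wavedec2(img, "haar", level=levels)
    parts = [coeffs[0].ravel()]
    for (ch, cv, cd) in coeffs[1:]:
        parts += [ch.ravel(), cv.ravel(), cd.ravel()]
    return np.concatenate(parts)

# Random placeholder "car" / "non-car" patches instead of labeled data.
rng = np.random.default_rng(0)
X = np.stack([haar_features(rng.random((32, 32))) for _ in range(40)])
y = np.array([0] * 20 + [1] * 20)
clf = SVC(kernel="poly", degree=2).fit(X, y)  # quadratic kernel (assumed)
print(clf.predict(X[:2]))
```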

Relevance: 10.00%

Abstract:

Abstract taken from the publication.