849 results for 380303 Computer Perception, Memory and Attention


Relevance: 100.00%

Abstract:

Grey Level Co-occurrence Matrices (GLCM) are one of the earliest techniques used for image texture analysis. In this paper we define a new feature, called trace, extracted from the GLCM, and discuss its implications for texture analysis in the context of Content Based Image Retrieval (CBIR). The theoretical extension of the GLCM to n-dimensional grey-scale images is also discussed. The results indicate that trace features outperform Haralick features when applied to CBIR.
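The abstract does not spell out how the trace feature is defined; the short Python sketch below assumes the most natural reading — the trace of the normalised co-occurrence matrix, i.e. the probability that two pixels at the chosen offset share the same grey level. All function names and the toy image are illustrative, not taken from the paper.

import numpy as np

def glcm(image, dx=1, dy=0, levels=8):
    """Grey-level co-occurrence matrix for a single displacement (dx, dy)."""
    m = np.zeros((levels, levels), dtype=np.float64)
    h, w = image.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[image[y, x], image[y + dy, x + dx]] += 1
    s = m.sum()
    return m / s if s else m

def glcm_trace(image, **kwargs):
    """Assumed 'trace' feature: sum of the main diagonal of the normalised GLCM."""
    return np.trace(glcm(image, **kwargs))

# Toy example on an 8-level random image
img = np.random.randint(0, 8, size=(64, 64))
print(glcm_trace(img, dx=1, dy=0, levels=8))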

Relevance: 100.00%

Abstract:

Cancer treatment is most effective when the disease is detected early, and progress in treatment will be closely related to the ability to reduce the proportion of misses in the cancer detection task. The effectiveness of algorithms for detecting cancers can be greatly increased if these algorithms work synergistically with those for characterizing normal mammograms. This research work combines computerized image analysis techniques and neural networks to separate out some fraction of the normal mammograms with extremely high reliability, based on normal tissue identification and removal. The presence of clustered microcalcifications is one of the most important, and sometimes the only, sign of cancer on a mammogram; 60% to 70% of non-palpable breast carcinomas demonstrate microcalcifications on mammograms [44], [45], [46]. Wavelet transform (WT) based techniques are applied to the remaining (potentially abnormal) mammograms to detect possible microcalcifications. The goal of this work is to improve the detection performance and throughput of screening mammography, thus providing a 'second opinion' to the radiologists. State-of-the-art discrete wavelet transform (DWT) computation algorithms are not suitable for practical applications with memory and delay constraints, since the DWT is not a block transform. Hence, this work also takes up the development of a Block DWT (BDWT) computational structure with low processing memory requirements.
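The abstract does not describe the BDWT structure itself; the following minimal Python sketch only illustrates the block idea under simplifying assumptions — a single-level 2-D Haar DWT applied independently to each block (a real block DWT must also handle the block boundaries) — so that only one block has to be held in working memory at a time. All names and sizes are illustrative.

import numpy as np

def haar_dwt2(block):
    """Single-level 2-D Haar DWT of one block -> (LL, LH, HL, HH) sub-bands.
    Assumes the block has even height and width."""
    lo = (block[0::2, :] + block[1::2, :]) / np.sqrt(2)   # row low-pass
    hi = (block[0::2, :] - block[1::2, :]) / np.sqrt(2)   # row high-pass
    ll = (lo[:, 0::2] + lo[:, 1::2]) / np.sqrt(2)
    lh = (lo[:, 0::2] - lo[:, 1::2]) / np.sqrt(2)
    hl = (hi[:, 0::2] + hi[:, 1::2]) / np.sqrt(2)
    hh = (hi[:, 0::2] - hi[:, 1::2]) / np.sqrt(2)
    return ll, lh, hl, hh

def block_dwt(image, block_size=32):
    """Transform the image block by block, keeping only one block in memory."""
    coeffs = []
    for y in range(0, image.shape[0], block_size):
        for x in range(0, image.shape[1], block_size):
            coeffs.append(haar_dwt2(image[y:y + block_size, x:x + block_size]))
    return coeffs

mammogram = np.random.rand(256, 256)   # placeholder for a real mammogram patch
subbands = block_dwt(mammogram, block_size=32)
print(len(subbands))                   # 64 blocks, four sub-bands each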

Relevance: 100.00%

Abstract:

For the theoretical investigation of local phenomena (adsorption at surfaces, defects or impurities within a crystal, etc.) one can assume that the effects caused by the local disturbance are limited to the neighbouring particles. With this model, well known as the cluster approximation, an infinite system can be simulated by a much smaller segment of the surface (cluster). The size of this segment varies strongly between systems. Calculations of the convergence of bond distance and binding energy of an aluminium atom adsorbed on an Al(100) surface showed that more than 100 atoms are necessary to obtain a sufficient description of the surface properties. With a fully quantum-mechanical approach, however, systems of this size cannot be calculated because of the required computer memory and processor time. We therefore developed an embedding procedure for the simulation of surfaces and solids, in which the whole system is partitioned into several parts that are treated differently: the inner part (the cluster), located near the site of the adsorbate, is calculated fully self-consistently and is embedded into an environment, while the influence of the environment on the cluster enters the relativistic Kohn-Sham equations as an additional, external potential. The procedure is based on density functional theory, which means that the choice of the electronic density of the environment determines the quality of the embedding. The environment density was modelled in three different ways: atomic densities; densities transferred from a large preceding calculation without embedding; and copied bulk densities. The embedding procedure was tested on the atomic adsorption of Al on Al(100) and Cu on Cu(100). The result was that, if the environment is chosen appropriately for the Al system, only 9 embedded atoms are needed to reproduce the results of exact slab calculations. For the Cu system, calculations without the embedding procedure were carried out first, showing that 60 atoms are already sufficient as a surface cluster; using the embedding procedure, the same values were obtained with only 25 atoms. This is a substantial improvement considering that the calculation time increases cubically with the number of atoms. With the embedding method, infinite systems can be treated by molecular methods. Additionally, the program code was extended to allow molecular-dynamics simulations, so that, besides the previous fixed-core calculations, structures of small clusters and surfaces can now also be investigated. A first application was the adsorption of Cu on Cu(100): we calculated the relaxed positions of the atoms located close to the adsorption site and afterwards performed the fully quantum-mechanical calculation of this system, repeating the procedure for different distances to the surface. Thus a realistic adsorption process could be examined for the first time. It should be noted that for the Cu reference calculations (without embedding) we began to parallelize the entire program code; only because of this were the investigations of the 100-atom Cu surface clusters possible. Owing to the good efficiency of both the parallelization and the developed embedding procedure, we will be able to apply the combination in the future. Building on this work, it will be possible to bring the results of fully relativistic molecular calculations into these areas, which will be especially interesting for heavy systems.
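Schematically — this is only a sketch based on the statement above that the environment enters as an additional external potential; the precise form of the embedding potential is not given in this abstract — the embedded cluster obeys relativistic Kohn-Sham equations of the form

\left[\, c\,\boldsymbol{\alpha}\cdot\mathbf{p} + \beta m c^{2}
      + v_{\mathrm{eff}}[\rho_{\mathrm{cluster}}](\mathbf{r})
      + v_{\mathrm{emb}}[\rho_{\mathrm{env}}](\mathbf{r}) \,\right]
  \psi_{i}(\mathbf{r}) = \varepsilon_{i}\,\psi_{i}(\mathbf{r}),

where v_eff collects the usual external, Hartree and exchange-correlation potentials of the cluster, only the cluster density is updated self-consistently, and v_emb is built from the fixed model density of the environment (atomic, transferred, or copied bulk, as listed above).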

Relevance: 100.00%

Abstract:

The markup language XML is used to annotate documents and has established itself as the standard data-exchange format. This creates the need not only to store and transfer XML documents as plain text files, but also to persist them in a better-structured form, for instance in dedicated XML databases or in relational databases. Relational databases have so far relied on two fundamentally different approaches: the XML documents are either stored unchanged as binary or character-string objects, or they are split up so that they can be stored, normalized, in conventional relational tables (the so-called "flattening" or "shredding" of the hierarchical structure). This dissertation pursues a new approach that represents a middle way between the existing solutions and takes up the possibilities of the evolved SQL standard. SQL:2003 defines complex structured and collection types (tuples, arrays, lists, sets, multisets) that make it possible to map XML documents onto relational structures in such a way that the hierarchical organization is preserved. This offers two advantages: on the one hand, proven technologies from the relational database world remain fully available; on the other hand, the SQL:2003 types preserve the inherent tree structure of the XML documents, so that it is not necessary to reassemble them on demand through expensive joins over tuples that are usually normalized and spread across several tables. This work first clarifies fundamental questions about suitable, efficient ways of mapping XML documents onto SQL:2003-compliant data types. Building on this, a suitable, reversible mapping procedure is developed, implemented within a prototype application, and analysed. In the design of the mapping procedure, particular emphasis is placed on its usability in combination with an existing, mature relational database management system (DBMS). Since support for SQL:2003 in commercial DBMSs is still incomplete, it has to be examined to what extent the individual systems are suitable for the mapping procedure to be implemented. It turns out that, among the products considered, the DBMS IBM Informix offers the best support for complex structured and collection types. To better assess the performance of the procedure, the work measures the required runtime as well as the main-memory and database-storage consumption of the implementation and evaluates the results.
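As a purely hypothetical illustration (not the mapping or schema developed in the dissertation), the following Python sketch turns an XML element into a nested (tag, attributes, children, text) structure whose parts mirror SQL:2003 ROW and collection types, so that the tree shape is preserved and the mapping stays reversible:

import xml.etree.ElementTree as ET

def to_row(elem):
    """XML element -> (tag, attribute pairs, ordered child list, text):
    roughly a ROW whose children form a LIST and whose attributes form a MULTISET.
    Tails/whitespace are ignored for brevity."""
    return (elem.tag,
            sorted(elem.attrib.items()),
            [to_row(child) for child in elem],
            (elem.text or "").strip())

def to_xml(row):
    """Inverse mapping: rebuild the element from the nested structure."""
    tag, attrs, children, text = row
    elem = ET.Element(tag, dict(attrs))
    elem.text = text
    elem.extend(to_xml(child) for child in children)
    return elem

doc = ET.fromstring('<book id="1"><title>SQL:2003</title><author>A</author></book>')
row = to_row(doc)
print(ET.tostring(to_xml(row)) == ET.tostring(doc))   # True: round trip preserved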

Relevance: 100.00%

Abstract:

In Mark Weiser's vision of ubiquitous computing, computers disappear from the focus of the users and interact seamlessly with other computers and users in order to provide information and services. This shift away from direct human-computer interaction requires applications to interact in another way, without bothering the user. Context is the information that can be used to characterize the situation of persons, locations, or other objects relevant to the applications. Context-aware applications are capable of monitoring and exploiting knowledge about external operating conditions; they can adapt their behaviour based on the retrieved information and thus replace, at least to a certain extent, the missing user interactions. Context awareness can therefore be assumed to be an important ingredient for applications in ubiquitous computing environments. However, context management in such environments must reflect their specific characteristics, for example distribution, mobility, resource-constrained devices, and heterogeneity of context sources. Modern mobile devices are equipped with fast processors, sufficient memory, and several sensors, such as a Global Positioning System (GPS) sensor, a light sensor, or an accelerometer. Since many applications in ubiquitous computing environments can exploit context information to enhance their service to the user, these devices are highly useful for context-aware applications. Additionally, context reasoners and external context providers can be incorporated. Several context sensors, reasoners and context providers may offer the same type of information, but the providers can differ in the quality levels (e.g. accuracy), the representations (e.g. a position given as coordinates or as an address) of the offered information, and the costs (such as battery consumption) of providing it. In order to simplify the development of context-aware applications, developers should be able to access context information transparently, without dealing with the underlying context-access techniques and distribution aspects. They should rather be able to express which kind of information they require, which quality criteria this information should fulfil, and how much the provision of this information may cost (not only monetary cost but also energy or performance usage). For this purpose, application developers as well as developers of context providers need a common language and vocabulary to specify which information they require or provide, respectively. These descriptions and criteria have to be matched, and such a matching will often require a transformation of the provided information in order to fulfil the criteria of the context-aware application. As more than one provider may fulfil the criteria, a selection process is required, in which the system has to trade off the quality of context and the costs offered by the context providers against the quality of context requested by the context consumer. This selection makes it possible to turn on context sources only when required; explicitly selecting context services, and thereby dynamically activating and deactivating the local context providers, has the advantage that resource consumption is reduced, since unused context sensors in particular are deactivated.
One promising solution is a middleware that provides appropriate support while following the principles of service-oriented computing, such as loose coupling, abstraction, reusability, and discoverability of context providers. This allows us to abstract context sensors, context reasoners and also external context providers as context services. In this thesis we present our solution, consisting of a context model and ontology, a context offer and query language, a comprehensive matching and mediation process, and a selection service. Especially the matching and mediation process and the selection service differ from existing work: the matching and mediation process allows the autonomous establishment of mediation processes in order to transfer information from an offered representation into a requested representation, and, in contrast to other approaches, the selection service selects not just one service per request but a set of services that fulfils all requests, which also facilitates the sharing of services. The approach is extensively reviewed with respect to the different requirements, and a set of demonstrators shows its usability in real-world scenarios.
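As a toy illustration of the offer/query matching and the cost/quality selection described above (hypothetical names and structures only, not the offer and query language of the thesis):

from dataclasses import dataclass

@dataclass
class ContextOffer:
    info_type: str        # e.g. "position"
    representation: str   # e.g. "wgs84" or "address"
    accuracy_m: float     # offered quality (lower is better)
    cost: float           # e.g. battery or monetary cost per update

@dataclass
class ContextQuery:
    info_type: str
    representation: str
    max_accuracy_m: float # required quality
    max_cost: float       # acceptable cost

def matches(offer, query, converters=()):
    """An offer matches if the type fits, the quality and cost criteria hold, and
    the representation is equal or can be mediated by a known converter."""
    rep_ok = (offer.representation == query.representation or
              (offer.representation, query.representation) in converters)
    return (offer.info_type == query.info_type and rep_ok and
            offer.accuracy_m <= query.max_accuracy_m and
            offer.cost <= query.max_cost)

def select(offers, query, converters=()):
    """Pick the matching offer with the best cost/quality trade-off."""
    candidates = [o for o in offers if matches(o, query, converters)]
    return min(candidates, key=lambda o: (o.cost, o.accuracy_m), default=None)

offers = [ContextOffer("position", "wgs84", 5.0, 0.8),
          ContextOffer("position", "address", 50.0, 0.1)]
query = ContextQuery("position", "address", 100.0, 1.0)
print(select(offers, query, converters={("wgs84", "address")}))

Unselected providers would simply stay switched off, which is the resource-saving effect mentioned above.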

Relevance: 100.00%

Abstract:

Traditionally, we've focussed on the question of how to make a system easy to code the first time, or perhaps on how to ease the system's continued evolution. But if we look at life-cycle costs, then we must conclude that the important question is how to make a system easy to operate. To do this we need to make it easy for the operators to see what's going on and then manipulate the system so that it does what it is supposed to. This is a radically different criterion for success. What makes a computer system visible and controllable? This is a difficult question, but it's clear that today's modern operating systems, with nearly 50 million source lines of code, are neither. Strikingly, the MIT Lisp Machine and its commercial successors provided almost the same functionality as today's mainstream systems, but with only 1 million lines of code. This paper is a retrospective examination of the features of the Lisp Machine hardware and software system. Our key claim is that by building the Object Abstraction into the lowest tiers of the system, great synergy and clarity were obtained. It is our hope that this is a lesson that can impact tomorrow's designs. We also speculate on how the spirit of the Lisp Machine could be extended to include a comprehensive access control model and how new layers of abstraction could further enrich this model.

Relevance: 100.00%

Abstract:

The Support Vector Machine (SVM) is a new and very promising classification technique developed by Vapnik and his group at AT&T Bell Labs. This new learning algorithm can be seen as an alternative training technique for Polynomial, Radial Basis Function and Multi-Layer Perceptron classifiers. An interesting property of this approach is that it is an approximate implementation of the Structural Risk Minimization (SRM) induction principle. The derivation of Support Vector Machines, their relationship with SRM, and their geometrical insight are discussed in this paper. Training an SVM is equivalent to solving a quadratic programming problem with linear and box constraints in a number of variables equal to the number of data points. When the number of data points exceeds a few thousand, the problem is very challenging, because the quadratic form is completely dense, so the memory needed to store the problem grows with the square of the number of data points. Therefore, training problems arising in some real applications with large data sets are impossible to load into memory and cannot be solved using standard non-linear constrained optimization algorithms. We present a decomposition algorithm that can be used to train SVMs over large data sets. The main idea behind the decomposition is the iterative solution of sub-problems and the evaluation of optimality conditions, which are used both to generate improved iterative values and to establish the stopping criteria for the algorithm. We present previous approaches, as well as results and important details of our implementation of the algorithm, using a second-order variant of the Reduced Gradient Method as the solver of the sub-problems. As an application of SVMs, we present preliminary results obtained by applying SVMs to the problem of detecting frontal human faces in real images.
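For reference — this is the standard textbook formulation, not quoted from the paper — the quadratic programme mentioned above is the SVM dual: given training points (x_i, y_i) with y_i in {-1, +1}, a kernel K and a regularization parameter C,

\max_{\alpha}\; \sum_{i=1}^{\ell} \alpha_i
  - \frac{1}{2} \sum_{i=1}^{\ell} \sum_{j=1}^{\ell}
    \alpha_i \alpha_j\, y_i y_j\, K(x_i, x_j)
\quad \text{subject to} \quad
0 \le \alpha_i \le C \;(i = 1,\dots,\ell), \qquad \sum_{i=1}^{\ell} \alpha_i y_i = 0 .

The \ell \times \ell kernel matrix is dense, which is exactly the memory bottleneck the decomposition algorithm avoids by optimizing over a small working set of variables at a time.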

Relevance: 100.00%

Abstract:

This is a selection of University of Southampton logos in both vector (SVG) and raster (PNG) formats. These are suitable for use on the web or in small documents and posters. You can open the SVG files using Inkscape (http://inkscape.org/download/?lang=en) and edit them directly. The University logo should not be modified, and attention should be paid to the branding guidelines found here: http://www.edshare.soton.ac.uk/10481 You must always leave a space the width of a capital O in "Southampton" on all four edges of the logo. The negative space makes the logo appear more prominently on the page.

Relevance: 100.00%

Abstract:

These are a range of logos created in the same way as those by Mr Patrick McSweeny (http://www.edshare.soton.ac.uk/11157). The logo has been extracted from PDF documents and is smoother and more accurate to the original logo design. Many thanks to Mr McSweeny for publishing the logo in SVG originally; I struggled to find it anywhere else. Files are in Inkscape SVG, PDF and PNG formats. From Mr Patrick McSweeney: This is a selection of University of Southampton logos in both vector (SVG) and raster (PNG) formats. These are suitable for use on the web or in small documents and posters. You can open the SVG files using Inkscape (http://inkscape.org/download/?lang=en) and edit them directly. The University logo should not be modified, and attention should be paid to the branding guidelines found here: http://www.edshare.soton.ac.uk/10481 You must always leave a space the width of a capital O in "Southampton" on all four edges of the logo. The negative space makes the logo appear more prominently on the page.

Relevance: 100.00%

Abstract:

Objectives. To characterize the neuropsychological profile of a sample of 22 patients diagnosed with HIV/AIDS at a fourth-level hospital in Bogotá. Materials and methods. Descriptive, exploratory study. The neuropsychological characteristics of people with HIV/AIDS were described, and the results of the subjects' neuropsychological assessment were analysed with SPSS. The recorded variables were age, gender, schooling, time since diagnosis and higher cognitive functions. Subjects with a diagnosis of HIV/AIDS and reports of subjective memory complaints were included in the study; subjects with a history or presence of psychiatric disorders were not excluded. Results. A database of 22 subjects was created, predominantly male (77.3%), with a mean age of 53.5 years. The functions with the greatest impairment, regardless of time since diagnosis, were sustained attention, declarative memory and executive function (inhibitory control); the best preserved functions were the visuospatial ones. Conclusions. It is essential that subjects diagnosed with HIV/AIDS be assessed by neuropsychology from the outset, so that they can be included in a prevention and cognitive rehabilitation protocol specific to this population. It is recommended that the protocol be designed by a multidisciplinary team of health professionals.

Relevance: 100.00%

Abstract:

This paper is a literature review that seeks to contribute to the understanding of the psychological processes underlying Roberts' theory of Lovemarks, which, within the field of marketing, has sought to replace the prevailing idea of brands. The first part provides an introduction to the evolution of brands from a psychological and marketing perspective. The second part explains the Lovemarks theory with emphasis on its components: the love/respect axis and the characteristics of mystery, sensuality and intimacy. In addition, the theory is supported by complementary literature and successful application cases. The third part corresponds to the identification and analysis of the psychological processes and aspects that explain the formation of a Lovemark: perception, memory, individual and social motivation, and emotion. The fourth and final part contains the conclusions and implications for the formation of the relationship between the consumer and a brand.

Relevance: 100.00%

Abstract:

Semantic memory has been studied from various fields. The first models emerged in cognitive psychology from the division proposed by Tulving between semantic and episodic memory. Over the past thirty years there have been parallel developments in the fields of psycholinguistics, cognitive psychology and cognitive neuropsychology. The aim of the present work is to review the contributions that have emerged within neuropsychology to the study of semantic memory and to present an updated overview of the points of consensus. First, the term "semantics" is defined conceptually within the field of neuropsychology. Then, a dichotomy that cuts across both psychological and neuropsychological models of semantic memory is addressed: the existence of modal versus amodal representations. Third, the main theoretical models in neuropsychology that emerged in an attempt to explain category-specific semantic deficits are developed. Finally, the more robust contributions and the points that still generate some discussion are reviewed.

Relevance: 100.00%

Abstract:

This study characterizes the differences and similarities in the repertoire of social skills of children from 12 different categories of special educational needs: autism, hearing impairment, mild intellectual disabilities, moderate intellectual disabilities, visual impairment, phonological disorder, learning disabilities, giftedness and talent, externalizing behavior problems, internalizing behavior problems, combined internalizing and externalizing behavior problems, and attention deficit hyperactivity disorder (ADHD). Teachers of 120 students, aged between 6 and 14 years and attending regular and special schools in four Brazilian states, responded to the Social Skills Rating System. Children with ADHD, autism, combined internalizing and externalizing behavior problems, and externalizing behavior problems presented a comparatively lower frequency of social skills. The intervention needs of each evaluated category are discussed.

Relevance: 100.00%

Abstract:

The identification and diagnosis of back injuries during the transition from the Taylorist model to the flexible model of production organization demand a parallel intervention by prevention actors at the workplace. This study applied three intervention models simultaneously (structured action analysis, musculoskeletal symptom questionnaires and musculoskeletal assessment) to work activities in a packaging plant. Seventy-two (72) operative workers participated in the study (28 of them with a musculoskeletal evaluation). Over an intervention period of 10 months, the physical, cognitive and organizational components and the dynamics of the production process were evaluated with respect to musculoskeletal demands. The differences established between objective risk exposure, back injury risk perception and appreciation, and a vertebral spine evaluation, before and after the intervention, determine the structure of a musculoskeletal risk management system. The study shows that back injury symptoms can be reduced more efficiently among operative workers by combining the recorded measures with the adjustment between work dynamics, changes at work and the development of efficient gestures. Relevance: the results of this study can be used to prevent back injuries in workers in flexible production processes.

Relevance: 100.00%

Abstract:

Office work and, specifically, computer work are performed over prolonged periods of static work, which is associated with the development of musculoskeletal disorders. Consequently, the authors conducted a cross-sectional study based on the evaluation of office workers (n=377) of a company dedicated to service activities (information management and customer service), with the aim of exploring the relationship between the structure of the work, the nature of the tasks and the presence of musculoskeletal problems, and of identifying the principles of a strategy that encourages postural transition. The information was collected through a form that asked about variables relating to the type of position held by the worker, the time devoted to computer activities, sick leave, medical history and current symptoms. The main medical antecedents found in the evaluated population were: arterial hypertension, 8%; dyslipidaemia, 23%; diabetes, 3%; and hypoglycaemia, 4%. Among the workers evaluated, 80% reported pain, specifically in the upper limb: hands, 26%; elbows, 3%; and shoulders, 4%; and in the spine: cervical, 32%; lumbar, 16%; and dorsal, 6%. Finally, it was found that 80% of the working time of the personnel studied is spent on static work activities, mostly devoted to data entry. The results of this study are applied to the development of principles for task design and of a strategy that seeks to promote postural transitions at work.