810 results for Distributed cognition
Abstract:
This paper describes the procedures used to create a distributed collection of topographic maps of the Austro-Hungarian Empire, the Spezialkarte der Österreichisch-ungarischen Monarchie, Maße 1:75,000 der Natur. This set of maps was published in Vienna between 1877 and 1914. The portion of the set used in this project includes 776 sheets; counting all editions, the set comprises over 3,665 sheets. The paper contains detailed information on how the maps were converted to digital images, how metadata were prepared, and how Web-browser access was created using ArcIMS Metadata Server. The project, funded by a 2004 National Leadership Grant from the Institute of Museum and Library Services (IMLS), was a joint project of the Homer Babbidge Library Map and Geographic Information Center at the University of Connecticut, the New York Public Library, and the American Geographical Society's Map Library at the University of Wisconsin–Milwaukee.
Abstract:
By Otto v. Seemen
Abstract:
Background: At present, prostate cancer screening (PCS) guidelines require a discussion of risks, benefits, alternatives, and personal values, making decision aids an important tool to help convey information and clarify values. Objective: The overall goal of this study is to provide evidence of the reliability and validity of a PCS anxiety measure and the Decisional Conflict Scale (DCS). Methods: Using data from a randomized, controlled PCS decision aid trial that measured PCS anxiety at baseline and the DCS at baseline (T0) and at two weeks (T2), four psychometric properties were assessed: (1) internal consistency reliability, indicated by factor analysis, intraclass correlations, and Cronbach's α; (2) construct validity, indicated by patterns of Pearson correlations among subscales; (3) discriminant validity, indicated by the measure's ability to discriminate between undecided men and those with a definite screening intention; and (4) factor validity and invariance, assessed using confirmatory factor analyses (CFA). Results: The PCS anxiety measure had adequate internal consistency reliability and good construct and discriminant validity. CFAs indicated that the 3-factor model did not have adequate fit. CFAs for a general PCS anxiety measure and a PSA anxiety measure indicated adequate fit. The general PCS anxiety measure was invariant across clinics. The DCS had adequate internal consistency reliability, except for the support subscale, and had adequate discriminant validity. Good construct validity was found at the private clinic, but at the public clinic it was found only for the feeling-informed subscale. The traditional DCS did not have adequate fit at T0 or at T2. The alternative DCS had adequate fit at T0 but was not identified at T2. Factor loadings indicated that two subscales, feeling informed and feeling clear about values, were not distinct factors. Conclusions: Our general PCS anxiety measure can be used in PCS decision aid studies. The alternative DCS may be appropriate for men eligible for PCS. Implications: More emphasis needs to be placed on the development of PCS anxiety items relating to testing procedures. We recommend that the two DCS versions be validated in other samples of men eligible for PCS and in other health care decisions that involve uncertainty.
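For context, the internal consistency index referred to above, Cronbach's α, is conventionally computed from the k item variances and the variance of the total score (this is the standard formula, not anything specific to this study's data):

\[ \alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right) \]

where \(\sigma^{2}_{Y_i}\) is the variance of item i and \(\sigma^{2}_{X}\) is the variance of the total score; values around 0.7 or higher are usually read as adequate internal consistency.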
Abstract:
This study examined the effects of skipping breakfast on selected aspects of children's cognition, specifically their memory (both immediate and one week following presentation of stimuli), mental tempo, and problem-solving accuracy. Test instruments included the Hagen Central/Incidental Recall Test, the Matching Familiar Figures Test, and the McCarthy Digit Span and Tapping Tests. The study population consisted of 39 healthy nine- to eleven-year-old children who were admitted for overnight stays in a clinical research setting on two nights approximately one week apart. The study was designed to adequately monitor and control subjects' food consumption. A cross-over design was used in which each child skipped breakfast on either the first or the second visit, assigned at random; in this way, subjects acted as their own controls. Subjects were tested at noon on both visits, representing an 18-hour fast. Analysis focused on whether fasting for this period of time affected an individual's performance. Results indicated that on most of the tests, subjects were not significantly affected by skipping breakfast for one morning. However, on tests of short-term central and incidental recall, subjects who had skipped breakfast recalled significantly more of the incidental cues, although at no apparent expense to their storage of central information. In the area of problem-solving accuracy, subjects skipping breakfast at time two made significantly more errors on hard sections of the MFF Test. It should be noted that, although a large number of tests were conducted, these two tests showed the only significant differences. These significant results in the areas of short-term incidental memory and problem-solving accuracy were interpreted as an effect of subject fatigue. That is, when subjects missed breakfast they were more likely to become fatigued, and in the novel environment presented by the study setting it is probable that these subjects responded by entering Class II fatigue, which is characterized by behavioral excitability, diffuse attention, and altered performance patterns.
Abstract:
Kelly and Halverson are to be congratulated on their contribution to the field of education. Their efforts in designing the Comprehensive Assessment of Leadership for Learning (CALL) represent a step forward in the formative assessment of distributed leadership in schools, and their work is noteworthy for its rapid linking of survey assessment data to specific feedback and recommendations for users. Issues relevant to evidence-based practices, implementation, and a professional common language are addressed in this commentary.
Abstract:
From the perspective of so-called 'cognitive capitalism', this paper analyzes the sharing and customization strategies developed in Brazilian online game communities. Drawing on Bruno Latour's Actor-Network Theory (ANT), the work describes these socio-technical networks, identifying the relevant human and non-human actants for their role in what could also be described as a distributed cognitive process (HUTCHINS, 2000). This alternative form of participative consumption involves the social and creative production of tutorials; in-game and out-of-game editing; and all sorts of gathering, organization, and distribution of virtual data. The communities studied are related to the game Pro Evolution Soccer (PES) across its multiple platforms.
Abstract:
Distributed real-time embedded systems are becoming increasingly important to society. More demands will be made on them, and greater reliance will be placed on the delivery of their services. A relevant subset of them is high-integrity or hard real-time systems, where failure can cause loss of life, environmental harm, or significant financial loss. Additionally, the evolution of communication networks and paradigms, as well as the need for greater processing power and fault tolerance, has motivated the interconnection of electronic devices; many of these communication mechanisms can transfer data at high speed. The concept of distributed systems emerged to describe systems whose parts execute on several nodes that interact with each other via a communication network.
Java's popularity, facilities, and platform independence have made it an interesting language for the real-time and embedded community. This was the motivation for the development of the RTSJ (Real-Time Specification for Java), a language extension intended to allow the development of real-time systems. The use of Java in the development of high-integrity systems requires strict development and testing techniques; however, the RTSJ includes a number of language features that are forbidden in such systems. In the context of the HIJA project, the HRTJ (Hard Real-Time Java) profile was developed to define a robust subset of the language that is amenable to static analysis for high-integrity system certification. Currently, a specification is being developed under the Java Community Process (JSR-302); its purpose is to define the capabilities needed to create safety-critical applications with Java technology, called Safety Critical Java (SCJ). However, neither the RTSJ nor its profiles provide facilities for developing distributed real-time applications. This is an important issue, as most current and future systems will be distributed. The Distributed RTSJ (DRTSJ) Expert Group was created under the Java Community Process (JSR-50) in order to define appropriate abstractions to overcome this problem; currently there is no formal specification.
The aim of this thesis is to develop a communication middleware suitable for the development of distributed hard real-time systems in Java, based on the integration of the RMI (Remote Method Invocation) model and the HRTJ profile. It has been designed and implemented with the main requirements in mind: predictability and reliability in timing behavior and in resource usage. The design starts with the definition of a computational model which identifies, among other things, the communication model, the most appropriate underlying network protocols, the analysis model, and a subset of Java for hard real-time systems. In the design, remote references are the basic means for building distributed applications; they are associated with all the non-functional parameters and resources needed to implement synchronous or asynchronous remote invocations with real-time attributes. The proposed middleware separates resource allocation from the execution itself by defining two phases and a specific threading mechanism that guarantees suitable timing behavior. It also includes mechanisms to monitor the functional and the timing behavior. It provides independence from the network protocol by defining a network interface and modules. The JRMP protocol was modified to include the two phases, non-functional parameters, and message-size optimizations.
Although serialization is one of the fundamental operations needed to ensure proper data transmission, current implementations are not suitable for hard real-time systems and there are no alternatives. This thesis proposes a predictable serialization that introduces a new compiler to generate optimized code according to the computational model. The proposed solution has the advantage of allowing us to schedule the communications and to adjust memory usage at compilation time. In order to validate the design and the implementation, a demanding validation process was carried out, with emphasis on functional behavior, memory usage, processor usage (end-to-end response time and the response time in each functional block), and network usage (real consumption compared with the calculated consumption). The results obtained in an industrial application developed by Thales Avionics (a Flight Management System) and in exhaustive tests show that the design and the prototype are reliable for industrial applications with strict timing requirements.
Real-time embedded and distributed systems are increasingly important to society. Demand for them is growing, and we depend more and more on the services they provide. High-integrity systems constitute a subset of great importance; they are characterized by the fact that a failure in their operation can cause loss of human life, environmental damage, or substantial economic losses. The need to satisfy strict timing requirements makes their development more complex. As embedded systems continue to spread through our society, it is necessary to keep development costs under control through the use of appropriate techniques in their design, maintenance, and certification; in particular, a flexible, hardware-independent technology is required. The evolution of communication networks and paradigms, as well as the need for greater computing power and fault tolerance, has motivated the interconnection of electronic devices, and the communication mechanisms allow data to be transferred at high transmission speeds. In this context, the concept of a distributed system has emerged: systems whose components execute in parallel on several nodes and interact with each other via communication networks. An interesting concept is that of real-time systems that are neutral with respect to their execution platform, characterized by a lack of knowledge of that platform during design. This property is relevant because such systems should run on the widest possible variety of architectures, have an average lifetime of more than ten years, and may change where they execute. The Java programming language is a good basis for the development of this kind of system. For this reason RTSJ (Real-Time Specification for Java) was created, a language extension to allow the development of real-time systems. However, RTSJ does not provide facilities for the development of distributed real-time applications. This is an important limitation, given that most current and future systems will be distributed. The DRTSJ (Distributed RTSJ) group was created under the Java Community Process (JSR-50) in order to define abstractions that address this limitation, but at present no formal specification exists.
The objective of this thesis is to develop a communication middleware for the development of distributed real-time systems in Java, based on the integration of the RMI (Remote Method Invocation) model and the HRTJ profile. It has been designed and implemented taking into account the main requirements, such as predictability and reliability of the timing behavior and of resource usage. The design starts from the definition of a computational model which identifies, among other things, the communication model, the most suitable underlying network protocols, the analysis model, and a subset of Java for hard real-time systems. In the design, remote references are the basic means for building distributed applications; they are associated with all the non-functional parameters and resources needed to execute synchronous or asynchronous remote invocations with real-time attributes. The proposed middleware separates resource allocation from the execution itself by defining two phases and a specific threading mechanism that guarantees suitable timing behavior. Mechanisms to monitor the functional and timing behavior have also been included. Independence from the network protocol has been sought by defining a network interface and specific modules. The JRMP protocol has also been modified to include the different phases, non-functional parameters, and message-size optimizations. Although serialization is one of the fundamental operations for ensuring correct data transmission, current implementations are not suitable for critical systems and there are no alternatives. This work proposes a predictable serialization that has involved the development of a new compiler to generate optimized code according to the computational model. The proposed solution has the advantage that, at compilation time, it allows us to schedule the communications and adjust memory usage. In order to validate the design and implementation, a demanding validation process was carried out with emphasis on functional behavior, memory usage, processor usage (end-to-end response time and the response time of each functional block), and network usage (real consumption compared with the estimated consumption). The good results obtained in an industrial application developed by Thales Avionics (a Flight Management System) and in exhaustive tests have shown that the design and the prototype are reliable for industrial applications with strict timing requirements.
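As a point of reference for the RMI-based approach described above, the sketch below shows how a remote service is expressed in standard Java RMI over JRMP. The service name and method are hypothetical, and the association of real-time attributes with the remote reference, which is the middleware's contribution, is indicated only in comments, since its actual API is not given in the abstract.

import java.rmi.Naming;
import java.rmi.Remote;
import java.rmi.RemoteException;

// Hypothetical remote interface. In the middleware described above, the remote
// reference obtained for it would additionally carry non-functional parameters
// (e.g. deadline, priority, memory budget) fixed during a prior
// resource-allocation phase, before any invocation takes place.
interface FlightDataService extends Remote {
    double getAltitude(int sensorId) throws RemoteException;
}

public final class FmsClient {
    public static void main(String[] args) throws Exception {
        // Standard JRMP lookup of the remote reference from an RMI registry.
        FlightDataService svc =
                (FlightDataService) Naming.lookup("rmi://localhost/FlightDataService");
        // A synchronous remote invocation; under the proposed middleware this
        // call would run on a pre-allocated server thread with an analysable,
        // bounded end-to-end response time.
        System.out.println("altitude = " + svc.getAltitude(1));
    }
}

The sketch only fixes terminology: the thesis's two-phase design moves thread and buffer allocation out of this invocation path, so that the call itself has a predictable cost.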
Abstract:
The use of modular or 'micro' maximum power point tracking (MPPT) converters at module level in series association, commercially known as "power optimizers", allows the individual adaptation of each panel to the load, solving part of the problems related to partial shadows and to different tilt and/or orientation angles of the photovoltaic (PV) modules. This is particularly relevant in building-integrated PV systems. This paper presents behavioural analytical studies of cascaded MPPT converters and evaluation test results for a prototype developed under a Spanish national research project. On the one hand, this work focuses on the development of new expressions that can be used to describe the behaviour of individual MPPT converters applied to each module and connected in series in a typical grid-connected PV system. On the other hand, a novel characterization method for MPPT converters is developed, and experimental results for the prototype are obtained when individual partial shading is applied and the converters are connected in a typical grid-connected PV array.
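To make the series-association idea concrete, the sketch below works through the idealized steady-state relationship commonly assumed for such strings: lossless module-level converters, a common string current fixed by the inverter's DC bus voltage, and each converter adapting its output voltage so that its module still delivers its own maximum power. The numbers and names are illustrative assumptions, not the paper's expressions or data.

// Idealized behaviour of series-connected module-level MPPT converters
// ("power optimizers"). Assumptions: lossless conversion, a grid-tied inverter
// holding the string output at a fixed DC bus voltage, and known per-module
// maximum-power-point (MPP) powers. Values are illustrative only.
public final class SeriesMpptSketch {
    public static void main(String[] args) {
        double[] mppPowerW = {300.0, 300.0, 150.0, 300.0}; // one partially shaded module
        double busVoltageV = 400.0;                        // inverter DC bus voltage

        double totalPowerW = 0.0;
        for (double p : mppPowerW) totalPowerW += p;

        // With lossless converters, the common string current is set by the
        // total harvested power and the bus voltage: I = sum(P_i) / V_bus.
        double stringCurrentA = totalPowerW / busVoltageV;

        // Each converter adapts its output voltage so that its module still
        // delivers its own MPP power: V_i = P_i / I.
        for (int i = 0; i < mppPowerW.length; i++) {
            double outVoltageV = mppPowerW[i] / stringCurrentA;
            System.out.printf("module %d: P = %.0f W, Vout = %.1f V%n",
                    i, mppPowerW[i], outVoltageV);
        }
        System.out.printf("string current: %.2f A, total power: %.0f W%n",
                stringCurrentA, totalPowerW);
    }
}

Under these idealized assumptions the shaded module simply appears at a lower output voltage rather than dragging down the current of the whole string, which is the behaviour that module-level MPPT is meant to preserve.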
Abstract:
With electricity consumption increasing within the United States, new paradigms for delivering electricity are required in order to meet demand. One promising option is the increased use of distributed power generation. Already a growing share of electricity generation, distributed generation locates the power plant physically close to the consumer, avoiding transmission and distribution losses and providing the possibility of combined heat and power. Despite the possible efficiency gains, regulators and utilities have been reluctant to implement distributed generation, creating numerous technical, regulatory, and business barriers. Certain governments, most notably California, are making concerted efforts to overcome these barriers in order to ensure that distributed generation plays a part as the country meets demand while shifting to cleaner sources of energy.
Abstract:
Work on distributed data management commenced shortly after the introduction of the relational model in the mid-1970s. The 1970s and 1980s were very active periods for the development of distributed relational database technology, and claims were made that within the following ten years centralized databases would be an "antique curiosity" and most organizations would move toward distributed database managers [1]. That prediction has certainly come true, and all commercial DBMSs today are distributed.