845 results for Event-based timing
Abstract:
Introduction. Prospective memory (PM), the ability to remember to perform intended activities in the future (Kliegel & Jäger, 2007), is crucial for success in everyday life. PM seems to improve gradually over the childhood years (Zimmermann & Meier, 2006), yet little is known about PM competences in young school children in general, and even less about the factors influencing its development. A number of studies suggest that executive functions (EF) are potentially influencing processes (Ford, Driscoll, Shum & Macaulay, 2012; Mahy & Moses, 2011). Additionally, metacognitive processes (MC: monitoring and control) are assumed to be involved in optimizing one's performance (Krebs & Roebers, 2010, 2012; Roebers, Schmid, & Roderer, 2009). Yet the relations between PM, EF and MC remain relatively unspecified. We intend to empirically examine the structural relations between these constructs. Method. A cross-sectional study including 119 2nd graders (M_age = 95.03 months, SD_age = 4.82) will be presented. Participants (n = 68 girls) completed three EF tasks (Stroop, updating, shifting), a computerised event-based PM task and an MC spelling task. The latent variables PM, EF and MC, each represented by manifest variables derived from the administered tasks, were interrelated by structural equation modelling. Results. Analyses revealed clear associations between the three cognitive constructs PM, EF and MC (r_PM-EF = .45, r_PM-MC = .23, r_EF-MC = .20). A three-factor model, as opposed to one- or two-factor models, fit the data excellently (χ²(17, N = 119) = 18.86, p = .34, RMSEA = .030, CFI = .990, TLI = .978). Discussion. The results indicate that already in young elementary school children, PM, EF and MC are empirically well distinguishable, but nevertheless substantially interrelated. PM and EF seem to share a substantial amount of variance, while for MC, more unique processes may be assumed.
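As a quick sanity check on the reported fit, the RMSEA value can be reproduced from the χ² statistic, its degrees of freedom and the sample size. A minimal sketch, assuming the standard Steiger–Lind formula (not code from the study itself):

```python
import math

def rmsea(chi2, df, n):
    """Root mean square error of approximation from a chi-square fit statistic.

    RMSEA = sqrt(max(chi2 - df, 0) / (df * (n - 1)))
    """
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Values reported for the three-factor model: chi2(17, N = 119) = 18.86
print(round(rmsea(18.86, 17, 119), 3))  # → 0.03
```

The result matches the reported RMSEA = .030, consistent with the excellent fit the authors describe.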
Abstract:
Introduction. Prospective memory (PM), defined as the ability to remember to perform intended activities at some point in the future (Kliegel & Jäger, 2007), is crucial for success in everyday life. PM seems to increase over the childhood years (Zimmermann & Meier, 2006), yet little is known about PM competences in children in general, or about the factors that influence its development. A number of studies have focused on factors that might influence PM performance, with executive functions (EF) as potential influencing mechanisms (Ford, Driscoll, Shum & Macaulay, 2012; Mahy & Moses, 2011). Metacognitive processes (MC: monitoring and control) are also assumed to be involved in learning and in optimizing one's performance (Krebs & Roebers, 2010, 2012; Roebers, Schmid, & Roderer, 2009). Yet the empirical relation between PM, EF and MC remains rather unclear. We intend to examine these relations and explain individual differences in PM performance. Method. An empirical cross-sectional study of 120 2nd graders will be presented. Participants completed six EF tasks (a Stroop task, two updating tasks, two shifting tasks, and a flanker task), a computerised event-based PM task and an MC spelling task. Children were tested individually in two sessions of 30 minutes each. Each of the three EF components defined by Miyake, Friedman, Emerson, Witzki & Howerter (2002) was represented by two variables. PM performance was represented by PM accuracy. Metacognitive processes (control, monitoring) were represented separately. Results. Preliminary analyses (SEM) indicate a substantial association between EF (updating, inhibition) and PM. Further, MC seems to be significantly related only to EF. We will explore whether metacognitive monitoring is related to PM monitoring (Roebers, 2002; Mantylä, 2007). As to EF and MC, we expect the two domains to be empirically well distinguishable and nevertheless substantially interrelated. Discussion. The results are discussed at a broader, interindividual level.
Abstract:
In order to reconstruct the temperature of the North Greenland Ice Core Project (NGRIP) site, new measurements of δ15N have been performed covering the time period from the beginning of the Holocene to Dansgaard–Oeschger (DO) event 8. Together with previously measured and mostly published δ15N data, we present for the first time a NGRIP temperature reconstruction for the whole last glacial period, from 10 to 120 kyr b2k (thousand years before 2000 AD), including every DO event, based on δ15N isotope measurements combined with a firn densification and heat diffusion model. The detected temperature rises at the onset of DO events range from 5 °C (DO 25) up to 16.5 °C (DO 11), with an uncertainty of ±3 °C. To bring measured and modelled data into agreement, we had to reduce the accumulation rate given by the NGRIP ss09sea06bm timescale in some periods by 30 to 35%, especially during the Last Glacial Maximum. A comparison between the reconstructed temperature and δ18O_ice data confirms that the isotopic composition of the stadial was strongly influenced by seasonality. We find an anticorrelation between variations of the δ18O_ice sensitivity to temperature (referred to as α) and obliquity, in agreement with a simple Rayleigh distillation model. Finally, we suggest that α might be influenced by the Northern Hemisphere ice sheet volume.
Abstract:
The results of a search for pair production of light top squarks are presented, using 4.7 fb⁻¹ of √s = 7 TeV proton–proton collisions collected with the ATLAS detector at the Large Hadron Collider. This search targets top squarks with masses similar to, or lighter than, the top quark mass. Final states containing exclusively one or two leptons (e, μ), large missing transverse momentum, light-flavour jets and b-jets are used to reconstruct the top squark pair system. Event-based mass-scale variables are used to separate the signal from a large tt̄ background. No excess over the Standard Model expectations is found. The results are interpreted in the framework of the Minimal Supersymmetric Standard Model, assuming the top squark decays exclusively to a chargino and a b-quark, while requiring different mass relationships between the supersymmetric particles in the decay chain. Light top squarks with masses between 123 and 167 GeV are excluded for neutralino masses around 55 GeV.
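For illustration, one common event-based mass-scale variable in searches with leptons and missing transverse momentum is the transverse mass; the sketch below is a generic textbook definition, not the specific set of variables used in the ATLAS analysis:

```python
import math

def transverse_mass(pt_lep, met, dphi):
    """Transverse mass m_T = sqrt(2 * pT(lepton) * E_T^miss * (1 - cos(dphi))).

    pt_lep: lepton transverse momentum (GeV)
    met:    missing transverse momentum (GeV)
    dphi:   azimuthal angle between the lepton and the missing momentum
    """
    return math.sqrt(2.0 * pt_lep * met * (1.0 - math.cos(dphi)))

# A W -> lepton + neutrino decay populates m_T below the W mass (~80 GeV),
# so a cut on m_T helps separate signal from the W/top background.
print(round(transverse_mass(40.0, 40.0, math.pi), 1))  # → 80.0
```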
Abstract:
This study investigated the empirical differentiation of prospective memory, executive functions, and metacognition and their structural relationships in 119 elementary school children (M = 95 months, SD = 4.8 months). These cognitive abilities share many characteristics on the theoretical level and are all highly relevant in many everyday contexts when intentions must be executed. Nevertheless, their empirical relationships have not been examined on the latent level, although an empirical approach would contribute to our knowledge concerning the differentiation of cognitive abilities during childhood. We administered a computerized event-based prospective memory task, three executive function tasks (updating, inhibition, shifting), and a metacognitive control task in the context of spelling. Confirmatory factor analysis revealed that the three cognitive abilities are already empirically differentiable in young elementary school children. At the same time, prospective memory and executive functions were found to be strongly related, and there was also a close link between prospective memory and metacognitive control. Furthermore, executive functions and metacognitive control were marginally significantly related. The findings are discussed within a framework of developmental differentiation and conceptual similarities and differences.
Abstract:
This study sought to understand the elements affecting the success or failure of strategic repositioning efforts by academic medical centers (AMCs). The research question was: What specific elements in the process appear to be most important in determining the success or failure of an AMC's strategic repositioning, where success is based on the long-term sustainability of the new position? "An organization's strategic position is its perceptual location relative to others" (Gershon, 2003). Hence, strategic repositioning represents a shift from one strategic position within an environment to another (Mintzberg, 1987a). A deteriorating value proposition, defined here as the health outcome per dollar spent, coupled with an unsustainable national health care financing system, is forcing AMCs to change their strategic position. AMCs are of foundational importance to our health care system. They educate our new physicians, generate significant scientific breakthroughs, and care for our most difficult patients. Yet their strategic, financial and business acumen leaves them particularly vulnerable in a changing environment. After a literature review revealed limited writing on this subject, the research question was addressed using three separate but parallel exploratory case study inquiries of AMCs that had recently undergone a strategic repositioning. Participating in the case studies were the Baylor College of Medicine, the University of Texas M. D. Anderson Cancer Center, and the University of Texas Medical Branch. Each case study consisted of two major research segments: a thorough documentation review followed by semi-structured interviews of selected members of each institution's governance board, executive and faculty leadership teams.
While each case study's circumstances varied, their responses to the research question, as extracted through thematic coding and analysis of the interviews, had a high degree of commonality. The results identified managing the strategic risk surrounding the repositioning and leadership accountability as the two foundational elements of success or failure. Metrics and communication were important process elements; both play a major role in managing the strategic repositioning risk communication loop. Sustainability, the final element, was the outcome sought. Factors leading to strategic repositioning included both internal and external pressures and were primarily financial or mission based. Timing was an important consideration, as was the selection of the strategic repositioning endpoint. In conclusion, a framework for the strategic repositioning of AMCs is offered that integrates the findings of this study: the elements of success, the factors leading to strategic repositioning, and the risk communication loop.
Abstract:
Obesity, among both children and adults, is a growing public health epidemic. One area of interest relates to how and why obesity is developing at such a rapid pace among children. Despite a broad consensus about how controlling feeding practices relate to child food consumption and obesity prevalence, much less is known about how non-controlling feeding practices, including modeling, relate to child food consumption. This study investigates how different forms of parent modeling (no modeling, simple modeling, and enthusiastic modeling) and parent adiposity relate to child food consumption, food preferences, and behaviors towards foods. Participants in this experimental study were 65 children (25 boys and 40 girls) aged 3-9 and their parents. Each parent was trained on how to perform their assigned modeling behavior towards a food identified as neutral (not liked, nor disliked) by their child during a pre-session food-rating task. Parents performed their assigned modeling behavior when cued during a ten-minute observation period with their child. Child food consumption (pieces eaten, grams eaten, and calories consumed) was measured and food behaviors (positive comments toward food and food requests) were recorded by event-based coding. After the session, parents self-reported on their height and weight, and children completed a post-session food-rating task. Results indicate that parent modeling (both simple and enthusiastic forms) did not significantly relate to child food consumption, food preferences, or food requests. However, enthusiastic modeling significantly increased the number of positive food comments made by children. Children's food consumption in response to parent modeling did not differ based on parent obesity status. The practical implications of this study are discussed, along with its strengths and limitations, and directions for future research.
Abstract:
Speech technologies can provide important benefits for the development of more usable and safe in-vehicle human–machine interactive systems (HMIs). However, mainly due to robustness issues, the use of spoken interaction can entail important distractions for the driver. In this challenging scenario, while speech technologies are evolving, further research is necessary to explore how they can be complemented both with other modalities (multimodality) and with information from the increasing number of available sensors (context-awareness). The perceived quality of speech technologies can be significantly increased by implementing such policies, which simply try to make the best use of all available resources, and the in-vehicle scenario is an excellent test-bed for this kind of initiative. In this contribution we propose an event-based HMI design framework which combines context modelling and multimodal interaction using a W3C XML language known as SCXML. SCXML provides a general process control mechanism that is being considered by the W3C to improve both voice interaction (VoiceXML) and multimodal interaction (MMI). In our approach we try to anticipate and extend these initiatives, presenting a flexible SCXML-based approach for the design of a wide range of multimodal, context-aware in-vehicle HMI interfaces. The proposed framework for HMI design and specification has been implemented in an automotive OSGi service platform, and it is being used and tested in the Spanish research project MARTA for the development of several in-vehicle interactive applications.
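The event-based, state-chart style of control that SCXML provides can be sketched in a few lines. The states and events below are hypothetical examples for an in-vehicle dialogue (not the MARTA implementation, which uses actual SCXML documents):

```python
# Minimal sketch of SCXML-style event-driven state control.
# States/events are illustrative: a voice dialogue that falls back to
# touch input when the context model reports a noisy cabin.

class StateMachine:
    def __init__(self, initial, transitions):
        self.state = initial
        # transitions maps (current_state, event) -> next_state
        self.transitions = transitions

    def send(self, event):
        """Deliver an event; events with no matching transition are ignored,
        mirroring SCXML's default behaviour."""
        self.state = self.transitions.get((self.state, event), self.state)
        return self.state

hmi = StateMachine("idle", {
    ("idle", "user_speaks"): "voice_dialogue",
    ("voice_dialogue", "noise_detected"): "touch_fallback",
    ("touch_fallback", "noise_cleared"): "voice_dialogue",
})
hmi.send("user_speaks")
print(hmi.send("noise_detected"))  # → touch_fallback
```

In a real SCXML document the same logic would be expressed declaratively with `<state>` and `<transition event="...">` elements, which is what makes the approach attractive for combining voice, touch and sensor events under one process-control model.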
Abstract:
Risk management should be understood as the determination of links between what is assumed to be vulnerability and how the probability of occurrence of a given event would be determined or estimated, starting from the idea of the occurrence of a phenomenon and the necessary actions to be carried out. The issue of vulnerability and risk grows in importance worldwide: as time passes, the vulnerability of certain populations to natural hazards such as floods, overflowing rivers, landslides and earthquakes becomes more evident. Vulnerability increases as deforestation grows. Construction in high-risk locations, such as housing on river banks, is conditioned by location and land-use conditions, infrastructure, construction, housing, population distribution and density, organizational capacity, and so on. Risk management therefore plays a very important role in modern society, which is increasingly demanding about the results and quality of products and services, and which must also meet the legal responsibility entailed by the conception, design and construction of projects in flood-prone areas. This research focuses on identifying risks and proposing structural solutions and resilience recommendations for buildings located in flood-prone areas, thereby considerably reducing the risk of structural failure and the number of victims, and concluding with a Catalogue of Risks and Solutions for buildings in flood-prone areas.
Abstract:
Antecedentes Europa vive una situación insostenible. Desde el 2008 se han reducido los recursos de los gobiernos a raíz de la crisis económica. El continente Europeo envejece con ritmo constante al punto que se prevé que en 2050 habrá sólo dos trabajadores por jubilado [54]. A esta situación se le añade el aumento de la incidencia de las enfermedades crónicas, relacionadas con el envejecimiento, cuyo coste puede alcanzar el 7% del PIB de un país [51]. Es necesario un cambio de paradigma. Una nueva manera de cuidar de la salud de las personas: sustentable, eficaz y preventiva más que curativa. Algunos estudios abogan por el cuidado personalizado de la salud (pHealth). En este modelo las prácticas médicas son adaptadas e individualizadas al paciente, desde la detección de los factores de riesgo hasta la personalización de los tratamientos basada en la respuesta del individuo [81]. El cuidado personalizado de la salud está asociado a menudo al uso de las tecnologías de la información y comunicación (TICs) que, con su desarrollo exponencial, ofrecen oportunidades interesantes para la mejora de la salud. El cambio de paradigma hacia el pHealth está lentamente ocurriendo, tanto en el ámbito de la investigación como en la industria, pero todavía no de manera significativa. Existen todavía muchas barreras relacionadas a la economía, a la política y la cultura. También existen barreras puramente tecnológicas, como la falta de sistemas de información interoperables [199]. A pesar de que los aspectos de interoperabilidad están evolucionando, todavía hace falta un diseño de referencia especialmente direccionado a la implementación y el despliegue en gran escala de sistemas basados en pHealth. La presente Tesis representa un intento de organizar la disciplina de la aplicación de las TICs al cuidado personalizado de la salud en un modelo de referencia, que permita la creación de plataformas de desarrollo de software para simplificar tareas comunes de desarrollo en este dominio. 
Preguntas de investigación RQ1 >Es posible definir un modelo, basado en técnicas de ingeniería del software, que represente el dominio del cuidado personalizado de la salud de una forma abstracta y representativa? RQ2 >Es posible construir una plataforma de desarrollo basada en este modelo? RQ3 >Esta plataforma ayuda a los desarrolladores a crear sistemas pHealth complejos e integrados? Métodos Para la descripción del modelo se adoptó el estándar ISO/IEC/IEEE 42010por ser lo suficientemente general y abstracto para el amplio enfoque de esta tesis [25]. El modelo está definido en varias partes: un modelo conceptual, expresado a través de mapas conceptuales que representan las partes interesadas (stakeholders), los artefactos y la información compartida; y escenarios y casos de uso para la descripción de sus funcionalidades. El modelo fue desarrollado de acuerdo a la información obtenida del análisis de la literatura, incluyendo 7 informes industriales y científicos, 9 estándares, 10 artículos en conferencias, 37 artículos en revistas, 25 páginas web y 5 libros. Basándose en el modelo se definieron los requisitos para la creación de la plataforma de desarrollo, enriquecidos por otros requisitos recolectados a través de una encuesta realizada a 11 ingenieros con experiencia en la rama. Para el desarrollo de la plataforma, se adoptó la metodología de integración continua [74] que permitió ejecutar tests automáticos en un servidor y también desplegar aplicaciones en una página web. En cuanto a la metodología utilizada para la validación se adoptó un marco para la formulación de teorías en la ingeniería del software [181]. Esto requiere el desarrollo de modelos y proposiciones que han de ser validados dentro de un ámbito de investigación definido, y que sirvan para guiar al investigador en la búsqueda de la evidencia necesaria para justificarla. La validación del modelo fue desarrollada mediante una encuesta online en tres rondas con un número creciente de invitados. 
El cuestionario fue enviado a 134 contactos y distribuido en algunos canales públicos como listas de correo y redes sociales. El objetivo era evaluar la legibilidad del modelo, su nivel de cobertura del dominio y su potencial utilidad en el diseño de sistemas derivados. El cuestionario incluía preguntas cuantitativas de tipo Likert y campos para recolección de comentarios. La plataforma de desarrollo fue validada en dos etapas. En la primera etapa se utilizó la plataforma en un experimento a pequeña escala, que consistió en una sesión de entrenamiento de 12 horas en la que 4 desarrolladores tuvieron que desarrollar algunos casos de uso y reunirse en un grupo focal para discutir su uso. La segunda etapa se realizó durante los tests de un proyecto en gran escala llamado HeartCycle [160]. En este proyecto un equipo de diseñadores y programadores desarrollaron tres aplicaciones en el campo de las enfermedades cardio-vasculares. Una de estas aplicaciones fue testeada en un ensayo clínico con pacientes reales. Al analizar el proyecto, el equipo de desarrollo se reunió en un grupo focal para identificar las ventajas y desventajas de la plataforma y su utilidad. Resultados Por lo que concierne el modelo que describe el dominio del pHealth, la parte conceptual incluye una descripción de los roles principales y las preocupaciones de los participantes, un modelo de los artefactos TIC que se usan comúnmente y un modelo para representar los datos típicos que son necesarios formalizar e intercambiar entre sistemas basados en pHealth. 
El modelo funcional incluye un conjunto de 18 escenarios, repartidos en: punto de vista de la persona asistida, punto de vista del cuidador, punto de vista del desarrollador, punto de vista de los proveedores de tecnologías y punto de vista de las autoridades; y un conjunto de 52 casos de uso repartidos en 6 categorías: actividades de la persona asistida, reacciones del sistema, actividades del cuidador, \engagement" del usuario, actividades del desarrollador y actividades de despliegue. Como resultado del cuestionario de validación del modelo, un total de 65 personas revisó el modelo proporcionando su nivel de acuerdo con las dimensiones evaluadas y un total de 248 comentarios sobre cómo mejorar el modelo. Los conocimientos de los participantes variaban desde la ingeniería del software (70%) hasta las especialidades médicas (15%), con declarado interés en eHealth (24%), mHealth (16%), Ambient Assisted Living (21%), medicina personalizada (5%), sistemas basados en pHealth (15%), informática médica (10%) e ingeniería biomédica (8%) con una media de 7.25_4.99 años de experiencia en estas áreas. Los resultados de la encuesta muestran que los expertos contactados consideran el modelo fácil de leer (media de 1.89_0.79 siendo 1 el valor más favorable y 5 el peor), suficientemente abstracto (1.99_0.88) y formal (2.13_0.77), con una cobertura suficiente del dominio (2.26_0.95), útil para describir el dominio (2.02_0.7) y para generar sistemas más específicos (2_0.75). Los expertos también reportan un interés parcial en utilizar el modelo en su trabajo (2.48_0.91). Gracias a sus comentarios, el modelo fue mejorado y enriquecido con conceptos que faltaban, aunque no se pudo demonstrar su mejora en las dimensiones evaluadas, dada la composición diferente de personas en las tres rondas de evaluación. Desde el modelo, se generó una plataforma de desarrollo llamada \pHealth Patient Platform (pHPP)". 
La plataforma desarrollada incluye librerías, herramientas de programación y desarrollo, un tutorial y una aplicación de ejemplo. Se definieron cuatro módulos principales de la arquitectura: el Data Collection Engine, que permite abstraer las fuentes de datos como sensores o servicios externos, mapeando los datos a bases de datos u ontologías, y permitiendo interacción basada en eventos; el GUI Engine, que abstrae la interfaz de usuario en un modelo de interacción basado en mensajes; y el Rule Engine, que proporciona a los desarrolladores un medio simple para programar la lógica de la aplicación en forma de reglas \if-then". Después de que la plataforma pHPP fue utilizada durante 5 años en el proyecto HeartCycle, 5 desarrolladores fueron reunidos en un grupo de discusión para analizar y evaluar la plataforma. De estas evaluaciones se concluye que la plataforma fue diseñada para encajar las necesidades de los ingenieros que trabajan en la rama, permitiendo la separación de problemas entre las distintas especialidades, y simplificando algunas tareas de desarrollo como el manejo de datos y la interacción asíncrona. A pesar de ello, se encontraron algunos defectos a causa de la inmadurez de algunas tecnologías empleadas, y la ausencia de algunas herramientas específicas para el dominio como el procesado de datos o algunos protocolos de comunicación relacionados con la salud. Dentro del proyecto HeartCycle la plataforma fue utilizada para el desarrollo de la aplicación \Guided Exercise", un sistema TIC para la rehabilitación de pacientes que han sufrido un infarto del miocardio. El sistema fue testeado en un ensayo clínico randomizado en el cual a 55 pacientes se les dio el sistema para su uso por 21 semanas. De los resultados técnicos del ensayo se puede concluir que, a pesar de algunos errores menores prontamente corregidos durante el estudio, la plataforma es estable y fiable. 
Conclusiones La investigación llevada a cabo en esta Tesis y los resultados obtenidos proporcionan las respuestas a las tres preguntas de investigación que motivaron este trabajo: RQ1 Se ha desarrollado un modelo para representar el dominio de los sistemas personalizados de salud. La evaluación hecha por los expertos de la rama concluye que el modelo representa el dominio con precisión y con un balance apropiado entre abstracción y detalle. RQ2 Se ha desarrollado, con éxito, una plataforma de desarrollo basada en el modelo. RQ3 Se ha demostrado que la plataforma es capaz de ayudar a los desarrolladores en la creación de software pHealth complejos. Las ventajas de la plataforma han sido demostradas en el ámbito de un proyecto de gran escala, aunque el enfoque genérico adoptado indica que la plataforma podría ofrecer beneficios también en otros contextos. Los resultados de estas evaluaciones ofrecen indicios de que, ambos, el modelo y la plataforma serán buenos candidatos para poderse convertir en una referencia para futuros desarrollos de sistemas pHealth. ABSTRACT Background Europe is living in an unsustainable situation. The economic crisis has been reducing governments' economic resources since 2008 and threatening social and health systems, while the proportion of older people in the European population continues to increase so that it is foreseen that in 2050 there will be only two workers per retiree [54]. To this situation it should be added the rise, strongly related to age, of chronic diseases the burden of which has been estimated to be up to the 7% of a country's gross domestic product [51]. There is a need for a paradigm shift, the need for a new way of caring for people's health, shifting the focus from curing conditions that have arisen to a sustainable and effective approach with the emphasis on prevention. 
Some advocate the adoption of personalised health care (pHealth), a model where medical practices are tailored to the patient's unique life, from the detection of risk factors to the customization of treatments based on each individual's response [81]. Personalised health is often associated to the use of Information and Communications Technology (ICT), that, with its exponential development, offers interesting opportunities for improving healthcare. The shift towards pHealth is slowly taking place, both in research and in industry, but the change is not significant yet. Many barriers still exist related to economy, politics and culture, while others are purely technological, like the lack of interoperable information systems [199]. Though interoperability aspects are evolving, there is still the need of a reference design, especially tackling implementation and large scale deployment of pHealth systems. This thesis contributes to organizing the subject of ICT systems for personalised health into a reference model that allows for the creation of software development platforms to ease common development issues in the domain. Research questions RQ1 Is it possible to define a model, based on software engineering techniques, for representing the personalised health domain in an abstract and representative way? RQ2 Is it possible to build a development platform based on this model? RQ3 Does the development platform help developers create complex integrated pHealth systems? Methods As method for describing the model, the ISO/IEC/IEEE 42010 framework [25] is adopted for its generality and high level of abstraction. The model is specified in different parts: a conceptual model, which makes use of concept maps, for representing stakeholders, artefacts and shared information, and in scenarios and use cases for the representation of the functionalities of pHealth systems. 
The model was derived from literature analysis, including 7 industrial and scientific reports, 9 electronic standards, 10 conference proceedings papers, 37 journal papers, 25 websites and 5 books. Based on the reference model, requirements were drawn for building the development platform enriched with a set of requirements gathered in a survey run among 11 experienced engineers. For developing the platform, the continuous integration methodology [74] was adopted which allowed to perform automatic tests on a server and also to deploy packaged releases on a web site. As a validation methodology, a theory building framework for SW engineering was adopted from [181]. The framework, chosen as a guide to find evidence for justifying the research questions, imposed the creation of theories based on models and propositions to be validated within a scope. The validation of the model was conducted as an on-line survey in three validation rounds, encompassing a growing number of participants. The survey was submitted to 134 experts of the field and on some public channels like relevant mailing lists and social networks. Its objective was to assess the model's readability, its level of coverage of the domain and its potential usefulness in the design of actual, derived systems. The questionnaires included quantitative Likert scale questions and free text inputs for comments. The development platform was validated in two scopes. As a small-scale experiment, the platform was used in a 12 hours training session where 4 developers had to perform an exercise consisting in developing a set of typical pHealth use cases At the end of the session, a focus group was held to identify benefits and drawbacks of the platform. The second validation was held as a test-case study in a large scale research project called HeartCycle the aim of which was to develop a closed-loop disease management system for heart failure and coronary heart disease patients [160]. 
During this project, three applications were developed by a team of programmers and designers. One of these applications was tested in a clinical trial with actual patients. At the end of the project, the team was interviewed in a focus group to assess the role the platform had played within the project. Results Regarding the model describing the pHealth domain, its conceptual part includes a description of the main roles and concerns of pHealth stakeholders, a model of the ICT artefacts that are commonly adopted, and a model of the typical data that need to be formalized among pHealth systems. The functional model includes a set of 18 scenarios, divided into the assisted person's view, caregiver's view, developer's view, technology and services providers' view and authority's view, and a set of 52 use cases grouped into 6 categories: assisted person's activities, system reactions, caregiver's activities, user engagement, developer's activities and deployer's activities. Concerning the validation of the model, a total of 65 people participated in the online survey, providing their level of agreement on all the assessed dimensions and a total of 248 comments on how to improve and complete the model. Participants' backgrounds spanned engineering and software development (70%) and medical specialities (15%), with declared interests in the fields of eHealth (24%), mHealth (16%), Ambient Assisted Living (21%), Personalized Medicine (5%), Personal Health Systems (15%), Medical Informatics (10%) and Biomedical Engineering (8%), with an average of 7.25 ± 4.99 years of experience in these fields.
From the analysis of the answers, the contacted experts considered the model easily readable (average of 1.89 ± 0.79, where 1 is the most favourable score and 5 the worst), sufficiently abstract (1.99 ± 0.88) and formal (2.13 ± 0.77) for its purpose, with sufficient coverage of the domain (2.26 ± 0.95), useful for describing the domain (2.02 ± 0.70) and for generating more specific systems (2.00 ± 0.75); they reported a partial interest in using the model in their own work (2.48 ± 0.91). Thanks to their comments, the model was improved and enriched with concepts that were missing at the beginning; nonetheless, it was not possible to demonstrate an improvement across the iterations, owing to the diversity of the participants in the three rounds. From the model, a development platform for the pHealth domain was generated, called the pHealth Patient Platform (pHPP). The platform includes a set of libraries, programming and deployment tools, a tutorial and a sample application. The four main modules of the architecture are: the Data Collection Engine, which abstracts sources of information such as sensors or external services, maps data to databases and ontologies, and allows event-based interaction and filtering; the GUI Engine, which abstracts the user interface into a message-like interaction model; the Workflow Engine, which allows programming the application's user interaction flows with graphical workflows; and the Rule Engine, which gives developers a simple means of programming the application's logic in the form of "if-then" rules. After the 5-year experience of HeartCycle, partially programmed with pHPP, 5 developers took part in a focus group to discuss the advantages and drawbacks of the platform.
The view that emerged from the training course and the focus group was that the platform is well suited to the needs of engineers working in the field: it allowed the separation of concerns among the different specialities and simplified common development tasks such as data management and asynchronous interaction. Nevertheless, some deficiencies were pointed out: a lack of maturity in some technological choices, and the absence of certain domain-specific tools, e.g. for data processing or for health-related communication protocols. Within HeartCycle, the platform was used to develop part of the Guided Exercise system, a composition of ICT tools for the physical rehabilitation of patients who had suffered a myocardial infarction. The system developed using the platform was tested in a randomized controlled clinical trial in which 55 patients used the system for 21 weeks. The technical results of this trial showed that the system was stable and reliable. Some minor bugs were detected, but these were promptly corrected using the platform. This shows that the platform, as well as facilitating the development task, can be used to produce reliable software. Conclusions The research work carried out in this thesis provides responses to the three research questions that motivated the work. RQ1 A model was developed representing the domain of personalised health systems, and the assessment of experts in the field was that it represents the domain accurately, with an appropriate balance between abstraction and detail. RQ2 A development platform based on the model was successfully developed. RQ3 The platform has been shown to help developers create complex pHealth software. This was demonstrated within the scope of one large-scale project, but the generic approach adopted provides indications that it would offer benefits more widely.
The results of these evaluations indicate that both the model and the platform are good candidates to serve as a reference for future pHealth developments.
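The pHPP Rule Engine is described above only at a high level, as programming application logic through "if-then" rules triggered by events. As a hedged sketch of that general pattern (the class and method names below are invented for illustration and are not pHPP's actual API), such an engine might look like:

```python
# Minimal event-driven "if-then" rule engine sketch. All names are illustrative,
# not taken from the pHealth Patient Platform.

class RuleEngine:
    def __init__(self):
        self.rules = []  # list of (condition, action) pairs

    def add_rule(self, condition, action):
        """Register an if-then rule: when condition(event) holds, run action(event)."""
        self.rules.append((condition, action))

    def fire(self, event):
        """Feed one event to the engine; return the results of all triggered actions."""
        return [action(event) for condition, action in self.rules if condition(event)]

engine = RuleEngine()
# A typical pHealth-style rule: "if heart rate exceeds 100 bpm, raise an alert".
engine.add_rule(lambda e: e.get("type") == "heart_rate" and e["value"] > 100,
                lambda e: f"ALERT: tachycardia ({e['value']} bpm)")

results = engine.fire({"type": "heart_rate", "value": 112})
```

Keeping conditions and actions as plain callables is one way such a module can expose rule programming to developers without a dedicated rule language.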
Abstract:
Experiments with simulators allow psychologists to better understand the causes of human errors and to build models of cognitive processes for use in human reliability assessment (HRA). This paper investigates an approach to task failure analysis based on patterns of behaviour, in contrast to more traditional event-based approaches. It considers, as a case study, a formal model of an air traffic control (ATC) system which incorporates controller behaviour. The cognitive model is formalised in the CSP process algebra. Patterns of behaviour are expressed as temporal logic properties. A model-checking technique is then used to verify whether the decomposition of the operator's behaviour into patterns is sound and complete with respect to the cognitive model. The decomposition is shown to be incomplete, and a new behavioural pattern is identified which appears to have been overlooked in the analysis of the data provided by the experiments with the simulator. This illustrates how formal analysis of operator models can yield fresh insights into how failures may arise in interactive systems.
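The completeness check described in this abstract is carried out with CSP and a model checker; the underlying idea can nonetheless be illustrated with a much simpler sketch (traces, pattern names and predicates below are invented, not the paper's formalism): represent operator behaviours as finite event traces, patterns as predicates over traces, and check whether every behaviour is covered by some pattern.

```python
# Illustrative sketch of pattern-decomposition completeness checking over
# behaviour traces. The traces and patterns are toy examples, not the paper's
# ATC cognitive model.

traces = [
    ("observe", "decide", "act"),   # normal task resolution
    ("observe", "decide", "omit"),  # omission error
    ("observe", "act"),             # acting without a decision step
]

patterns = {
    "completed": lambda t: t[-1] == "act" and "decide" in t,
    "omission":  lambda t: t[-1] == "omit",
}

def uncovered(traces, patterns):
    """Traces matched by no pattern; a non-empty result means the decomposition is incomplete."""
    return [t for t in traces if not any(p(t) for p in patterns.values())]

missing = uncovered(traces, patterns)
```

Here the third trace fits neither pattern, mirroring how the paper's analysis exposed a behavioural pattern that had been overlooked: an uncovered behaviour signals a gap in the decomposition.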
Abstract:
This paper introduces responsive systems: systems that are real-time, event-based, or time-dependent. There are a number of trends that are accelerating the adoption of responsive systems: timeliness requirements for business information systems are becoming more prevalent, embedded systems are increasingly integrated into soft real-time command-and-control systems, improved message-oriented middleware is facilitating growth in event-processing applications, and advances in service-oriented and component-based techniques are lowering the costs of developing and deploying responsive applications. The use of responsive systems is illustrated here in two application areas: the defense industry and online gaming. The papers in this special issue of the IBM Systems Journal are then introduced. The paper concludes with a discussion of the key remaining challenges in this area and ideas for further work.
Abstract:
There is an increasing emphasis on the use of software to control safety-critical plants across a wide range of applications. The importance of ensuring the correct operation of such potentially hazardous systems places an emphasis on verifying the system against a suitably secure specification. However, the process of verification is often made more complex by the concurrency and real-time considerations inherent in many applications. A response to this is the use of formal methods for the specification and verification of safety-critical control systems. These provide a mathematical representation of a system which permits reasoning about its properties. This thesis investigates the use of the formal method Communicating Sequential Processes (CSP) for the verification of a safety-critical control application. CSP is a discrete, event-based process algebra with a compositional axiomatic semantics that supports verification by formal proof. The application is an industrial case study concerning the concurrent control of a real-time high-speed mechanism. The case study shows that the axiomatic verification method employed is complex: it requires the user to have a relatively comprehensive understanding of both the proof system and the application. Through a series of observations, the thesis notes that CSP has the scope to support a more procedural approach to verification in the form of testing. The thesis investigates the technique of testing and proposes the method of Ideal Test Sets. By exploiting the underlying structure of the CSP semantic model, it is shown that for certain processes and specifications the obligation of verification can be reduced to that of testing the specification over a finite subset of the behaviours of the process.
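The reduction at the end of this abstract, from full verification to testing a specification over a finite subset of behaviours, can be sketched in miniature. The following toy example (the enumeration, the safety property and all names are invented here, and stand in for the CSP semantic machinery, not for the actual Ideal Test Sets construction) checks a trace specification over a finite test set of behaviours:

```python
# Toy sketch: verification as testing a specification over a finite set of
# finite traces. This only illustrates the shape of the reduction.

from itertools import product

def behaviours(alphabet, max_len):
    """Enumerate all finite traces over the alphabet up to max_len - the finite test set."""
    for n in range(max_len + 1):
        yield from product(alphabet, repeat=n)

def satisfies(spec, test_set):
    """Check the trace specification on every behaviour in the finite test set."""
    return all(spec(t) for t in test_set)

def spec(trace):
    """Toy safety specification: a 'stop' event never occurs before a 'start' event."""
    seen_start = False
    for ev in trace:
        if ev == "stop" and not seen_start:
            return False
        if ev == "start":
            seen_start = True
    return True

test_set = list(behaviours(("start", "stop"), 2))
ok = satisfies(spec, test_set)  # False: the single-event trace ("stop",) violates the spec
```

The substance of the Ideal Test Sets result lies in justifying why a particular finite subset suffices for a given process and specification; the sketch only shows what "testing the specification over a finite subset" means mechanically.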
Abstract:
The study of gender differences in prospective memory (i.e., remembering to remember) has received modest attention in the literature. The few reported studies investigating either subjective or objective evaluations of prospective memory have shown inconsistent data. In this study, we aimed to verify the presence of gender differences during the performance of an objective prospective memory test by considering the weight of specific variables such as length of delay, type of response, and type of cue. We submitted a sample of 100 healthy Italian participants (50 men and 50 women) to a test expressly developed to assess prospective memory: The Memory for Intentions Screening Test. Women performed better than men in remembering to do an event-based task (i.e., prompted by an external event) and when the task required a physical response modality. We discuss the behavioural differences that emerged by considering the possible role of sociological, biological, neuroanatomical, and methodological variables.
Abstract:
Data integration for the purposes of tracking, tracing and transparency poses important challenges in the agri-food supply chain. The Electronic Product Code Information Services (EPCIS) is an event-oriented GS1 standard that aims to enable tracking and tracing of products through the sharing of event-based datasets that encapsulate the Electronic Product Code (EPC). In this paper, the authors propose a framework that utilises events and EPCs in the generation of "linked pedigrees" - linked datasets that enable the sharing of traceability information about products as they move along the supply chain. The authors exploit two ontology-based information models, EEM and CBVVocab, within a distributed and decentralised framework that consumes real-time EPCIS events as linked data to generate the linked pedigrees. The authors exemplify the usage of linked pedigrees within the fresh fruit and vegetables supply chain in the agri-food sector.
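The core step the abstract describes, consuming EPCIS-style events and emitting them as linked data keyed by EPC, can be sketched as a simple event-to-triple mapping. The predicate names (`ex:epc`, `ex:bizStep`, `ex:location`) and the event fields below are placeholders invented for illustration; they are not the actual EEM or CBVVocab vocabulary terms:

```python
# Hedged sketch: map EPCIS-style event records to subject-predicate-object
# triples, the raw material of a "linked pedigree". Vocabulary terms are
# invented placeholders, not EEM/CBVVocab.

def pedigree_triples(events):
    """Emit one triple per recorded fact, with a fresh node per event."""
    triples = []
    for i, ev in enumerate(events):
        node = f"urn:pedigree:event:{i}"
        triples.append((node, "ex:epc", ev["epc"]))
        triples.append((node, "ex:bizStep", ev["bizStep"]))
        triples.append((node, "ex:location", ev["location"]))
    return triples

events = [
    {"epc": "urn:epc:id:sgtin:0614141.107346.2017",
     "bizStep": "shipping",
     "location": "urn:epc:id:sgln:0614141.00001.0"},
]
triples = pedigree_triples(events)
```

Because every triple carries the EPC, pedigree fragments produced by different supply-chain parties can later be joined on the product identifier, which is the decentralisation idea the framework relies on.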