933 results for abstract data type
Abstract:
snBench is a platform on which novice users compose and deploy distributed Sense and Respond programs for simultaneous execution on a shared, distributed infrastructure. It is a natural imperative that we have the ability to (1) verify the safety/correctness of newly submitted tasks and (2) derive the resource requirements for these tasks such that correct allocation may occur. To achieve these goals we have established a multi-dimensional sized type system for our functional-style Domain Specific Language (DSL) called Sensor Task Execution Plan (STEP). In such a type system, data types are annotated with a vector of size attributes (e.g., upper and lower size bounds). Tracking multiple size aspects proves essential in a system in which Images are manipulated as a first-class data type, as image manipulation functions may have specific minimum and/or maximum resolution restrictions on the input they can correctly process. Through static analysis of STEP instances we not only verify basic type safety and establish upper computational resource bounds (i.e., time and space), but we also derive and solve data and resource sizing constraints (e.g., Image resolution, camera capabilities) from the implicit constraints embedded in program instances. In fact, the static methods presented here have benefits beyond their application to Image data, and may be extended to other data types that require tracking multiple dimensions (e.g., image "quality", video frame rate or aspect ratio, audio sampling rate). In this paper we present the syntax and semantics of our functional language, our type system that builds costs and resource/data constraints, and (through both formalism and specific details of our implementation) provide concrete examples of how the constraints and sizing information are used in practice.
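To make the sizing idea concrete, here is a minimal sketch (in Python, not the STEP language, and with a made-up face_detect operator) of how an image type annotated with lower and upper resolution bounds could let a static check reject a composition whose input bounds fall outside an operator's supported range:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class ImageType:
        # inclusive lower/upper bounds on width and height, in pixels
        min_w: int
        max_w: int
        min_h: int
        max_h: int

    @dataclass(frozen=True)
    class ImageFn:
        name: str
        required: ImageType   # resolution range the function can process
        produced: ImageType   # resolution range of its output

    def check_apply(f: ImageFn, arg: ImageType) -> ImageType:
        """Reject the application if the argument's size bounds are not
        contained in the function's required bounds."""
        r = f.required
        ok = (r.min_w <= arg.min_w and arg.max_w <= r.max_w and
              r.min_h <= arg.min_h and arg.max_h <= r.max_h)
        if not ok:
            raise TypeError(f"{f.name}: argument bounds {arg} exceed {r}")
        return f.produced

    # Hypothetical operator that needs at least 640x480 input.
    face_detect = ImageFn("face_detect",
                          required=ImageType(640, 4096, 480, 4096),
                          produced=ImageType(640, 4096, 480, 4096))
    camera_out = ImageType(320, 1280, 240, 720)   # camera may emit frames as small as 320x240
    try:
        check_apply(face_detect, camera_out)
    except TypeError as e:
        print("rejected before deployment:", e)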
Abstract:
Information systems are widespread and used by anyone with a computing device, as well as by corporations and governments. It is often the case that security leaks are introduced during the development of an application. The reasons for these security bugs are multiple, but among them one can easily identify that it is very hard to define and enforce relevant security policies in modern software. This is because modern applications often rely on container sharing and multi-tenancy where, for instance, data can be stored in the same physical space but is logically mapped into different security compartments or data structures. In turn, these security compartments, into which data is classified by security policies, can also be dynamic and depend on runtime data. In this thesis we introduce and develop the novel notion of dependent information flow types, and focus on the problem of ensuring data confidentiality in data-centric software. Dependent information flow types fit within the standard framework of dependent type theory but, unlike usual dependent types, crucially allow the security level of a type, rather than just the structural data type itself, to depend on runtime values. Our dependent function and dependent sum information flow types provide a direct, natural and elegant way to express and enforce fine-grained security policies on programs, namely programs that manipulate structured data types in which the security level of a structure field may depend on values dynamically stored in other fields. The main contribution of this work is an efficient analysis that allows programmers to verify, during the development phase, whether programs have information leaks, that is, it verifies whether programs protect the confidentiality of the information they manipulate. As such, we also implemented a prototype typechecker that can be found at http://ctp.di.fct.unl.pt/DIFTprototype/.
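As a loose illustration only (this is not the DIFT type system or its prototype, the field names are invented, and the thesis performs the verification statically at development time whereas the sketch below only mimics the policy at run time), the following check captures the core idea that a field's confidentiality label may depend on the value stored in another field:

    LEVELS = {"public": 0, "secret": 1}

    def label_of(record):
        # Value-dependent label: 'content' is secret unless the record's
        # 'owner' field holds the special value "anonymous".
        return "public" if record["owner"] == "anonymous" else "secret"

    def release_to(sink_level, record):
        # Information-flow guard: data may only flow to sinks whose level
        # is at least the (value-dependent) level of the data.
        if LEVELS[label_of(record)] > LEVELS[sink_level]:
            raise PermissionError("illegal flow: %s data to %s sink"
                                  % (label_of(record), sink_level))
        print("released to", sink_level, "sink:", record["content"])

    release_to("public", {"owner": "anonymous", "content": "open data"})
    try:
        release_to("public", {"owner": "alice", "content": "medical record"})
    except PermissionError as e:
        print("blocked:", e)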
Abstract:
Knowledge discovery in databases is the non-trivial process of identifying valid, novel, potentially useful and ultimately understandable patterns in data. The term data mining refers to the process of performing exploratory analysis on the data and building models from it. To infer patterns from data, data mining involves different approaches such as association rule mining, classification techniques and clustering techniques. Among the many data mining techniques, clustering plays a major role, since it helps to group related data for assessing properties and drawing conclusions. Most clustering algorithms act on a dataset with a uniform format, since the similarity or dissimilarity between the data points is a significant factor in finding the clusters. If a dataset consists of mixed attributes, i.e. a combination of numerical and categorical variables, a preferred approach is to convert the different formats into a uniform format. This research study explores various techniques to convert mixed data sets to a numerical equivalent, so that statistical and similar algorithms can be applied. The results of clustering mixed-category data after conversion to a numeric data type are demonstrated using a crime data set. The thesis also proposes an extension to the well-known algorithm for handling mixed data types, to deal with data sets having only categorical data. The proposed conversion has been validated on a data set corresponding to breast cancer. Moreover, another issue with the clustering process is the visualization of the output. Different geometric techniques such as scatter plots or projection plots are available, but none of these techniques displays results projected over the whole database; rather, they support only attribute-pairwise analysis.
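One common way to carry out such a conversion, sketched below with pandas and scikit-learn (the column names and values are invented for illustration, not the crime data set used in the thesis), is to one-hot encode the categorical attributes, scale everything to a common range, and then cluster:

    import pandas as pd
    from sklearn.preprocessing import StandardScaler
    from sklearn.cluster import KMeans

    df = pd.DataFrame({
        "age":     [23, 45, 31, 52, 40, 27],                         # numerical
        "offence": ["theft", "assault", "theft",                     # categorical
                    "fraud", "assault", "fraud"],
        "weapon":  ["no", "yes", "no", "no", "yes", "no"],           # categorical
    })

    numeric = df[["age"]]
    categorical = pd.get_dummies(df[["offence", "weapon"]])          # categorical -> 0/1 columns
    X = StandardScaler().fit_transform(pd.concat([numeric, categorical], axis=1))

    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    print("cluster assignments:", labels)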
Abstract:
Type 1 Diabetes Mellitus (DM1) is the most common endocrinopathy of childhood and adolescence and has a negative impact on quality of life (QoL). The EuroQol is an instrument that measures health status; it has been used in the great majority of multicentre diabetes studies worldwide and has proved to be an extremely useful and reliable tool. The aim of this study is to assess the QoL of patients with DM1 in Brazil, a country of continental proportions, through analysis of the EuroQol. To this end, a retrospective, cross-sectional study was carried out, analysing questionnaires answered by patients with DM1 between December 2008 and December 2010 at 28 research centres in 20 cities across the four regions of the country (Southeast, North/Northeast, South and Centre-West). Data on chronic micro- and macrovascular complications and lipid profile were also collected. The EuroQol assessment of quality of life shows that the mean score for general health status is clearly lower than that found in two other population-based studies of DM1 conducted in Europe (EQ-VAS for Germany, the Netherlands and Brazil were 82.1 ± 14, 81 ± 15 and 72 ± 22, respectively). The EuroQol shows that the North/Northeast region has a better general health status score than the Southeast region, and a lower frequency of self-reported anxiety/depression than the other regions of the country (North/Northeast = 1.53 ± 0.6, Southeast = 1.65 ± 0.7, South = 1.72 ± 0.7 and Centre-West = 1.67 ± 0.7; p < 0.05). In addition, several known variables (age, diabetes duration, physical activity, HbA1c, fasting glucose and the presence of chronic complications) correlated with QoL (r = -0.1, p < 0.05; r = -0.1, p < 0.05; r = -0.1, p < 0.05; r = -0.2, p < 0.05; r = -0.1, p < 0.05 and r = -0.1, p < 0.05, respectively). This is the first study to assess the quality of life of patients with DM1 at a population level in the southern hemisphere. Our data indicate a worse quality of life for patients with DM1 in Brazil compared with data from European countries. Although a shorter diabetes duration and a lower presence of microvascular complications were found in the North/Northeast region compared with the other regions, our data suggest the existence of additional factors responsible for the better QoL and lower presence of anxiety/depression found in this region. Further studies are needed to identify these possible factors.
Abstract:
Deforestation is an evident process in the Amazon, resulting from the predatory exploitation of natural resources by human action. Logging and agriculture/cattle ranching are the main activities that have driven the destruction of the forest in the Arc of Deforestation. Reforestation, however, has been the focus of public policies that the Government has developed through the Arco Verde Programme. In Pará this project is being applied in 16 municipalities that make up the critical deforestation areas, owing to the anthropogenic pressures exerted on them. In this context, agroforestry systems have been one of the alternatives for reforesting these areas. This work aimed to identify preferred areas for planting 15 forest species with potential for use in agroforestry systems. From the mapping of the occurrence of the selected forest species, and the overlay of geographic data on climate typology and water deficit, 24 bioclimatic zones were identified in the Arco Verde of Pará. The results for planting the forest species in preferred areas were as follows: J. copaia, T. serratifolia and B. excelsa have potential to be planted in 100% of the Arco Verde of Pará; C. pentandra, H. courbaril, S. morototoni and T. vulgaris are indicated for planting in 98% of the target area; C. odorata, C. goeldiana, D. odorata and S. macrophylla are indicated for 75% of the Arco Verde of Pará; and C. guianensis, S. parahyba var. amazonicum, B. guianensis and V. maxima for 60% of the area studied. In short, studies need to be intensified on the forest species indicated for the most extensive preferred areas.
Abstract:
Approximately one-third of US adults have metabolic syndrome, the clustering of cardiovascular risk factors that include hypertension, abdominal adiposity, elevated fasting glucose, low high-density lipoprotein (HDL)-cholesterol and elevated triglyceride levels. While the definition of metabolic syndrome continues to be much debated among leading health research organizations, the fact is that individuals with metabolic syndrome have an increased risk of developing cardiovascular disease and/or type 2 diabetes. A recent report by the Henry J. Kaiser Family Foundation found that the US spent $2.2 trillion (16.2% of the Gross Domestic Product) on healthcare in 2007 and cited that, among other factors, chronic diseases, including type 2 diabetes and cardiovascular disease, are large contributors to this growing national expenditure. Bearing a substantial portion of this cost are employers, the leading providers of health insurance. In light of this, many employers have begun implementing health promotion efforts to counteract these rising costs. However, evidence-based practices, uniform guidelines and policy do not exist for this setting in regard to the prevention of metabolic syndrome risk factors as defined by the National Cholesterol Education Program (NCEP) Adult Treatment Panel III (ATP III). Therefore, the aim of this review was to determine the effects of worksite-based behavior change programs on reducing the risk factors for metabolic syndrome in adults. Using relevant search terms, OVID MEDLINE was used to search the peer-reviewed literature published since 1998, resulting in 23 articles meeting the inclusion criteria for the review. The American Dietetic Association's Evidence Analysis Process was used to abstract data from selected articles, assess the quality of each study, compile the evidence, develop a summarized conclusion, and assign a grade based upon the strength of supporting evidence. The results revealed that participating in a worksite-based behavior change program may be associated with one or more improved metabolic syndrome risk factors. Programs that delivered a higher dose (>22 hours) over a shorter duration (<2 years) using two or more behavior-change strategies were associated with more metabolic risk factors being positively impacted. A Conclusion Grade of III was obtained for the evidence, indicating that studies were of weak design or results were inconclusive due to inadequate sample sizes, bias and lack of generalizability. These results provide some support for the continued use of worksite-based health promotion; further research is needed to determine whether multi-strategy, intense behavior change programs targeting multiple risk factors are able to sustain health improvements in the long term.
Abstract:
Data centers are easily found in every sector of the worldwide economy. They consist of tens of thousands of servers, serving millions of users globally, 24 hours a day, 365 days a year. In recent years, e-Science applications such as e-Health or Smart Cities have experienced significant development. The need to deal efficiently with the computational demands of next-generation applications, together with the increasing demand for resources in traditional applications, has facilitated the rapid proliferation and growth of data centers. A drawback of this capacity growth has been the rapid increase in the energy consumption of these facilities. In 2010, data center electricity represented 1.3% of all the electricity used in the world. In 2012 alone, global data center power demand grew 63% to 38 GW. A further rise of 17% to 43 GW was estimated for 2013. Moreover, data centers are responsible for more than 2% of total carbon dioxide emissions. This PhD thesis addresses the energy challenge by proposing proactive and reactive thermal- and energy-aware optimization techniques that contribute to placing data centers on a more scalable curve. This work develops energy models and uses knowledge about the energy demand of the workload to be executed and the computational and cooling resources available at the data center to optimize energy consumption. Moreover, data centers are considered a crucial element within their application framework, optimizing not only the energy consumption of the facility but the global energy consumption of the application. The main contributors to the energy consumption in a data center are the computing power drawn by IT equipment and the cooling power needed to keep the servers within a temperature range that ensures safe operation. Because of the cubic relation of fan power with fan speed, solutions based on over-provisioning cold air to the server usually lead to inefficiencies. On the other hand, higher chip temperatures lead to higher leakage power because of the exponential dependence of leakage on temperature. Moreover, workload characteristics as well as allocation policies have an important impact on the leakage-cooling tradeoffs. The first key contribution of this work is the development of power and temperature models that accurately describe the leakage-cooling tradeoffs at the server level, and the proposal of strategies to minimize server energy via joint cooling and workload management from a multivariate perspective. When scaling to the data center level, a similar behavior in terms of leakage-temperature tradeoffs can be observed. As room temperature rises, the efficiency of data room cooling units improves; however, CPU temperature also rises and so does leakage power. Moreover, the thermal dynamics of a data room exhibit unbalanced patterns due to both the workload allocation and the heterogeneity of the computing equipment. The second main contribution is the proposal of thermal- and heterogeneity-aware workload management techniques that jointly optimize the allocation of computation and cooling to servers. These strategies need to be backed by flexible room-level models, able to work at runtime, that describe the system from a high-level perspective. Within the framework of next-generation applications, decisions taken at the application level can have a dramatic impact on the energy consumption of lower abstraction levels, i.e. the data center facility. It is important to consider the relationships between all the computational agents involved in the problem, so that they can cooperate to achieve the common goal of reducing energy in the overall system. The third main contribution is the energy optimization of the overall application by evaluating the energy costs of performing part of the processing in any of the different abstraction layers, from the node to the data center, via workload management and off-loading techniques. In summary, the work presented in this PhD thesis makes contributions on leakage- and cooling-aware server modeling and optimization, data center thermal modeling and heterogeneity-aware data center resource allocation, and develops mechanisms for the energy optimization of next-generation applications from a multi-layer perspective.
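The leakage-cooling tradeoff described above can be illustrated with a toy model (the constants and the simple thermal law below are arbitrary, not the thesis's fitted models): fan power grows with the cube of fan speed, leakage grows exponentially with CPU temperature, and higher fan speed lowers that temperature, so sweeping fan speed exposes a minimum-energy operating point.

    import math

    def cpu_temp(fan_speed, ambient=25.0, load_heat=60.0):
        # crude thermal model: more airflow -> lower steady-state temperature
        return ambient + load_heat / (0.5 + fan_speed)

    def leakage_power(temp, a=5.0, b=0.04):
        return a * math.exp(b * temp)          # exponential dependence on temperature

    def fan_power(fan_speed, k=8.0):
        return k * fan_speed ** 3              # cubic dependence on fan speed

    best = min((fan_power(s) + leakage_power(cpu_temp(s)), s)
               for s in [x / 10 for x in range(5, 31)])
    print("minimum total (fan + leakage) power %.1f W at relative fan speed %.1f" % best)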
Abstract:
This project covers the design and implementation of a system that detects anomalies at the entrances of controlled environments. To this end, state-of-the-art computer vision techniques are used, and visual and audible alerts are issued by a hardware system that receives signals from the computer it is connected to. One or more people who commit an infringement while entering an establishment monitored by video systems are marked and photographed, and those images are stored in the corresponding folders. The system is collaborative: the cameras involved communicate with one another through data structures in order to exchange information. Furthermore, a wireless connection from a mobile device provides a global view of the environment from anywhere in the world. The application is developed in the MATLAB environment, which allows image signal processing appropriate for this project. In addition, the user is provided with a graphical interface for simple interaction, avoiding the need to change parameters in the program's internal structure when the environment or the type of data acquisition is to be varied. The chosen language eases execution on different operating systems, including Windows or iOS, thus providing flexibility.
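A minimal sketch of the kind of frame-differencing test a camera node could run to flag an entrance anomaly is shown below; it is written in Python with NumPy rather than the project's MATLAB, and the threshold, frame size and synthetic frames are purely illustrative.

    import numpy as np

    def anomaly_score(prev_frame: np.ndarray, frame: np.ndarray) -> float:
        # fraction of pixels whose grey level changed noticeably between frames
        diff = np.abs(frame.astype(int) - prev_frame.astype(int))
        return float((diff > 30).mean())

    rng = np.random.default_rng(0)
    background = rng.integers(0, 40, size=(120, 160), dtype=np.uint8)
    intruder = background.copy()
    intruder[40:90, 60:100] = 200          # a bright region entering the scene

    if anomaly_score(background, intruder) > 0.05:
        print("anomaly detected: mark frame, store snapshot, notify other cameras")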
Abstract:
Substantial altimetry datasets collected by different satellites have only become available during the past five years, but the future will bring a variety of new altimetry missions, both parallel and consecutive in time. The characteristics of each produced dataset vary with the different orbital heights and inclinations of the spacecraft, as well as with the technical properties of the radar instrument. An integral analysis of datasets with different properties offers advantages both in terms of data quantity and data quality. This thesis is concerned with the development of the means for such integral analysis, in particular for dynamic solutions in which precise orbits for the satellites are computed simultaneously. The first half of the thesis discusses the theory and numerical implementation of dynamic multi-satellite altimetry analysis. The most important aspect of this analysis is the application of dual-satellite altimetry crossover points as a bi-directional tracking data type in simultaneous orbit solutions. The central problem is that the spatial and temporal distributions of the crossovers are in conflict with the time-organised nature of traditional solution methods. Their application to the adjustment of the orbits of both satellites involved in a dual crossover therefore requires several fundamental changes to the classical least-squares prediction/correction methods. The second part of the thesis applies the developed numerical techniques to the problems of precise orbit computation and gravity field adjustment, using the altimetry datasets of ERS-1 and TOPEX/Poseidon. Although the two datasets can be considered less compatible than those of planned future satellite missions, the obtained results adequately illustrate the merits of a simultaneous solution technique. In particular, the geographically correlated orbit error is partially observable from a dataset consisting of crossover differences between two sufficiently different altimetry datasets, while being unobservable from the analysis of altimetry data of both satellites individually. This error signal, which has a substantial gravity-induced component, can be employed advantageously in simultaneous solutions for the two satellites in which the harmonic coefficients of the gravity field model are also estimated.
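The role of dual-satellite crossovers as a bi-directional tracking data type can be illustrated with a toy least-squares problem (NumPy, with each satellite's radial orbit error reduced to a single invented bias; this is not the thesis's dynamic solution): the crossover height differences constrain the combination of the two satellites' radial errors rather than each error separately, which is why they are embedded in a larger simultaneous solution.

    import numpy as np

    rng = np.random.default_rng(1)
    true_bias = {"ERS-1": 0.12, "TOPEX": -0.03}    # metres, invented values

    # Each dual crossover observes bias_ERS - bias_TOPEX plus measurement noise.
    n_obs = 50
    A = np.tile([1.0, -1.0], (n_obs, 1))           # design matrix: one row per crossover
    y = (true_bias["ERS-1"] - true_bias["TOPEX"]) + rng.normal(0.0, 0.02, n_obs)

    est, _, rank, _ = np.linalg.lstsq(A, y, rcond=None)
    print("rank of the crossover-only system:", rank)                 # 1, not 2
    print("estimated bias difference: %.3f m" % (est[0] - est[1]))    # only the difference is observable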
Abstract:
The last decades have been characterized by a continuous adoption of IT solutions in the healthcare sector, which has resulted in the proliferation of tremendous amounts of data over heterogeneous systems. Distinct data types are currently generated, manipulated, and stored in the several institutions where patients are treated. Data sharing and integrated access to this information will allow extracting relevant knowledge that can lead to better diagnostics and treatments. This thesis proposes new integration models for gathering information and extracting knowledge from multiple and heterogeneous biomedical sources. The complexity of the scenario led us to split the integration problem according to the data type and to the usage specificity. The first contribution is a cloud-based architecture for exchanging medical imaging services. It offers a simplified registration mechanism for providers and services, promotes remote data access, and facilitates the integration of distributed data sources. Moreover, it is compliant with international standards, ensuring the platform's interoperability with current medical imaging devices. The second proposal is a sensor-based architecture for the integration of electronic health records. It follows a federated integration model and aims to provide a scalable solution to search and retrieve data from multiple information systems. The last contribution is an open architecture for gathering patient-level data from dispersed and heterogeneous databases. All the proposed solutions were deployed and validated in real-world use cases.
Abstract:
Previous work by Professor John Frazer on Evolutionary Architecture provides a basis for the development of a system evolving architectural envelopes in a generic and abstract manner. Recent research by the authors has focused on the implementation of a virtual environment for the automatic generation and exploration of complex forms and architectural envelopes, based on solid modelling techniques, the integration of evolutionary algorithms, and enhanced computational and mathematical models. Abstract data types are introduced for genotypes in a genetic algorithm in order to develop complex models using generative and evolutionary computing techniques. Multi-objective optimisation techniques are employed for defining the fitness function in the evaluation process.
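A hedged sketch of the idea of treating the genotype as an abstract data type inside a genetic algorithm is given below: the search loop talks to the genotype only through a small interface (construction, mutate, crossover, fitness), so the encoding of the envelope can change without touching the loop. The box-like "envelope", its three genes and the volume-to-surface fitness are placeholders rather than the authors' models, and multi-objective evaluation is reduced to a single objective for brevity.

    import random

    class BoxGenotype:
        """Genotype ADT: an envelope encoded as (width, depth, height) genes."""
        def __init__(self, genes=None):
            self.genes = genes or [random.uniform(1, 20) for _ in range(3)]
        def mutate(self, rate=0.2):
            return BoxGenotype([g + random.gauss(0, 1) if random.random() < rate else g
                                for g in self.genes])
        def crossover(self, other):
            cut = random.randrange(1, len(self.genes))
            return BoxGenotype(self.genes[:cut] + other.genes[cut:])
        def fitness(self):
            w, d, h = (max(0.5, g) for g in self.genes)
            volume, surface = w * d * h, 2 * (w * d + w * h + d * h)
            return volume / surface   # placeholder objective: enclose volume cheaply

    population = [BoxGenotype() for _ in range(30)]
    for generation in range(50):
        population.sort(key=BoxGenotype.fitness, reverse=True)
        parents = population[:10]                              # simple elitist selection
        offspring = [random.choice(parents).crossover(random.choice(parents)).mutate()
                     for _ in range(20)]
        population = parents + offspring

    best = max(population, key=BoxGenotype.fitness)
    print("best genes after 50 generations:", [round(g, 2) for g in best.genes])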
Abstract:
AIM: To draw on empirical evidence to illustrate the core role of nurse practitioners in Australia and New Zealand. BACKGROUND: Enacted legislation provides for mutual recognition of qualifications, including nursing, between New Zealand and Australia. As the nurse practitioner role is relatively new in both countries, there is no consistency in role expectation and hence mutual recognition has not yet been applied to nurse practitioners. A study jointly commissioned by both countries' Regulatory Boards developed information on the core role of the nurse practitioner, to develop shared competency and educational standards. Reporting on this study's process and outcomes provides insights that are relevant both locally and internationally. METHOD: This interpretive study used multiple data sources, including published and grey literature, policy documents, nurse practitioner program curricula and interviews with 15 nurse practitioners from the two countries. Data were analysed according to the appropriate standard for each data type and included both deductive and inductive methods. The data were aggregated thematically according to patterns within and across the interview and material data. FINDINGS: The core role of the nurse practitioner was identified as having three components: dynamic practice, professional efficacy and clinical leadership. Nurse practitioner practice is dynamic and involves the application of high level clinical knowledge and skills in a wide range of contexts. The nurse practitioner demonstrates professional efficacy, enhanced by an extended range of autonomy that includes legislated privileges. The nurse practitioner is a clinical leader with a readiness and an obligation to advocate for their client base and their profession at the systems level of health care. CONCLUSION: A clearly articulated and research informed description of the core role of the nurse practitioner provides the basis for development of educational and practice competency standards. These research findings provide new perspectives to inform the international debate about this extended level of nursing practice. RELEVANCE TO CLINICAL PRACTICE: The findings from this research have the potential to achieve a standardised approach and internationally consistent nomenclature for the nurse practitioner role.
Abstract:
Background: Specialised disease management programmes for chronic heart failure (CHF) improve survival and quality of life and reduce healthcare utilisation. The overall efficacy of structured telephone support or telemonitoring as an individual component of a CHF disease management strategy remains inconclusive. Objectives: To review randomised controlled trials (RCTs) of structured telephone support or telemonitoring compared to standard practice for patients with CHF in order to quantify the effects of these interventions over and above usual care for these patients. Search strategy: Databases (the Cochrane Central Register of Controlled Trials (CENTRAL), Database of Abstracts of Reviews of Effects (DARE) and Health Technology Assessment Database (HTA) on The Cochrane Library, MEDLINE, EMBASE, CINAHL, AMED and Science Citation Index Expanded and Conference Citation Index on ISI Web of Knowledge) and various search engines were searched from 2006 to November 2008 to update a previously published non-Cochrane review. Bibliographies of relevant studies and systematic reviews and abstract conference proceedings were handsearched. No language limits were applied. Selection criteria: Only peer-reviewed, published RCTs comparing structured telephone support or telemonitoring to usual care of CHF patients were included. Unpublished abstract data were included in sensitivity analyses. The intervention or usual care could not include a home visit or more than the usual (four to six weeks) clinic follow-up. Data collection and analysis: Data were presented as risk ratio (RR) with 95% confidence intervals (CI). Primary outcomes included all-cause mortality and all-cause and CHF-related hospitalisations, which were meta-analysed using fixed-effects models. Other outcomes included length of stay, quality of life, acceptability and cost, and these were described and tabulated. Main results: Twenty-five studies and five published abstracts were included. Of the 25 full peer-reviewed studies meta-analysed, 16 evaluated structured telephone support (5613 participants), 11 evaluated telemonitoring (2710 participants), and two tested both interventions (included in counts). Telemonitoring reduced all-cause mortality (RR 0.66, 95% CI 0.54 to 0.81, P < 0.0001), with structured telephone support demonstrating a non-significant positive effect (RR 0.88, 95% CI 0.76 to 1.01, P = 0.08). Both structured telephone support (RR 0.77, 95% CI 0.68 to 0.87, P < 0.0001) and telemonitoring (RR 0.79, 95% CI 0.67 to 0.94, P = 0.008) reduced CHF-related hospitalisations. For both interventions, several studies reported improved quality of life and reduced healthcare costs, and the interventions were acceptable to patients. Improvements in prescribing, patient knowledge and self-care, and New York Heart Association (NYHA) functional class were observed. Authors' conclusions: Structured telephone support and telemonitoring are effective in reducing the risk of all-cause mortality and CHF-related hospitalisations in patients with CHF; they improve quality of life, reduce costs, and improve evidence-based prescribing.
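For readers unfamiliar with how such pooled risk ratios are produced, the short example below performs a fixed-effect, inverse-variance meta-analysis of log risk ratios; the three "trials" are invented numbers for illustration, not data from the review.

    import math

    # (events_intervention, n_intervention, events_control, n_control) -- invented
    trials = [(30, 200, 45, 200), (12, 150, 20, 150), (25, 300, 33, 310)]

    weights, weighted_logs = [], []
    for a, n1, c, n2 in trials:
        log_rr = math.log((a / n1) / (c / n2))
        var = 1/a - 1/n1 + 1/c - 1/n2            # variance of the log risk ratio
        weights.append(1 / var)
        weighted_logs.append(log_rr / var)

    pooled_log = sum(weighted_logs) / sum(weights)
    se = math.sqrt(1 / sum(weights))
    rr = math.exp(pooled_log)
    lo, hi = math.exp(pooled_log - 1.96 * se), math.exp(pooled_log + 1.96 * se)
    print("pooled RR %.2f (95%% CI %.2f to %.2f)" % (rr, lo, hi))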
Abstract:
The 48 hour game making challenge has been running since 2007. In recent years, we have not only been running a 'game jam' for the local community but have also been exploring the way in which the event itself and the place of the event have the potential to create their own stories. Game jams are the creative festivals of the game development community and a game jam is very much an event or performance; its stories are those of subjective experience. Participants return year after year and recount personal stories from previous challenges; arrival in the 48hr location typically inspires instances of individual memory and narration more in keeping with those of a music festival or an oft-frequented holiday destination. Since its inception, the 48hr has been heavily documented, from the photo-blogging of our first jam and the Twitter streams of more recent events to more formal interviews and documentaries (see Anderson, 2012). We have even had our own moments of Gonzo journalism, with an on-site press room one year and an ‘embedded’ journalist another year (Keogh, 2011). In the last two years of the 48hr we have started to explore ways and means to collect more abstract data during the event, that is, empirical data about movement and activity. The intent behind this form of data collection was to explore graphic and computer-generated visualisations of the event, not for the purpose of formal analysis but in the service of further storytelling. [excerpt from truna aka j.turner, Thomas & Owen, 2013] See: truna aka j.turner, Thomas & Owen (2013) Living the indie life: mapping creative teams in a 48 hour game jam and playing with data, Proceedings of the 9th Australasian Conference on Interactive Entertainment, IE'2013, September 30 - October 01 2013, Melbourne, VIC, Australia.
Abstract:
We present an approach to automatically de-identify health records. In our approach, personal health information is identified using a Conditional Random Fields machine learning classifier, a large set of linguistic and lexical features, and pattern matching techniques. Identified personal information is then removed from the reports. The de-identification of personal health information is fundamental for the sharing and secondary use of electronic health records, for example for data mining and disease monitoring. The effectiveness of our approach is first evaluated on the 2007 i2b2 Shared Task dataset, a widely adopted dataset for evaluating de-identification techniques. Subsequently, we investigate the robustness of the approach to limited training data; we study its effectiveness on data of different types and quality by evaluating the approach on scanned pathology reports from an Australian institution. This data contains optical character recognition errors, as well as linguistic conventions that differ from those contained in the i2b2 dataset, for example different date formats. The findings suggest that our approach is comparable to the best approach from the 2007 i2b2 Shared Task; in addition, the approach is found to be robust to variations in training size, data type and quality in the presence of sufficient training data.
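As a rough illustration of the pattern-matching component only (the approach above additionally relies on a Conditional Random Fields classifier and rich lexical features, which are not reproduced here), the snippet below masks a few kinds of identifiers with regular expressions; the patterns and the sample report are invented.

    import re

    PATTERNS = {
        "DATE":  re.compile(r"\b\d{1,2}[/-]\d{1,2}[/-]\d{2,4}\b"),
        "PHONE": re.compile(r"\b\d{2,4}[ -]\d{3,4}[ -]\d{3,4}\b"),
        "MRN":   re.compile(r"\bMRN[:#]?\s*\d+\b", re.IGNORECASE),
    }

    def deidentify(text: str) -> str:
        # Replace each matched span with a placeholder naming its category.
        for label, pattern in PATTERNS.items():
            text = pattern.sub(f"[{label}]", text)
        return text

    report = "Specimen received 03/11/2013, MRN: 483920, contact 07 3138 0000."
    print(deidentify(report))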