796 results for User-based collaborative filtering


Relevance:

30.00%

Publisher:

Abstract:

In November 2010, nearly 110,000 people in the United States were waiting for organs for transplantation. Although the organ donor registration rate has doubled in the last year, Texas has the lowest registration rate in the nation. Given the need for improved registration rates in Texas, this practice-based culminating experience was to write an application for federal funding for the central Texas organ procurement organization, Texas Organ Sharing Alliance. The culminating experience has two levels of significance for public health: (1) to engage in an activity to promote organ donation registration, and (2) to provide professional experience in grant writing. The process began with a literature review to identify successful intervention activities for motivating organ donation registration that could be used in the intervention design for the grant application. Conclusions derived from the literature review included: (1) family discussions need to be specifically encouraged, (2) religious and community leaders can be leveraged to facilitate organ donation conversations in families, (3) communication content must be culturally sensitive, and (4) ethnic disparities in transplantation must be acknowledged and discussed. After the literature review, the experience followed a five-step process of developing the grant application: securing permission to proceed, assembling a project team, creating a project plan and timeline, writing each element of the grant application including the design of the proposed intervention activities, and completing the federal grant application. After the grant application was written, an evaluation of the grant writing process was conducted and opportunities for improvement were identified.
The first opportunity was the need for better timeline management, to allow for review of the application by an independent party, iterative development of the budget proposal, and development of collaborative partnerships. Another improvement opportunity was the management of conflict over the design of the intervention, which stemmed from marketing versus evidence-based approaches. The most important improvement opportunity was the need to develop a more exhaustive evaluation plan. Eight supplementary files are attached as appendices: Feasibility Discussion in Appendix 1, Grant Guidance and Workshop Notes in Appendix 2, Presentation to Texas Organ Sharing Alliance in Appendix 3, Team Recruitment Presentation in Appendix 5, Grant Project Narrative in Appendix 7, Federal Application Form in Appendix 8, and Budget Workbook with Budget Narrative in Appendix 9.

Relevance:

30.00%

Publisher:

Abstract:

Women With IMPACT (WWI) is a community-based preconception care educational intervention. WWI is being implemented by the Impacting Maternal and Prenatal Care Together (IMPACT) Collaborative and targets zip codes in Harris County, Texas at high risk for infant mortality, low birthweight, and preterm birth. WWI started in March 2012 and continues through August 2013; three workshop series are planned. This study was conducted with participants and facilitators from the first workshop series. The study aimed to (1) evaluate the WWI program using empowerment evaluation, (2) engage all WWI stakeholders in an empowerment evaluation so the method could be adopted as a participatory evaluation process for future IMPACT activities, and (3) develop recommendations for sustainability of the WWI intervention, based on empowerment evaluation findings and results from the pre/post program evaluation completed by WWI participants. Study participants included WWI participants and facilitators and IMPACT Collaborative Steering Committee members. WWI participants were female, 18- to 35-year-old, non-pregnant residents of zip codes at high risk of adverse birth outcomes. All other study participants were 18 years or older. A two-phased empowerment evaluation (EE) was used in this study. Sessions 1-4 were conducted independently of one another: three with participants at different sites and one with the facilitators. The fifth session included WWI participant and facilitator representatives and IMPACT Steering Committee members, and built upon the work of the other sessions. Observation notes were recorded during each session. Thematic content analysis was conducted on all EE tables and observation notes. Mission statements drafted by each group focused on improvement of physical and mental health through behavior change and empowerment of all participants. The top five overall program components were: physical activity, nutrition, self-worth, in-class communication, and stress.
Goals for program improvement were set by EE participants for each of these components. Through thematic content analysis of the tables and observation notes, social support emerged as an important theme of the program among all participant groups, and change to a healthy lifestyle emerged as an important theme for program improvement. The two-phased EE provided an opportunity for all program stakeholders to give feedback on important program components and to suggest program improvements. EE, thematic content analysis, pre/post evaluation results, and inherent program knowledge were triangulated to make recommendations to sustain the program once the initial funding ends.

Relevance:

30.00%

Publisher:

Abstract:

Accurate calculation of the absorbed dose to target tumors and normal tissues in the body is an important requirement for establishing fundamental dose-response relationships for radioimmunotherapy. Two major obstacles have been the difficulty of obtaining an accurate patient-specific 3-D activity map in vivo and of calculating the resulting absorbed dose. This study investigated a methodology for 3-D internal dosimetry which integrates the 3-D biodistribution of the radionuclide acquired from SPECT with a dose-point kernel convolution technique to provide the 3-D distribution of absorbed dose. Accurate SPECT images were reconstructed with appropriate methods for noise filtering, attenuation correction, and Compton scatter correction. The SPECT images were converted into activity maps using a calibration phantom. The activity map was convolved with a ¹³¹I dose-point kernel using a 3-D fast Fourier transform to yield a 3-D distribution of absorbed dose. The 3-D absorbed dose map was then processed to provide the absorbed dose distribution in regions of interest. This methodology can provide heterogeneous distributions of absorbed dose in volumes of any size and shape with nonuniform distributions of activity. Comparison of the activities quantitated by our SPECT methodology with the true activities in an Alderson abdominal phantom (with spleen, liver, and spherical tumor) yielded errors of −16.3% to 4.4%. Volume quantitation errors ranged from −4.0% to 5.9% for volumes greater than 88 ml. The percentage differences between the average absorbed dose rates calculated by this methodology and the MIRD S-values were 9.1% for the liver, 13.7% for the spleen, and 0.9% for the tumor. Good agreement (percent differences of less than 8%) was found between the absorbed dose due to penetrating radiation calculated from this methodology and TLD measurements.
More accurate estimates of the 3-D distribution of absorbed dose can be used as a guide in specifying the minimum activity to be administered to patients to deliver a prescribed absorbed dose to the tumor without exceeding the toxicity limits of normal tissues.
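The core numerical step described above — convolving the SPECT-derived activity map with a dose-point kernel via a 3-D fast Fourier transform — can be sketched as follows. This is a minimal illustration with a made-up placeholder kernel, not measured ¹³¹I dose-point data:

```python
import numpy as np

def dose_from_activity(activity, kernel):
    """Convolve a 3-D activity map with a dose-point kernel using FFTs."""
    # Zero-pad both arrays to avoid circular-convolution wrap-around.
    shape = [a + k - 1 for a, k in zip(activity.shape, kernel.shape)]
    A = np.fft.rfftn(activity, shape)
    K = np.fft.rfftn(kernel, shape)
    dose = np.fft.irfftn(A * K, shape)
    # Crop back to the activity-map grid (kernel assumed centred).
    start = [k // 2 for k in kernel.shape]
    sl = tuple(slice(s, s + n) for s, n in zip(start, activity.shape))
    return dose[sl]

# Toy example: a point source and a smooth, radially decaying placeholder
# kernel (illustrative only, not real 131-I dose-point data).
act = np.zeros((16, 16, 16))
act[8, 8, 8] = 1.0
z, y, x = np.indices((5, 5, 5)) - 2
kern = 1.0 / (1.0 + x**2 + y**2 + z**2)
d = dose_from_activity(act, kern)
print(d[8, 8, 8])  # peak dose at the source voxel
```

For a point source, the output simply reproduces the kernel centred on the source voxel, which is a quick sanity check before applying the same routine to a full SPECT-derived activity map.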

Relevance:

30.00%

Publisher:

Abstract:

The Institute of Medicine (IOM) report on the future of health care states that the focus on health needs to shift to the management and prevention of chronic illnesses, and that academic health centers (AHCs) should play an active role in this process through community partnerships (IOM, 2002). Grant funding from the National Institutes of Health and the creation of the Centers for Disease Control and Prevention (CDC) Prevention Research Centers (PRCs) across the country represent a transition toward more proactively seeking out community partnerships to better design and disseminate health promotion programs (Green, 2001). The focus of the PRCs is to conduct rigorous, community-based prevention research and to seek outcomes applicable to public health programs and policies. The PRCs' work is to create and foster partnerships among public health and community organizations to address health promotion and disease prevention issues (CDC, 2003). The W.K. Kellogg Foundation defines CBPR as "a collaborative approach to research that equitably involves all partners in the research process and recognizes the unique strengths that each brings. CBPR begins with a research topic of importance to the community with the aim of combining knowledge and action for social change to improve community health." In 1995, the CDC asked the IOM to review the PRC program to examine the extent to which the program is providing the public health community with strategies to address public health problems in disease prevention and health promotion (IOM, 1997). No comprehensive evaluation of the individual PRCs had ever been done (IOM, 1997). The CDC was interested in understanding how it could better support the PRC program through improved management and oversight to influence the program's success. However, the CDC represents only one of the entities that influence the success of a PRC.
Another key entity to consider is the support and influence of the Schools of Public Health in which the PRCs reside. Using evaluation criteria similar to those developed by the IOM, this study examined how aspects of the structural capacity of the Schools of Public Health in which the PRCs reside are perceived to influence PRC community-based research activities.

Relevance:

30.00%

Publisher:

Abstract:

State-of-the-art process-based models have been shown to be applicable to the simulation and prediction of coastal morphodynamics. On annual to decadal temporal scales, however, these models may show limitations in reproducing complex natural morphological evolution patterns, such as the movement of bars and tidal channels, e.g. the observed decadal migration of the Medem Channel in the Elbe Estuary, German Bight. Here, a morphodynamic model is shown to simulate the hydrodynamics and sediment budgets of the domain to some extent, but it fails to adequately reproduce the pronounced channel migration, owing to the insufficient implementation of bank erosion processes. In order to allow for long-term simulations of the domain, a nudging method has been introduced to update the model-predicted bathymetries with observations: the model-predicted bathymetry is nudged towards true states in annual time steps. A sensitivity analysis of the user-defined correlation length scale, which defines the background error covariance matrix used in the nudging procedure, suggests that the optimal error correlation length is similar to the grid cell size, here 80-90 m. Additionally, spatially heterogeneous correlation lengths produce more realistic channel depths than spatially homogeneous ones. Consecutive application of the nudging method compensates for the (stand-alone) model prediction errors and corrects the channel migration pattern, with a Brier skill score of 0.78. The nudging method proposed in this study serves as an analytical approach to updating model predictions towards a predefined 'true' state, enabling the spatiotemporal interpolation of incomplete morphological data in long-term simulations.
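The nudging step can be illustrated with a minimal 1-D sketch along a transect. Only the Gaussian weighting controlled by a correlation length scale reflects the method described; the grid spacing, gain and depth values are illustrative assumptions:

```python
import numpy as np

def nudge(x_model, grid, obs_pos, obs_val, L, gain=1.0):
    """Nudge a model-predicted profile towards sparse observations,
    spreading each innovation (obs - model) over neighbouring cells
    with Gaussian weights exp(-d^2 / (2 L^2))."""
    x = x_model.copy()
    for p, v in zip(obs_pos, obs_val):
        w = np.exp(-((grid - p) ** 2) / (2.0 * L ** 2))
        innov = v - np.interp(p, grid, x_model)
        x += gain * w * innov
    return x

grid = np.arange(0.0, 1000.0, 80.0)     # 80 m cells, as in the study
model = np.full_like(grid, -5.0)        # flat model-predicted bed (m)
analysis = nudge(model, grid, obs_pos=[400.0], obs_val=[-12.0], L=85.0)
# The cell nearest the observed channel is pulled towards -12 m,
# while distant cells stay near the model value of -5 m.
```

With L close to the grid cell size, each observation corrects mainly its own cell and its immediate neighbours, which is consistent with the sensitivity result reported above.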

Relevance:

30.00%

Publisher:

Abstract:

The SES_GR2_Copepod Ingestion on ciliates and phytoplankton dataset is based on samples taken during August-September 2008 in the Ionian Sea, the Libyan Sea, the Southern Aegean Sea and the Northern Aegean Sea. Ingestion rates were estimated from experiments performed at all third-priority stations during the cruise, according to the DoW of the SESAME project. Copepods for the experiments were obtained with slow, non-quantitative tows from the upper 100 m of the water column, using 200 µm mesh nets fitted with a large non-filtering cod end. The following copepod species were used for the grazing experiments: Clausocalanus furcatus, Oithona spp., Temora stylifera and Acartia spp. (Bamstedt et al. 2000). Copepod clearance rates on ciliates were calculated according to the Frost equations (Frost 1972), and ingestion rates were calculated by multiplying clearance rates by the initial standing stocks (Bamstedt et al. 2000).
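The clearance and ingestion calculation referenced above can be sketched as follows. All counts, volumes and incubation times are made-up illustrative numbers, and the bottle design is a generic assumption, not the cruise protocol:

```python
import math

def frost_rates(c0_ctrl, c1_ctrl, c0_graz, c1_graz, t, volume, n_copepods):
    """Return (clearance, ingestion) per copepod per unit time from
    paired control and grazed bottle incubations (Frost 1972)."""
    k = math.log(c1_ctrl / c0_ctrl) / t       # prey growth rate in controls
    # In grazed bottles the apparent growth rate is k - g:
    g = k - math.log(c1_graz / c0_graz) / t   # grazing coefficient
    clearance = volume * g / n_copepods       # e.g. ml copepod^-1 h^-1
    # Per the dataset description, ingestion = clearance x initial stock:
    ingestion = clearance * c0_graz           # e.g. cells copepod^-1 h^-1
    return clearance, ingestion

# Illustrative numbers: 500 ml bottles, 24 h incubation, 10 copepods,
# ciliate counts in cells/ml.
cl, ing = frost_rates(100.0, 110.0, 100.0, 90.0,
                      t=24.0, volume=500.0, n_copepods=10)
```

A positive grazing coefficient g (prey growing slower in grazed bottles than in controls) yields a positive clearance rate, as expected.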

Relevance:

30.00%

Publisher:

Abstract:

This article presents a probabilistic method for vehicle detection and tracking through the analysis of monocular images obtained from a vehicle-mounted camera. The method is designed to address the main shortcomings of traditional particle filtering approaches, namely Bayesian methods based on importance sampling, for use in traffic environments. These methods do not scale well when the dimensionality of the feature space grows, which creates significant limitations when tracking multiple objects. Alternatively, the proposed method is based on a Markov chain Monte Carlo (MCMC) approach, which allows efficient sampling of the feature space. The method involves important contributions in both the motion and the observation models of the tracker. Indeed, as opposed to particle filter-based tracking methods in the literature, which typically resort to observation models based on appearance or template matching, in this study a likelihood model that combines appearance analysis with information from motion parallax is introduced. Regarding the motion model, a new interaction treatment is defined based on Markov random fields (MRF) that allows for the handling of possible inter-dependencies in vehicle trajectories. As for vehicle detection, the method relies on a supervised classification stage using support vector machines (SVM). The contribution in this field is twofold. First, a new descriptor based on the analysis of gradient orientations in concentric rectangles is defined. This descriptor involves a much smaller feature space compared to traditional descriptors, which are too costly for real-time applications. Second, a new vehicle image database is generated to train the SVM and made public. The proposed vehicle detection and tracking method is proven to outperform existing methods and to successfully handle challenging situations in the test sequences.
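The descriptor idea — gradient-orientation histograms computed over concentric rectangular rings of a patch — can be sketched as follows. This is an illustrative reconstruction, not the authors' exact implementation; the ring count and bin count are assumed parameters:

```python
import numpy as np

def concentric_descriptor(patch, n_rings=3, n_bins=8):
    """Histogram of gradient orientations over concentric rectangular
    rings; yields n_rings * n_bins values, far fewer than dense HOG."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)   # orientation in [0, pi)
    h, w = patch.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.indices(patch.shape)
    # A Chebyshev-like distance assigns each pixel to a rectangular ring.
    d = np.maximum(np.abs(yy - cy) / (h / 2.0), np.abs(xx - cx) / (w / 2.0))
    ring = np.minimum((d * n_rings).astype(int), n_rings - 1)
    desc = np.zeros(n_rings * n_bins)
    for r in range(n_rings):
        m = ring == r
        hist, _ = np.histogram(ang[m], bins=n_bins, range=(0, np.pi),
                               weights=mag[m])
        s = hist.sum()
        desc[r * n_bins:(r + 1) * n_bins] = hist / s if s > 0 else hist
    return desc

desc = concentric_descriptor(np.random.default_rng(0).random((32, 32)))
print(desc.shape)  # (24,)
```

A 24-dimensional vector per patch is orders of magnitude smaller than a dense per-cell HOG over the same window, which is the point the abstract makes about real-time feasibility.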

Relevance:

30.00%

Publisher:

Abstract:

The goal of this paper is to present the results of an ongoing experience in teaching project management to undergraduate students, following a scheme that develops management-related competencies on an individual basis. To achieve that goal, the students are organized in teams that must solve a problem and manage the development of a feasible solution to satisfy the needs of a client. The innovative component advocated in this paper is the formal introduction of negotiation and virtual team management aspects, as different teams from different universities at different locations, comprising students with different backgrounds, must collaborate and compete among themselves. The different learning aspects are identified, and the improvement levels are reflected in a rubric designed ad hoc for this experience. Finally, the effort frameworks for the student and the instructor have been established according to the requirements of the Bologna paradigms. The experience is supported by a software-based system allowing blended learning for the theoretical and individual-work aspects (blogs, wikis, etc.), as well as web-based project management tools that allow the monitoring not only of the expected deliverables and the achievement of the goals, but also of the progress made in learning, as established in the defined rubric.

Relevance:

30.00%

Publisher:

Abstract:

Next Generation Networks (NGN) provide telecommunications operators with the possibility to share their resources and infrastructure, facilitate interoperability with other networks, and simplify and unify the management, operation and maintenance of service offerings, thus enabling the fast and cost-effective creation of new personal, broadband, ubiquitous services. Unfortunately, service creation over NGN is far from matching the success of service creation on the Web, especially when it comes to Web 2.0. This paper presents a novel approach to service creation and delivery: a platform that opens to non-technically skilled users the possibility to create, manage and share their own convergent (NGN-based and Web-based) services. To this end, the business approach to user-generated services is analyzed and the technological bases supporting the proposal are explained.

Relevance:

30.00%

Publisher:

Abstract:

Enabling real end-user programming development is the next logical stage in the evolution of Internet-wide service-based applications. Even so, the vision of end users programming their own web-based solutions has not yet materialized. This will continue to be so unless both industry and the research community rise to the ambitious challenge of devising an end-to-end compositional model for developing a new age of end-user web application development tools. This paper describes a new composition model designed to empower programming-illiterate end users to create and share their own off-the-shelf rich Internet applications in a fully visual fashion. It presents the main insights and outcomes of our research and development efforts as part of a number of successful European Union research projects. A framework implementing this model was developed as part of the European Seventh Framework Programme FAST Project and the Spanish EzWeb Project, and allowed us to validate the rationale behind our approach.

Relevance:

30.00%

Publisher:

Abstract:

Cultural content on the Web is available in various domains (cultural objects, datasets, geospatial data, moving images, scholarly texts and visual resources), concerns various topics, is written in different languages, is targeted at both laymen and experts, and is provided by different communities (libraries, archives, museums and the information industry) and individuals (Figure 1). The integration of information technologies and cultural heritage content on the Web is expected to have an impact on everyday life from the point of view of institutions, communities and individuals. In particular, collaborative environments can recreate 3D navigable worlds that can offer new insights into our cultural heritage (Chan 2007). However, the main barrier for end-users of cultural contents, as well as for the organisations and communities managing and producing them, is finding and relating cultural heritage information. In this paper, we explore several visualisation techniques for supporting cultural interfaces, where the role of metadata is essential for supporting search and communication among end-users (Figure 2). A conceptual framework was developed to integrate the data, purpose, technology, impact, and form components of a collaborative environment. Our preliminary results show that collaborative environments can help with cultural heritage information sharing and communication tasks because of the way in which they provide a visual context to end-users. They can be regarded as distributed virtual reality systems that offer graphically realised, potentially infinite, digital information landscapes. Moreover, collaborative environments also provide a new way of interaction between an end-user and a cultural heritage dataset. Finally, the visualisation of the metadata of a dataset plays an important role in helping end-users in their search for heritage contents on the Web.

Relevance:

30.00%

Publisher:

Abstract:

This thesis addresses on-road vehicle detection and tracking with a monocular vision system. This problem has attracted the attention of the automotive industry and the research community, as it is the first step towards driver assistance and collision avoidance systems and, ultimately, autonomous driving. Although much effort has been devoted to it in recent years, no fully satisfactory solution has yet been devised, and it thus remains an active research issue. The main challenges for vision-based vehicle detection and tracking are the high variability among vehicles, the dynamically changing background due to camera motion, and the real-time processing requirement. In this thesis, a unified approach using statistical methods is presented for vehicle detection and tracking that tackles these issues. The approach is divided into three primary tasks, i.e., vehicle hypothesis generation, hypothesis verification, and vehicle tracking, which are performed sequentially. Nevertheless, the exchange of information between processing blocks is fostered so that the maximum degree of adaptation to changes in the environment can be achieved and the computational cost is alleviated. Two complementary strategies are proposed to address the first task, i.e., hypothesis generation, based respectively on appearance and geometry analysis. To this end, the use of a rectified domain in which the perspective is removed from the original image is especially interesting, as it allows for fast image scanning and coarse hypothesis generation. The final vehicle candidates are produced using a collaborative framework between the original and the rectified domains. A supervised classification strategy is adopted for the verification of the hypothesized vehicle locations. In particular, state-of-the-art methods for feature extraction are evaluated and new descriptors are proposed by exploiting knowledge of vehicle appearance. Due to the lack of appropriate public databases, a new database is generated, and the classification performance of the descriptors is extensively tested on it. Finally, a methodology for the fusion of the different classifiers is presented and the best combinations are discussed. The core of the proposed approach is a Bayesian tracking framework using particle filters. Contributions are made on its three key elements: the inference algorithm, the dynamic model and the observation model. In particular, the use of a Markov chain Monte Carlo method is proposed for sampling, which circumvents the exponential complexity increase of traditional particle filters, thus making joint multiple-vehicle tracking affordable. On the other hand, the aforementioned rectified domain allows for the definition of a constant-velocity dynamic model, since it preserves the smooth motion of vehicles on highways. Finally, a multiple-cue observation model is proposed that not only accounts for vehicle appearance but also integrates the available information from the analysis in the previous blocks. The proposed approach is shown to run in near real time on a general-purpose PC and to deliver outstanding results compared to traditional methods.
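The MCMC sampling idea at the core of the tracker can be illustrated with a toy Metropolis-Hastings step over a single vehicle state. The Gaussian likelihood below is a stand-in for the thesis's multiple-cue observation model, and all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def log_likelihood(state, measurement, sigma=1.0):
    # Placeholder single-cue likelihood: Gaussian around the measured position.
    return -0.5 * np.sum((state[:2] - measurement) ** 2) / sigma ** 2

def mcmc_track_step(prev_state, measurement, n_samples=500, step=0.3):
    """Sample the posterior of a state [x, y, vx, vy] after one time step."""
    # Constant-velocity prediction (natural in the rectified bird's-eye domain).
    pred = prev_state.copy()
    pred[:2] += pred[2:]
    cur, cur_ll = pred, log_likelihood(pred, measurement)
    samples = []
    for _ in range(n_samples):
        cand = cur + rng.normal(0.0, step, size=4)   # random-walk proposal
        cand_ll = log_likelihood(cand, measurement)
        if np.log(rng.random()) < cand_ll - cur_ll:  # Metropolis accept/reject
            cur, cur_ll = cand, cand_ll
        samples.append(cur)
    return np.mean(samples, axis=0)                  # posterior mean estimate

state = np.array([0.0, 0.0, 1.0, 0.0])               # moving right, 1 unit/step
est = mcmc_track_step(state, measurement=np.array([1.1, 0.05]))
```

Unlike importance sampling, the chain explores the state space locally, which is what keeps the cost manageable when the state is extended to multiple vehicles jointly.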

Relevance:

30.00%

Publisher:

Abstract:

Nowadays, the Internet is a place where social networks have had a significant impact on collaboration among people all over the world. This article proposes a new paradigm for building CSCW business tools following the novel ideas provided by the social web to collaborate and generate awareness. An implementation of these concepts is described, including the components we provide for collaborating in workspaces (such as videoconference, chat, desktop sharing, forums or temporal events) and the way we generate awareness from these complex social data structures. Figures and validation results are also presented to stress that this architecture has been defined to support awareness generation by joining current and future social data from the business and social network worlds, based on the idea of using social data stored in the cloud.

Relevance:

30.00%

Publisher:

Abstract:

Semantic technologies have become widely adopted in recent years, and choosing the right technologies for the problems users face is often a difficult task. This paper presents an application of the Analytic Network Process (ANP) to the recommendation of semantic technologies, based on a quality model for semantic technologies. Instead of relying on expert-based comparisons of alternatives, the comparisons in our framework depend on real evaluation results. Furthermore, the recommendations in our framework derive from user quality requirements, which leads to recommendations better tailored to users' needs. This paper also presents an algorithm for pairwise comparisons based on user quality requirements and evaluation results.
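The central idea — deriving pairwise comparisons from measured evaluation results rather than expert judgement — can be sketched as follows. The tool names, scores and the ratio-based comparison rule are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

# Measured evaluation results for three hypothetical tools
# (e.g. recall on some benchmark); higher is better.
scores = {"ToolA": 0.90, "ToolB": 0.60, "ToolC": 0.30}
names = list(scores)

# Pairwise comparison matrix: a_ij = score_i / score_j.
# By construction it is reciprocal (a_ji = 1 / a_ij), as AHP/ANP requires.
A = np.array([[scores[i] / scores[j] for j in names] for i in names])

# Priority vector = normalized principal eigenvector, as in AHP/ANP.
vals, vecs = np.linalg.eig(A)
w = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
w = w / w.sum()
print(dict(zip(names, w.round(3))))  # ToolA ranked first
```

Because the matrix is built from a single consistent score vector it is perfectly consistent, so the eigenvector simply recovers the normalized scores; with several quality criteria, one such matrix per criterion would be combined through the ANP supermatrix.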

Relevance:

30.00%

Publisher:

Abstract:

Managing large medical image collections is an increasingly demanding issue in many hospitals and other medical settings. A huge amount of such information is generated daily, which requires robust and agile systems. In this paper we present a distributed multi-agent system capable of managing very large medical image datasets. In this approach, agents extract low-level information from images and store it in a data structure implemented in a relational database. The data structure can also store semantic information related to images and to particular regions. A distinctive aspect of our work is that a single image can be divided so that the resultant sub-images can be stored and managed separately by different agents, improving performance in data access and processing. The system also offers the possibility of applying region-based operations and filters to images, facilitating image classification. These operations can be performed directly on the data structures in the database.
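The sub-image storage idea can be sketched with a minimal assumed schema; the table layout and identifiers below are hypothetical, not the authors' actual system:

```python
import sqlite3
import numpy as np

def split_and_store(image, tile, db_path=":memory:"):
    """Split a 2-D image into tile x tile sub-images and store each as a
    separate row, so different agents could own and process them independently."""
    con = sqlite3.connect(db_path)
    con.execute("""CREATE TABLE IF NOT EXISTS tiles (
                       image_id TEXT, tile_row INTEGER, tile_col INTEGER,
                       h INTEGER, w INTEGER, data BLOB)""")
    h, w = image.shape
    for r in range(0, h, tile):
        for c in range(0, w, tile):
            sub = image[r:r + tile, c:c + tile]
            con.execute("INSERT INTO tiles VALUES (?, ?, ?, ?, ?, ?)",
                        ("img-001", r // tile, c // tile,
                         sub.shape[0], sub.shape[1], sub.tobytes()))
    con.commit()
    return con

img = (np.arange(64 * 64) % 256).astype(np.uint8).reshape(64, 64)
con = split_and_store(img, tile=32)
n_tiles = con.execute("SELECT COUNT(*) FROM tiles").fetchone()[0]
print(n_tiles)  # 4 sub-images, each of which a different agent could own
```

Region-based operations then reduce to SQL queries that fetch only the tiles overlapping a region of interest, instead of loading the whole image.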