940 results for Multi-Choice mixed integer goal programming
Abstract:
Research question/Introduction: It is unclear to what extent differences exist between the use of key feature problems (KFP) with long-menu questions and case-based type A questions (FTA) for assessing clinical reasoning in the clinical training of medical students. Methods: Fifth-year medical students took part in their clinical paediatrics rotation, which ended with a summative examination. Knowledge was assessed electronically in each examination with 6-9 KFP [1], [3], 9-20 FTA, and 9-28 non-case-based multiple-choice questions (NFTA). Each KFP consisted of a case vignette and three key features and used a so-called long menu [4] as the answer format. We examined the perception of the KFP and FTA in focus groups [2] (n of students = 39). In addition, the statistical characteristics of the KFP and FTA from 11 examinations (n of students = 377) were compared. Results: The analysis of the focus groups yielded four themes describing the perception of the KFP and their comparison with the FTA: KFP were perceived as (1) more realistic, (2) more difficult, and (3) more motivating for intensive self-study of clinical reasoning than FTA, and (4) showed good overall acceptance provided certain prerequisites are taken into account. The statistical analysis showed no difference in difficulty; however, the KFP showed higher discrimination and reliability (G coefficient) even after correcting for testing time. The correlation between the different parts of the examinations was moderate. Discussion/Conclusion: Students experienced the KFP as more motivating for self-study of clinical reasoning. Statistically, the KFP showed greater discrimination and higher reliability than the FTA. Including KFP with long menus in examinations of the clinical phase of undergraduate medical education appears promising and seems to have an "educational effect".
Abstract:
Efforts to understand and model the dynamics of the upper ocean would be significantly advanced given the ability to rapidly determine mixed layer depths (MLDs) over large regions. Remote sensing technologies are an ideal choice for achieving this goal. This study addresses the feasibility of estimating MLDs from optical properties. These properties are strongly influenced by suspended particle concentrations, which generally reach a maximum at pycnoclines. The premise therefore is to use a gradient in beam attenuation at 660 nm (c660) as a proxy for the depth of a particle-scattering layer. Using a global data set collected during World Ocean Circulation Experiment cruises from 1988 to 1997, six algorithms were employed to compute MLDs from either density or temperature profiles. Given the absence of published optically based MLD algorithms, two new methods were developed that use c660 profiles to estimate the MLD. Intercomparison of the six hydrographically based algorithms revealed some significant disparities among the resulting MLD values. Comparisons between the hydrographic and optical approaches indicated a first-order agreement between the MLDs based on the depths of gradient maxima for density and c660. When comparing various hydrographically based algorithms, other investigators reported that inherent fluctuations of the mixed layer depth limit the accuracy of its determination to 20 m. Using this benchmark, we found an approximately 70% agreement between the best hydrographic-optical algorithm pairings.
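As an illustration of the optical premise described in this abstract, the sketch below (not the study's published algorithms) estimates an MLD as the depth of the maximum vertical gradient of a c660 profile and compares it with a simple density-threshold criterion; the synthetic profile, the 0.03 kg/m³ threshold, and the 10 m reference depth are assumptions for demonstration only.

```python
# Illustrative sketch (not the study's exact algorithms): estimate mixed layer
# depth (MLD) from a beam-attenuation profile by locating the depth of the
# maximum vertical gradient of c660, and compare with a simple density-threshold
# method. The profiles and the 0.03 kg/m^3 threshold are assumed values.
import numpy as np

def mld_from_gradient(depth, profile):
    """Depth at which the vertical gradient of `profile` is largest in magnitude."""
    grad = np.gradient(profile, depth)
    return depth[np.argmax(np.abs(grad))]

def mld_from_density_threshold(depth, sigma_t, threshold=0.03, ref_depth=10.0):
    """First depth where density exceeds its near-surface value by `threshold` (kg/m^3)."""
    sigma_ref = np.interp(ref_depth, depth, sigma_t)
    below = np.where(sigma_t > sigma_ref + threshold)[0]
    return depth[below[0]] if below.size else depth[-1]

# Synthetic example profiles (assumed, for illustration only)
z = np.arange(0, 200, 2.0)                            # depth in metres
sigma_t = 24.0 + 0.5 / (1 + np.exp(-(z - 60) / 5))    # pycnocline near 60 m
c660 = 0.40 + 0.15 * np.exp(-((z - 60) ** 2) / 50)    # particle layer at the pycnocline

print(mld_from_gradient(z, c660))              # optical estimate
print(mld_from_density_threshold(z, sigma_t))  # hydrographic estimate
```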
Abstract:
Anticancer drugs typically are administered in the clinic in the form of mixtures, sometimes called combinations. Only in rare cases, however, are mixtures approved as drugs. Rather, research on mixtures tends to occur after single drugs have been approved. The goal of this research project was to develop modeling approaches that would encourage rational preclinical mixture design. To this end, a series of models were developed. First, several QSAR classification models were constructed to predict the cytotoxicity, oral clearance, and acute systemic toxicity of drugs. The QSAR models were applied to a set of over 115,000 natural compounds in order to identify promising ones for testing in mixtures. Second, an improved method was developed to assess synergistic, antagonistic, and additive effects between drugs in a mixture. This method, dubbed the MixLow method, is similar to the Median-Effect method, the de facto standard for assessing drug interactions. The primary difference between the two is that the MixLow method uses a nonlinear mixed-effects model to estimate parameters of concentration-effect curves, rather than an ordinary least squares procedure. Parameter estimators produced by the MixLow method were more precise than those produced by the Median-Effect method, and coverage of Loewe index confidence intervals was superior. Third, a model was developed to predict drug interactions based on scores obtained from virtual docking experiments. This represents a novel approach for modeling drug mixtures and was more useful for the data modeled here than competing approaches. The model was applied to cytotoxicity data for 45 mixtures, each composed of up to 10 selected drugs. One drug, doxorubicin, was a standard chemotherapy agent and the others were well-known natural compounds including curcumin, EGCG, quercetin, and rhein. Predictions of synergism/antagonism were made for all possible fixed-ratio mixtures, cytotoxicities of the 10 best-scoring mixtures were tested, and drug interactions were assessed. Predicted and observed responses were highly correlated (r² = 0.83). Results suggested that some mixtures allowed up to an 11-fold reduction of doxorubicin concentrations without sacrificing efficacy. Taken together, the models developed in this project present a general approach to rational design of mixtures during preclinical drug development.
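For context, the hedged sketch below shows how a Loewe index can be computed from median-effect (concentration-effect) parameters, which is the kind of quantity the MixLow and Median-Effect methods estimate; it does not reproduce the nonlinear mixed-effects fitting used by MixLow, and all parameter values are illustrative assumptions.

```python
# Hedged sketch of a Loewe (combination) index computation. Single-drug
# concentration-effect curves are assumed to follow the median-effect form
# fa/fu = (d/Dm)^m; the mixed-effects fitting used by MixLow is not reproduced
# here, and the parameter values below are illustrative assumptions.
def dose_for_effect(fa, Dm, m):
    """Dose of a single drug producing affected fraction `fa` under the median-effect model."""
    return Dm * (fa / (1.0 - fa)) ** (1.0 / m)

def loewe_index(doses_in_mixture, fa, params):
    """Loewe index at effect level fa: <1 suggests synergy, =1 additivity, >1 antagonism."""
    return sum(d / dose_for_effect(fa, Dm, m)
               for d, (Dm, m) in zip(doses_in_mixture, params))

# Illustrative two-drug mixture: (Dm, m) per drug are assumed fitted values.
params = [(1.2, 1.0), (8.0, 1.5)]   # e.g. a chemotherapy agent and a natural compound
mixture_doses = [0.4, 2.0]          # doses of each drug present in the mixture
print(loewe_index(mixture_doses, fa=0.5, params=params))
```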
Abstract:
The current standard treatment for head and neck cancer at our institution uses intensity-modulated x-ray therapy (IMRT), which improves target coverage and sparing of critical structures by delivering complex fluence patterns from a variety of beam directions to conform dose distributions to the shape of the target volume. The standard treatment for breast patients is field-in-field forward-planned IMRT, with initial tangential fields and additional reduced-weight tangents with blocking to minimize hot spots. For these treatment sites, the addition of electrons has the potential of improving target coverage and sparing of critical structures due to rapid dose falloff with depth and reduced exit dose. In this work, the use of mixed-beam therapy (MBT), i.e., combined intensity-modulated electron and x-ray beams using the x-ray multi-leaf collimator (MLC), was explored. The hypothesis of this study was that addition of intensity-modulated electron beams to existing clinical IMRT plans would produce MBT plans that were superior to the original IMRT plans for at least 50% of selected head and neck and 50% of breast cases. Dose calculations for electron beams collimated by the MLC were performed with Monte Carlo methods. An automation system was created to facilitate communication between the dose calculation engine and the treatment planning system. Energy and intensity modulation of the electron beams was accomplished by dividing the electron beams into 2 × 2 cm² beamlets, which were then beam-weight optimized along with intensity-modulated x-ray beams. Treatment plans were optimized to obtain equivalent target dose coverage, and then compared with the original treatment plans. MBT treatment plans were evaluated by participating physicians with respect to target coverage, normal structure dose, and overall plan quality in comparison with original clinical plans. The physician evaluations did not support the hypothesis for either site, with MBT selected as superior in 1 out of the 15 head and neck cases (p=1) and 6 out of 18 breast cases (p=0.95). While MBT was not shown to be superior to IMRT, reductions were observed in doses to critical structures distal to the target along the electron beam direction and to non-target tissues, at the expense of target coverage and dose homogeneity.
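As a toy illustration of the beamlet weight optimization step described above, the sketch below solves a nonnegative least-squares problem for beamlet weights given a precomputed dose-influence matrix; the random matrix and the uniform 60 Gy prescription are assumptions, not clinical data or the study's actual optimizer.

```python
# Minimal sketch of beam-weight optimization: given a precomputed dose-influence
# matrix (dose per unit weight from each electron or x-ray beamlet to each voxel,
# e.g. from a Monte Carlo engine), solve for nonnegative beamlet weights that best
# reproduce a prescribed dose. The matrix and prescription are toy assumptions.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_voxels, n_beamlets = 200, 30
dose_per_weight = rng.uniform(0.0, 1.0, size=(n_voxels, n_beamlets))  # D[i, j]
prescription = np.full(n_voxels, 60.0)                                # target dose (Gy)

# w >= 0 minimizing ||D w - p||_2
weights, residual = nnls(dose_per_weight, prescription)
print("nonzero beamlets:", int((weights > 1e-6).sum()), "residual:", round(float(residual), 2))
```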
Abstract:
Community-based participatory research necessitates that community members act as partners in decision making and mutual learning and discovery. In the same light, for programs and issues involving youth, youth should be partners in knowledge sharing and evaluation (Checkoway & Richards-Schuster, 2004). This study is a youth-focused empowerment evaluation for the Successful Youth program. Successful Youth is a multi-component youth development after-school program for Latino middle school youth, created with the goal of reducing teen pregnancy. An empowerment evaluation is collaborative and participatory (Balcazar and Harper 2003). The three steps of an empowerment evaluation are: (1) defining mission, (2) taking stock, and (3) planning for the future (Fetterman 2001). In a program where youth are developing leadership skills, making choices, and learning how to self-reflect and evaluate, the empowerment evaluation could not be more aligned with promoting and enhancing these skills. In addition, an empowerment evaluation is designed to "foster improvement and self-determination" and "build capacity" (Fetterman 2001). Four empowerment groups were conducted with approximately 6-9 Latino 7th grade students per group. All participants were enrolled in the Successful Youth program. Results indicate points where students' perceptions of the program were aligned with the program's mission and where gaps were identified. Students offered recommendations for program improvements. Additionally, students enjoyed expressing their feelings about the program and appreciated that their opinions were valued. Youth recommendations will be brought to program staff and, where possible, gaps will be addressed. Empowerment evaluations with youth will continue throughout the program so that youth involvement and input remain integral to the evaluation and to ascertain whether the program's goals are being met.
Abstract:
Background. The CDC estimates that 40% of adults 50 years of age or older do not receive time-appropriate colorectal cancer screening. Sixty percent of colorectal cancer deaths could be prevented by regular screening of adults 50 years of age and older. Yet, in 2000 only 42.5% of adults age 50 or older in the U.S. had received recommended screening. Disparities by health care, nativity status, socioeconomic status, and race/ethnicity are evident. Disparities in minority, underserved populations prevent us from attaining Goal 2 of Healthy People 2010 to "eliminate health disparities." This review focuses on community-based screening research among underserved populations that includes multiple ethnic groups for appropriate disparities analysis. There is a gap in the colorectal cancer screening literature describing the effectiveness of community-based randomized controlled trials. Objective. To critically review the literature describing community-based colorectal cancer screening strategies that are randomized controlled trials and that include multiple racial/ethnic groups. Methods. The review includes a preliminary disparities analysis to assess whether interventions were appropriately targeted in communities to those groups experiencing the greatest health disparities. Review articles are from an original search using Ovid Medline and a cross-matching search in PubMed, both from January 2001 to June 2009. The Ovid Medline literature review was divided into eight exclusionary stages: seven electronic stages and a final manual review. Results. The final studies (n=15) are categorized into four categories: Patient mailings (n=3), Telephone outreach (n=3), Electronic/multimedia (n=4), and Counseling/community education (n=5). Of the 15 studies, 11 (73%) demonstrated that screening rates increased for the intervention group compared to controls, including all studies (100%) from the Patient mailings and Telephone outreach groups, 4 of 5 (80%) Counseling/community education studies, and 1 of 4 (25%) Electronic/multimedia interventions. Conclusions. Patient choice and tailoring education and/or messages to individuals have proven to be two important factors in improving colorectal cancer screening adherence rates. Technological strategies have not been overly successful with underserved populations in community-based trials. Based on limited findings to date, future community-based colorectal cancer screening trials should include diverse populations who are experiencing incidence, survival, mortality, and screening disparities.
Abstract:
Distributed parallel execution systems speed up applications by splitting tasks into processes whose execution is assigned to different receiving nodes in a high-bandwidth network. On the distributing side, a fundamental problem is grouping and scheduling such tasks so that each one involves sufficient computational cost compared to the task creation and communication costs and other such practical overheads. On the receiving side, an important issue is to have some assurance of the correctness and characteristics of the code received and also of the kind of load the particular task is going to pose, which can be specified by means of certificates. In this paper we present, in a tutorial way, a number of general solutions to these problems, and illustrate them through their implementation in the Ciao multi-paradigm language and program development environment. This system includes facilities for parallel and distributed execution, an assertion language for specifying complex program properties (including safety and resource-related properties), and compile-time and run-time tools for performing automated parallelization and resource control, as well as certification of programs with resource consumption assurances and efficient checking of such certificates.
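A language-agnostic sketch of the granularity-control idea is given below: a task is shipped to a remote worker only when its estimated cost exceeds the task-creation and communication overhead. The actual system is implemented in the Ciao Prolog environment with assertion-based cost analysis; the Python stand-ins, cost model, and overhead constant here are assumptions for illustration.

```python
# Sketch of granularity control for distributed/parallel execution: only spawn a
# task on a worker when its estimated cost exceeds the combined task-creation and
# communication overhead; otherwise run it locally. The cost estimate and the
# OVERHEAD_COST constant are illustrative assumptions.
from concurrent.futures import ProcessPoolExecutor

OVERHEAD_COST = 50_000          # assumed spawning + communication cost

def estimated_cost(task_arg):
    """Stand-in for a compile-time/run-time cost estimate (e.g. from assertions)."""
    return task_arg ** 2

def work(task_arg):
    return sum(i * i for i in range(task_arg))

def run_tasks(args):
    results = {}
    with ProcessPoolExecutor() as pool:
        futures = {}
        for a in args:
            if estimated_cost(a) > OVERHEAD_COST:
                futures[a] = pool.submit(work, a)   # large enough to parallelise
            else:
                results[a] = work(a)                # too small: run locally
        for a, f in futures.items():
            results[a] = f.result()
    return results

if __name__ == "__main__":
    print(run_tasks([10, 100, 1000, 5000]))
```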
Abstract:
This paper presents an ant colony optimization algorithm to sequence mixed-model assembly lines while considering the inventory and the replenishment of components. This is an NP-hard problem that cannot be solved to optimality by exact methods as the size of the problem grows. Groups of specialized ants are implemented to solve the different parts of the problem, with the intent of differentiating each part of the problem. Different types of pheromone structures are created to identify good car sequences and good routes for the component-replenishment vehicle. The contribution of this paper is the collaborative approach of the ACO for the mixed assembly line and the replenishment of components, and the joint solution of the problem.
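The following simplified, single-colony sketch illustrates the pheromone mechanics underlying such an approach; it does not reproduce the paper's collaborative multi-colony design (separate ant groups and pheromone structures for car sequencing and for replenishment routing), and the car classes, spacing objective, and parameters are illustrative assumptions.

```python
# Simplified single-colony ACO sketch for car sequencing. Pheromone is kept per
# (position, car class); ants build sequences probabilistically and the best
# sequence found reinforces its pheromone trail. All values are illustrative.
import random

CARS = ["A", "A", "B", "B", "C"]          # demand: cars of each class to sequence
ALPHA, RHO, N_ANTS, N_ITER = 1.0, 0.1, 20, 50

def cost(seq):
    """Toy objective: penalise identical classes in adjacent positions."""
    return sum(1 for x, y in zip(seq, seq[1:]) if x == y)

def build_sequence(tau):
    remaining, seq = list(CARS), []
    for pos in range(len(CARS)):
        weights = [tau[(pos, c)] ** ALPHA for c in remaining]
        choice = random.choices(range(len(remaining)), weights=weights)[0]
        seq.append(remaining.pop(choice))
    return seq

tau = {(pos, c): 1.0 for pos in range(len(CARS)) for c in set(CARS)}
best = list(CARS)
for _ in range(N_ITER):
    ants = [build_sequence(tau) for _ in range(N_ANTS)]
    best = min(ants + [best], key=cost)
    for key in tau:                      # evaporation
        tau[key] *= (1.0 - RHO)
    for pos, c in enumerate(best):       # reinforce the best sequence found
        tau[(pos, c)] += 1.0 / (1.0 + cost(best))
print(best, cost(best))
```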
Abstract:
The importance of embedded software is growing as it is required for a large number of systems. Devising cheap, efficient and reliable development processes for embedded systems is thus a notable challenge nowadays. Computer processing power is continuously increasing, and as a result, it is currently possible to integrate complex systems in a single processor, which was not feasible a few years ago. Embedded systems may have safety-critical requirements: their failure may result in personal harm or substantial economic loss. The development of these systems requires stringent development processes that are usually defined by suitable standards. In some cases their certification is also necessary. This scenario fosters the use of mixed-criticality systems, in which applications of different criticality levels must coexist in a single system. In these cases, it is usually necessary to certify the whole system, including non-critical applications, which is costly. Virtualization emerges as an enabling technology for dealing with this problem. The system is structured as a set of partitions, or virtual machines, that can be executed with temporal and spatial isolation. In this way, applications can be developed and certified independently.
The development of MCPS (Mixed-Criticality Partitioned Systems) requires additional roles and activities that traditional systems do not require. The system integrator has to define system partitions. Application development has to consider the characteristics of the partition to which it is allocated. In addition, traditional software process models have to be adapted to this scenario. The V-model is commonly used in embedded systems development. It can be adapted to the development of MCPS by enabling the parallel development of applications or adding an additional partition to an existing system. The objective of this PhD is to improve the available technology for MCPS development by providing a framework tailored to the development of this type of system and by defining a flexible and efficient algorithm for automatically generating system partitionings. The goal of the framework is to integrate all the activities required for developing MCPS and to support the different roles involved in this process. The framework is based on MDE (Model-Driven Engineering), which emphasizes the use of models in the development process. The framework provides basic means for modeling the system, generating system partitions, validating the system and generating final artifacts. The framework has been designed to facilitate its extension and the integration of external validation tools. In particular, it can be extended by adding support for additional non-functional requirements and support for final artifacts, such as new programming languages or additional documentation. The framework includes a novel partitioning algorithm. It has been designed to be independent of the types of application requirements and also to enable the system integrator to tailor the partitioning to the specific requirements of a system. This independence is achieved by defining partitioning constraints that must be met by the resulting partitioning. They have sufficient expressive capacity to state the most common constraints and can be defined manually by the system integrator or generated automatically based on the functional and non-functional requirements of the applications. The partitioning algorithm uses system models and partitioning constraints as its inputs. It generates a deployment model that is composed of a set of partitions. Each partition is in turn composed of a set of allocated applications and assigned resources. The partitioning problem, including applications and constraints, is modeled as a colored graph. A valid partitioning is a proper vertex coloring. A specially designed algorithm generates this coloring and is able to provide alternative partitionings if required. The framework, including the partitioning algorithm, has been successfully used in the development of two industrial use cases: the UPMSat-2 satellite and the control system of a wind-power turbine. The partitioning algorithm has been successfully validated by using a large number of synthetic loads, including complex scenarios with more than 500 applications.
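As a minimal illustration of the graph-coloring formulation described in this abstract, the sketch below treats applications as vertices, adds an edge between two applications that a constraint forbids from sharing a partition, and derives partitions from a greedy proper coloring; the application names, conflicts, and the greedy strategy are assumptions and are far simpler than the thesis' actual algorithm and constraint language.

```python
# Minimal sketch of partitioning as graph coloring: vertices are applications,
# an edge joins two applications that a partitioning constraint forbids from
# sharing a partition (e.g. different criticality levels), and each color of a
# proper coloring becomes one partition. Names and conflicts are assumed.
def greedy_partitioning(apps, conflicts):
    """Assign each app the lowest color not used by its conflicting neighbours."""
    color = {}
    for app in sorted(apps, key=lambda a: -sum(a in e for e in conflicts)):
        used = {color[b] for a, b in conflicts if a == app and b in color}
        used |= {color[a] for a, b in conflicts if b == app and a in color}
        c = 0
        while c in used:
            c += 1
        color[app] = c
    partitions = {}                      # group applications by color -> partitions
    for app, c in color.items():
        partitions.setdefault(c, []).append(app)
    return partitions

apps = ["attitude_ctrl", "telemetry", "payload", "housekeeping"]
conflicts = [("attitude_ctrl", "payload"),      # e.g. different criticality levels
             ("attitude_ctrl", "telemetry"),
             ("payload", "housekeeping")]
print(greedy_partitioning(apps, conflicts))
```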
Abstract:
This thesis aims to experiment with the new Windows 10 IoT Core operating system on the Raspberry Pi 2 platform, verifying its compatibility with several commercially available sensors. The study is then applied in a Home Intelligence context to create an agent for managing LED lights, with a view to its integration into the Home Manager prototype system.
Abstract:
"UILU-ENG 77 1711."
Abstract:
Thesis (M.S.)--University of Illinois at Urbana-Champaign.
Abstract:
Bibliography: p. 44.
Abstract:
Thesis (M.S.)--Illinois.