829 results for Conceptual designs
Abstract:
In making the arrangements for the visit of Pope John Paul II to San Antonio, Texas, in September 1987, it was discovered that no comprehensive documents or guidelines were available in the public sector for planning such an event. It was not clear which laws, if any, applied. The literature describes rock concerts, papal masses, and civil disorders; these events are held in stadia and in the open. There was little agreement on what services, if any, were needed to protect the public's health and the environment, or, if needed, how and by whom those services should be provided. A literature review and bibliography are given to provide greater understanding of the variety of mass gatherings and the many factors that impinge on temporary groups while away from their homes. Descriptions of past mass gatherings in terms of personnel ratios are provided. This study develops a conceptual model that delineates some of the known parameters necessary for successfully conducting a mass gathering, and a study of one such site is given. Provisions for public wellness and freedom from disease at a mass gathering include adequate water (fluids), food, sanitary facilities, security, transportation, and medical services. The determination of the adequacy of these provisions is discussed, and methods of determining the use of provided facilities are given.
Abstract:
Treating patients with combined agents is a growing trend in cancer clinical trials, and evaluating the synergism of multiple drugs is often the primary motivation for such drug-combination studies. Focusing on drug-combination studies in early-phase clinical trials, our research comprises three parts: (1) we conduct a comprehensive comparison of four dose-finding designs in the two-dimensional toxicity probability space and propose using the Bayesian model averaging method to overcome the arbitrariness of model specification and enhance the robustness of the design; (2) motivated by a recent drug-combination trial at MD Anderson Cancer Center with a continuous-dose standard-of-care agent and a discrete-dose investigational agent, we propose a two-stage Bayesian adaptive dose-finding design based on an extended continual reassessment method; and (3) by combining phase I and phase II clinical trials, we propose an extension of a single-agent dose-finding design, modeling time-to-event toxicity and efficacy to direct dose finding in two-dimensional drug-combination studies. We conduct extensive simulation studies to examine the operating characteristics of these designs and demonstrate their good performance in a variety of practical scenarios.
Abstract:
My dissertation focuses mainly on Bayesian adaptive designs for phase I and phase II clinical trials. It includes three specific topics: (1) proposing a novel two-dimensional dose-finding algorithm for biological agents, (2) developing Bayesian adaptive screening designs to provide more efficient and ethical clinical trials, and (3) incorporating missing late-onset responses into early stopping decisions. Treating patients with novel biological agents is becoming a leading trend in oncology. Unlike cytotoxic agents, for which toxicity and efficacy monotonically increase with dose, biological agents may exhibit non-monotonic patterns in their dose-response relationships. Using a trial with two biological agents as an example, we propose a phase I/II trial design to identify the biologically optimal dose combination (BODC), defined as the dose combination of the two agents with the highest efficacy and tolerable toxicity. A change-point model is used to reflect the fact that the dose-toxicity surface of the combined agents may plateau at higher dose levels, and a flexible logistic model is proposed to accommodate a possibly non-monotonic dose-efficacy relationship. During the trial, we continuously update the posterior estimates of toxicity and efficacy and assign patients to the most appropriate dose combination. We propose a novel dose-finding algorithm to encourage sufficient exploration of untried dose combinations in the two-dimensional space. Extensive simulation studies show that the proposed design has desirable operating characteristics in identifying the BODC under various patterns of dose-toxicity and dose-efficacy relationships. Trials of combination therapies for the treatment of cancer are playing an increasingly important role in the battle against this disease.
To more efficiently handle the large number of combination therapies that must be tested, we propose a novel Bayesian phase II adaptive screening design to select simultaneously among possible treatment combinations involving multiple agents. Our design formulates the selection procedure as a Bayesian hypothesis testing problem in which the superiority of each treatment combination is equated with a single hypothesis. During the trial, we use the current posterior probabilities of all hypotheses to adaptively allocate patients to treatment combinations. Simulation studies show that the proposed design substantially outperforms the conventional multi-arm balanced factorial trial design: it yields a significantly higher probability of selecting the best treatment while allocating substantially more patients to efficacious treatments, and it provides higher power to identify the best treatment at the end of the trial. The design is most appropriate for trials that combine multiple agents and screen for the efficacious combinations to be investigated further. Phase II studies are usually single-arm trials conducted to test the efficacy of experimental agents and decide whether they are promising enough to be sent to phase III trials. Interim monitoring is employed to stop a trial early for futility and thereby avoid assigning an unacceptable number of patients to inferior treatments. We propose a Bayesian single-arm phase II design with continuous monitoring for estimating the response rate of the experimental drug.
To address the issue of late-onset responses, we use a piecewise exponential model to estimate the hazard function of the time-to-response data and handle the missing responses with a multiple imputation approach. We evaluate the operating characteristics of the proposed method through extensive simulation studies and show that it reduces the total trial duration while yielding desirable operating characteristics across different physician-specified lower bounds on the response rate and different true response rates.
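A minimal sketch of the imputation idea, under assumed numbers: with a piecewise-constant hazard for time to response, a patient still pending at follow-up time t is imputed from the conditional probability of responding by the end of the assessment window. The interval cuts, hazard rates, and 3-month window below are illustrative, not the fitted model from the dissertation.

```python
import math
import random

# Assumed piecewise-constant hazard of response on [0,1), [1,2), [2,3)
# months, within a 3-month response-assessment window.
cuts = [0.0, 1.0, 2.0, 3.0]
rates = [0.3, 0.5, 0.2]  # hypothetical hazard in each interval

def cum_hazard(t):
    """Integrated hazard H(t) under the piecewise-exponential model."""
    h = 0.0
    for lo, hi, lam in zip(cuts[:-1], cuts[1:], rates):
        h += lam * max(0.0, min(t, hi) - lo)
    return h

def survival(t):
    """S(t) = exp(-H(t)): probability no response has occurred by time t."""
    return math.exp(-cum_hazard(t))

def impute_response(followup, rng):
    """Impute a pending binary response for a patient followed `followup`
    months with no response yet: draw from
    P(response by 3 mo | no response by followup)."""
    p = (survival(followup) - survival(3.0)) / survival(followup)
    return rng.random() < p

rng = random.Random(0)
# Repeated draws give the multiple imputations for one pending patient.
imputed = [impute_response(1.5, rng) for _ in range(1000)]
print(sum(imputed) / 1000)  # close to the conditional response probability
```

Each imputed dataset is then analyzed as if complete, which is what lets the monitoring rule act before every patient's response window has elapsed.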
Abstract:
There are two practical challenges in phase I clinical trial conduct: a lack of transparency to physicians, and late-onset toxicity. In my dissertation, Bayesian approaches are used to address these two problems in clinical trial designs. The proposed simple optimal designs cast the dose-finding problem as a decision-making process for dose escalation and de-escalation, and minimize the incorrect-decision error rate in finding the maximum tolerated dose (MTD). For the late-onset toxicity problem, a Bayesian adaptive dose-finding design for drug combinations is proposed. The dose-toxicity relationship is modeled using the Finney model; the unobserved delayed toxicity outcomes are treated as missing data, and Bayesian data augmentation is employed to handle them. Extensive simulation studies examine the operating characteristics of the proposed designs and demonstrate their good performance in a variety of practical scenarios.
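The transparency of such decision-process designs comes from the fact that the escalation rule reduces to comparing the observed DLT rate at the current dose with two fixed boundaries around the target. The sketch below illustrates an interval-style rule of this kind; the target and boundary values are common illustrative defaults, not the dissertation's optimized boundaries.

```python
# Interval-style escalation rule (illustrative, not the dissertation's exact
# design): the decision depends only on the observed DLT rate at the
# current dose relative to two pre-tabulated boundaries.
TARGET = 0.30      # target DLT probability (assumed)
LAMBDA_E = 0.236   # escalate at or below this observed rate (illustrative)
LAMBDA_D = 0.358   # de-escalate at or above this rate (illustrative)

def decision(n_treated, n_dlt):
    """Return 'escalate', 'stay', or 'de-escalate' for the next cohort."""
    rate = n_dlt / n_treated
    if rate <= LAMBDA_E:
        return "escalate"
    if rate >= LAMBDA_D:
        return "de-escalate"
    return "stay"

# A physician can read the whole design off a table of these decisions.
print(decision(3, 0), decision(6, 2), decision(3, 2))
```

Because the boundaries can be tabulated before the trial starts, clinicians can see every possible decision in advance, which is exactly the transparency property the abstract emphasizes.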
Abstract:
Early-phase clinical trial designs have long been a focus of interest for clinicians and statisticians working in oncology. Several standard phase I and phase II designs have been widely implemented in medical practice. For phase I, the most commonly used methods are the 3+3 design and the continual reassessment method (CRM); the newly developed Bayesian model-based mTPI design is now used by an increasing number of hospitals and pharmaceutical companies. This work discusses the advantages and disadvantages of these three leading phase I designs and compares their performance using simulated data, showing that the mTPI design is superior to the 3+3 and CRM designs in most scenarios. The next major part of my work proposes an innovative seamless phase I/II design that allows clinicians to conduct phase I and phase II clinical trials simultaneously, within a fully Bayesian framework. The phase I portion of the design adopts the mTPI method, with the addition of a futility rule that monitors the efficacy of the tested drug. Dose graduation rules allow doses to move forward from the phase I portion of the study to the phase II portion without interrupting the ongoing phase I dose-finding schema. Once a dose graduates to phase II, adaptive randomization is used to allocate patients among treatment arms so that more patients are assigned to the more promising dose(s). Simulations compare this seamless phase I/II design with a recently published phase I/II design and with the conventional separate phase I and phase II designs. The results indicate that the seamless design outperforms the two competing approaches in most scenarios, offering higher power with a smaller sample size, and significantly reduces the overall study time.
Like other early-phase clinical trial designs, the proposed seamless phase I/II design requires that the efficacy and safety outcomes be observable within a short time frame. This limitation can be overcome by using validated surrogate markers for the efficacy and safety endpoints.
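The adaptive randomization step for graduated doses can be sketched as follows, with assumed arm counts: each dose carries a Beta posterior on its response rate, and the next patient is assigned with probability proportional to each arm's estimated chance of having the highest response rate. The arm names and counts are hypothetical.

```python
import random

rng = random.Random(1)

# (responses, failures) per graduated dose -- assumed for illustration.
arms = {"dose_A": (4, 6), "dose_B": (7, 3)}

def prob_best(arms, draws=5000):
    """Estimate P(arm has the highest response rate) for each arm by
    sampling from its Beta(1 + responses, 1 + failures) posterior."""
    wins = {name: 0 for name in arms}
    for _ in range(draws):
        samples = {name: rng.betavariate(1 + r, 1 + f)
                   for name, (r, f) in arms.items()}
        wins[max(samples, key=samples.get)] += 1
    return {name: wins[name] / draws for name in arms}

alloc = prob_best(arms)
# Assign the next patient with probability proportional to alloc.
next_arm = rng.choices(list(alloc), weights=list(alloc.values()))[0]
print(alloc, next_arm)
```

As responses accumulate, the allocation probabilities shift toward the better-performing dose, which is how the design steers more patients to the more promising arm.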
Abstract:
Background: The failure rate of health information systems is high, partly because specific and critical domain requirements are identified and described in a fragmented, incomplete, or incorrect way. To systematically transform work requirements into a real information system, an explicit conceptual framework is essential for summarizing the requirements of the work and guiding system design. Recently, Butler, Zhang, and colleagues proposed a conceptual framework called Work Domain Ontology (WDO) to formally represent users' work. The WDO approach has been successfully demonstrated in a real-world design project on aircraft scheduling. However, as a top-level conceptual framework, WDO has not defined an explicit, well-specified schema (WDOS), it lacks a generalizable, operationalized procedure that can be readily applied to develop a WDO, and it has not been developed for any concrete healthcare domain. These limitations hinder the utility of WDO for real-world information systems in general and health information systems in particular. Objective: The objective of this research is to formalize the WDOS, operationalize a procedure for developing WDO, and evaluate the WDO approach in the Self-Nutrition Management (SNM) work domain. Method: Concept analysis was used to formalize the WDOS. Focus group interviews were conducted to capture concepts in the SNM work domain, and ontology engineering methods were adopted to model the SNM WDO. A subset of the concepts under the primary goal "staying healthy" was transformed into a semi-structured survey to evaluate the acceptance, explicitness, completeness, consistency, and experience dependency of the SNM WDO. Result: Four concepts, "goal, operation, object, and constraint," were identified and formally modeled in the WDOS with definitions and attributes. Seventy-two SNM WDO concepts under the primary goal were selected and transformed into semi-structured survey questions.
The evaluation indicated that the major concepts of the SNM WDO were accepted by 41 overweight subjects. The SNM WDO is generally independent of user domain experience but partially dependent on SNM application experience; 23 of 41 paired concepts showed significant correlations. Two concepts were identified as ambiguous, and eight additional concepts were recommended to improve the completeness of the SNM WDO. Conclusion: The preliminary WDOS, together with an operationalized development procedure, is ready, and the SNM WDO has been developed to guide future SNM application design. This research is an essential step toward Work-Centered Design (WCD).
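For illustration only, the four WDOS concepts named above (goal, operation, object, constraint) could be encoded as a small schema. The field names and the toy SNM instance below are assumptions for the sketch, not the published WDOS.

```python
from dataclasses import dataclass, field

# Hypothetical encoding of the four WDOS concepts; the real schema defines
# formal definitions and attributes that this sketch only gestures at.
@dataclass
class Constraint:
    description: str                # a condition the work must respect

@dataclass
class WorkObject:
    name: str                       # entity the work acts on

@dataclass
class Operation:
    name: str                       # action performed in pursuit of a goal
    acts_on: list[WorkObject] = field(default_factory=list)
    constraints: list[Constraint] = field(default_factory=list)

@dataclass
class Goal:
    name: str                       # e.g. the SNM primary goal
    operations: list[Operation] = field(default_factory=list)

# Toy instance for the Self-Nutrition Management (SNM) domain.
goal = Goal("staying healthy", [
    Operation("track calorie intake",
              acts_on=[WorkObject("food diary")],
              constraints=[Constraint("daily calorie budget")]),
])
print(goal.name, len(goal.operations))
```

The point of such a schema is that every requirement elicited from users can be filed under exactly one of the four concepts, which is what makes the development procedure operationalizable.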
Abstract:
Background: For most cytotoxic and biologic anti-cancer agents, the response rate of the drug is commonly assumed to be non-decreasing with increasing dose. However, an increasing dose does not always yield an appreciable increase in the response rate, especially at high doses of a biologic agent. In a phase II trial, therefore, investigators may wish to test the anti-tumor activity of a drug at more than one dose (often two), instead of only at the maximum tolerated dose (MTD). That way, if the lower dose appears equally effective, it can be recommended for further confirmatory testing in a phase III trial, in view of potential long-term toxicity and cost considerations. A common approach to designing such a phase II trial has been to use an independent (e.g., Simon's two-stage) design at each dose, ignoring prior knowledge about the ordering of the response probabilities at the different doses. Failure to account for this ordering constraint when estimating the response probabilities, however, may result in an inefficient design. In this dissertation, we developed extensions of Simon's optimal and minimax two-stage designs, using both frequentist and Bayesian methods, for two doses with ordered response rates. Methods: Optimal and minimax two-stage designs are proposed for phase II clinical trials in settings where the true response rates at two dose levels are ordered. We borrow strength between doses using isotonic regression and control the joint and/or marginal error probabilities. Bayesian two-stage designs are also proposed under a stochastic ordering constraint. Results: Compared to Simon's designs, when the power and type I error are controlled at the same levels, the proposed frequentist and Bayesian designs reduce the maximum and expected sample sizes. Most of the proposed designs also increase the probability of early termination when the true response rates are poor.
Conclusion: The proposed frequentist and Bayesian designs are superior to Simon's designs in their operating characteristics (expected sample size, and probability of early termination when the response rates are poor). The proposed designs therefore lead to more cost-efficient and ethical trials and may consequently improve and expedite the drug discovery process. They may also be extended to designs for multiple-group trials and drug-combination trials.
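The operating characteristics compared above (probability of early termination and expected sample size) can be computed in closed form for a Simon two-stage design. The sketch below uses design parameters commonly cited as Simon's optimal design for p0 = 0.1 vs p1 = 0.3 with alpha = beta = 0.10; treat the specific numbers as illustrative and check them against Simon's published tables before use.

```python
import math

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p) ** (n - i)
               for i in range(k + 1))

# Two-stage design: stop after stage 1 if responses <= r1 out of n1;
# declare the drug not promising if total responses <= r out of n.
n1, r1, n, r = 12, 1, 35, 5  # illustrative Simon optimal design

def operating_chars(p):
    """Early-termination probability and expected sample size at rate p."""
    pet = binom_cdf(r1, n1, p)          # stop early iff stage-1 X1 <= r1
    en = n1 + (1 - pet) * (n - n1)      # expected total sample size
    return pet, en

pet0, en0 = operating_chars(0.1)  # under the null response rate p0
print(round(pet0, 3), round(en0, 1))
```

The ordered-dose extensions in the dissertation improve exactly these two quantities by borrowing strength between the dose levels, rather than evaluating each dose with an independent design of this form.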
Abstract:
This work seeks to clarify how certain material factors, at particular historical moments, delay the emergence of a theoretical corpus that accounts as accurately as possible for social reality. More precisely, it aims to bring to light the material conditions that hindered the emergence of the concept of a global capitalist system, considering that its real object was already fully present by the second half of the nineteenth century. To this end, the work presents some early sketches of the concept, elaborated from the periphery of the global system, which, despite their pertinent formulation, were blocked in their development by particular political and ideological conditions.
Abstract:
From the Universidad Nacional de Cuyo, working in coordination with the Dirección General de Escuelas of the provincial government, we offer this material to all students who intend to continue on to higher education; it contains tools for leveling knowledge and attaining the basic preparation needed for admission to any program of higher study. The intention of this proposal and of this material is thus to promote equal opportunity in university admission, to create points of articulation between the Polimodal level and the university, and to develop basic competencies through distance education. The model underlying the articulation program between the university and the Polimodal level (reading comprehension) is the result of more than 15 years of work in both research and teaching. This analytical framework has followed the development of linguistic, psycholinguistic, semiotic, and psychological studies over the last twenty years, as well as the results of training experiences and several leveling courses at the Universidad Nacional de Cuyo.
Abstract:
Affiliation: Pagani, María Laura. Universidad Nacional de La Plata. Facultad de Humanidades y Ciencias de la Educación; Argentina.
Abstract:
Affiliation: Gambarotta, Emiliano Matías. Universidad Nacional de La Plata. Facultad de Humanidades y Ciencias de la Educación. Instituto de Investigaciones en Humanidades y Ciencias Sociales (UNLP-CONICET); Argentina.
Abstract:
Experimental activity creates a space in which students make decisions and develop strategies for the scientific treatment of a problem situation. Conceptual content is thus interrelated with the students' activity and guides the solution process and the analysis and interpretation of the results obtained. From this perspective, this research examined the levels of conceptual integration that university engineering students reach in Physics I when they solve a semi-structured experimental problem in particle dynamics. The conceptualization processes developed are analyzed, and modes of reasoning are established from conceptual categories and modalities defined a priori. Finally, based on the features detected, the contributions that emerged from the conceptualization of the experimental activity are identified.
Abstract:
This work mainly attempts a theoretical approach to the concepts of urban disaster, social vulnerability, environmental degradation, and risk, from a perspective that takes the following considerations into account: (1) disasters triggered by natural phenomena are social, economic, and political processes; (2) environmental degradation occurs in the urban sphere; (3) perception and social construction are central to the concept of risk; and (4) vulnerability is a historical product related to natural phenomena. Within this context, the work highlights the relevance of a continuous, integrated management process that includes strategies to reduce vulnerability and, consequently, the risk of disasters triggered by natural phenomena. Finally, a closing comment and a bibliography on the subject are provided, which we hope will be useful for later work.
Abstract:
Masten and Gewirtz (2006) argue that although stories of a person's triumph over adversity have always fascinated people, the scientific study of resilience began between the 1960s and 1970s. In 1990 Rutter held that interest in the characteristics of people who develop 'resilience' despite adverse child-rearing conditions, or in circumstances that raise the risk of psychopathology, came from three sources: first, the growth and consistency of empirical data on individual differences in high-risk child populations. The second originated in the research on temperament carried out in the USA in the 1970s; to understand the Anglo-Saxon idea of temperament, one must think of 'tendencies to develop personality in a certain way' (Cyrulnik, 2008: 43). The third line originated in the observation of the different ways people face life experiences (Becoña, 2006). The first generation of studies were mutually consistent, suggesting the powerful influence of common adaptive processes and the interplay of genes and experience in child development (Masten and Gewirtz, 2006). One of these pioneering studies was conducted by Werner and Smith with 698 children born in Kauai (Hawaii) in 1955. The entire study population was at risk, but roughly one third was subject to multiple high-risk factors, namely poverty, parental discord, parental psychopathology, and perinatal stress. The cohort was followed until age 40. One finding was that many of the young people in the high-risk subgroup who had developed problems in adolescence had become adults with stable, satisfying relationships in family and work.
Only one in six adults showed problems of various kinds: poverty, domestic conflict, violence, substance abuse, mental health problems, and low self-esteem (Benard, 2004). Another seminal line of resilience research grew out of the search for the causes of mental illness. Researchers focused on the children of mentally ill parents and noticed that many of these children developed well and showed no mental health problems. Following an integrative, collaborative approach between clinicians and child-development specialists, they built a full research program on resilience that lasted several decades (Masten and Powell, 2003). These early studies focused on the qualities of resilient children, viewed as attributes of the children themselves; only later was the relation to characteristics of their families and communities examined (Kotliarenco, Cáceres, and Fontecilla, 1997). For quite some time resilience was thought to be equivalent to invulnerability, and although that term fell out of use in the 1970s, resilience and vulnerability are still considered opposite poles of a single continuum. Thus recent reviews state that 'vulnerability refers to an increased probability of a negative outcome, typically as a result of exposure to risk,' while 'resilience refers to avoiding the problems associated with being vulnerable,' although it is generally acknowledged that the concept is used to refer to 'positive and effective coping in response to risk or adversity' (Becoña, 2006: 131). In a broad sense, vulnerability affects any system with a minimum of organization, whether natural, artificial, or social.
Any epistemological analysis of this concept must begin by recognizing that the diversity of criteria reflects the different units of analysis that researchers delimit, and that their definitions depend on the articulating elements taken into consideration in each domain. In our case, an epidemiological investigation of child mental health, we start from two basic assumptions: (a) vulnerability is a condition of all human beings, but it does not affect everyone equally or in the same way; and (b) all vulnerability is psychosocial vulnerability, since it has a direct or indirect impact on the subjects under study. The examination would be incomplete, however, if psychosocial vulnerability were not first distinguished from the concepts of resilience and trauma, with which it is associated in health and education. Some simplifying readings take resilience to be the sum of protective factors, and vulnerability the sum of risk factors. Given the abundance of research on these topics, we limit ourselves to a brief conceptual approximation to each of them and their interconnections.