809 results for New Media and Publication Design Program
Abstract:
From Bush's September 20, 2001 "War on Terror" speech to Congress to President-Elect Barack Obama's acceptance speech on November 4, 2008, the U.S. Army produced visual recruitment material that addressed the concern of falling enlistment numbers—due to the prolonged and difficult war in Iraq—with quickly evolving and compelling rhetorical appeals: from the introduction of an "Army of One" (2001) to "Army Strong" (2006); from messages focused on education and individual identity to high-energy adventure and simulated combat scenarios, distributed through everything from printed posters and music videos to first-person tactical-shooter video games. These highly polished, professional visual appeals, introduced to the American public during an unpopular war fought by volunteers, provide rich subject matter for research and analysis. This dissertation takes a multidisciplinary approach to the visual media utilized in the Army's recruitment efforts during the War on Terror, focusing on American myths—as defined by Barthes—and how these myths are both revealed and reinforced through design across media platforms. Placing each selection in its historical context, this dissertation analyzes how printed materials changed as the War on Terror continued. It examines the television ad that introduced "Army Strong" to the American public, considering how the combination of moving image, text, and music structures the message and the way we receive it. This dissertation also analyzes the video game America's Army, focusing on how the interactions of the human player and the computer-generated player combine to enhance the persuasive qualities of the recruitment message. Each chapter discusses how the design of the particular medium facilitates the viewer's engagement and interactivity.
The conclusion considers what recruitment material produced during this period suggests about the persuasive strategies of different media and how they create distinct relationships with their spectators. It also addresses how theoretical frameworks and critical concepts from a variety of disciplines can be combined to analyze recruitment media using a Selber-inspired three-literacy framework (functional, critical, rhetorical), and how this framework can contribute to the multimodal classroom by allowing instructors and students to conduct comparative analyses of multiple forms of visual media with similar content.
Abstract:
Students are now involved in a vastly different textual landscape than many English scholars, one that relies on the "reading" and interpretation of multiple channels of simultaneous information. In response to these new kinds of literate practices, my dissertation adds to the growing body of research on multimodal literacies, narratology in new media, and rhetoric through an examination of the place of video games in English teaching and research. I describe in this dissertation a hybridized theoretical basis for incorporating video games in English classrooms. This framework for textual analysis includes elements from narrative theory in literary study, rhetorical theory, and literacy theory; when these are combined to account for the multiple modalities and complexities of gaming, they can provide new insights about those theories and practices across all kinds of media, whether written texts, films, or video games. In creating this framework, I hope to encourage students to view texts from a meta-level perspective, encompassing textual construction, use, and interpretation. To foster meta-level learning in an English course, I use specific theoretical frameworks from the fields of literary studies, narratology, film theory, aural theory, reader-response criticism, game studies, and multiliteracies theory to analyze a particular video game: World of Goo. These theoretical frameworks inform pedagogical practices used in the classroom for textual analysis of multiple media. Examining a video game from these perspectives, I use analytical methods from each, including close reading, explication, textual analysis, and individual elements of multiliteracies theory and pedagogy. In undertaking an in-depth analysis of World of Goo, I demonstrate the possibilities for classroom instruction with a complex blend of theories and pedagogies in English courses.
This blend of theories and practices is meant to foster literacy learning across media, helping students develop metaknowledge of their own literate practices in multiple modes. Finally, I outline a design for a multiliteracies course that would allow English scholars to use video games along with other texts to interrogate texts as systems of information. In doing so, students can hopefully view and transform systems in their own lives as audiences, citizens, and workers.
Abstract:
This dissertation addresses the need for a strategy that will help readers new to new media texts interpret such texts. While scholars in multimodal and new media theory posit rubrics that offer ways to understand how designers use the materialities and media found in overtly designed, new media texts (see, e.g., Wysocki, 2004a), these strategies do not account for how readers have to make meaning from those texts. In this dissertation, I discuss how these theories, such as Lev Manovich's (2001) five principles for determining the new media potential of texts and Gunther Kress and Theo van Leeuwen's (2001) four strata of designing multimodal texts, are inadequate to the job of helping readers understand new media from a rhetorical perspective. I also explore how literary theory, specifically Wolfgang Iser's (1978) description of acts of interpretation, can help audiences understand why readers are often unable to interpret the multiple, unexpected modes of communication used in new media texts. Rhetorical theory, explored in a discussion of Sonja Foss's (2004) units of analysis, is helpful in bringing the reader into a situated context with a new media text, although these units of analysis, like Iser's process, suggest that a reader has some prior experience interpreting a text-as-artifact. Because of this assumption of knowledge put forth by all of the theories explored within, I argue that none alone is sufficient to help readers engage with and interpret new media texts. However, I argue that a heuristic which combines elements from each of these theories, as well as additional ones, is more useful for readers who are new to interpreting the multiple modes of communication that are often used in unconventional ways in new media texts. I describe that heuristic in the final chapter and discuss how it can be useful for a range of texts besides those labelled new media.
Abstract:
Moisture-induced distress has been the prevalent distress type affecting the deterioration of both asphalt and concrete pavement sections. While various surface techniques have been employed over the years to minimize the ingress of moisture into the pavement structural sections, subsurface drainage components like open-graded base courses remain the best alternative for minimizing the time the pavement structural sections are exposed to saturated conditions. This research therefore focuses on assessing the performance and cost-effectiveness of pavement sections containing both treated and untreated open-graded aggregate base materials. Three common roadway aggregates, comprising two virgin aggregates and one recycled aggregate, were investigated using four open-ended gradations and two binder types. Laboratory tests were conducted to determine the hydraulic, mechanical and durability characteristics of treated and untreated open-graded mixes made from these three aggregate types. Results of the experimental program show that for the same gradation and mix design types, limestone samples have the greatest drainage capacity, stability under traffic loads and resistance to degradation from environmental conditions like freeze-thaw. However, depending on the gradation and mix design used, all three aggregate types, namely limestone, natural gravel and recycled concrete, can meet the minimum coefficient of hydraulic conductivity required for good drainage in most pavements. Test results for both asphalt- and cement-treated open-graded samples indicate that an air void content in the range of 15-25% will produce a treated open-graded base course with sufficient drainage capacity and long-term stability under both traffic and environmental loads.
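The drainage-capacity comparisons above ultimately rest on Darcy's law, q = k·i·A. As a minimal illustrative sketch (the conductivity, gradient, and cross-section values below are hypothetical, not the study's data):

```python
def darcy_discharge(k, gradient, area):
    """Darcy's law: discharge q = k * i * A through a porous layer.
    k        -- coefficient of hydraulic conductivity (m/day)
    gradient -- hydraulic gradient i (dimensionless, e.g. the base slope)
    area     -- flow cross-section A (m^2)
    """
    return k * gradient * area

# hypothetical open-graded base: k = 350 m/day, 2% slope, 1 m^2 cross-section
q = round(darcy_discharge(350.0, 0.02, 1.0), 6)
print(q)  # 7.0 m^3/day
```

A higher-conductivity mix drains the same section proportionally faster, which is why the minimum-conductivity criterion mentioned above translates directly into a minimum acceptable discharge.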
Using the new Mechanistic-Empirical Pavement Design Guide (MEPDG) software, computer simulations of pavement performance were conducted on pavement sections containing these open-graded aggregate base materials to determine how sensitive the MEPDG-predicted pavement performance is to drainage. Using three truck traffic levels and four climatic regions, the results of the computer simulations indicate that the predicted performance was not sensitive to the drainage characteristics of the open-graded base course. Based on the MEPDG-predicted pavement performance, the cost-effectiveness of the pavement sections with open-graded base was computed on the assumption that the increased service life experienced by these sections was attributable to the positive effects of subsurface drainage. The two cost analyses gave contrasting results, with one indicating that the inclusion of open-graded base courses can lead to substantial savings.
Abstract:
The Mobile Mesh Network-based In-Transit Visibility (MMN-ITV) system provides a global real-time tracking capability for logistics systems. In-transit containers form a multi-hop mesh network that forwards tracking information to nearby sinks, which in turn deliver the information to the remote control center via satellite. The fundamental challenge for the MMN-ITV system is the energy constraint of the battery-operated containers. Coupled with the unique mobility pattern, the cross-MMN behavior, and the large spanned area, this makes it necessary to investigate energy-efficient communication in the MMN-ITV system thoroughly. First, this dissertation models energy-efficient routing under the unique pattern of the cross-MMN behavior. A new modeling approach, the pseudo-dynamic modeling approach, is proposed to measure the energy efficiency of routing methods in the presence of the cross-MMN behavior. With this approach, it is shown that shortest-path routing and load-balanced routing are energy-efficient in mobile networks and static networks, respectively. For an MMN-ITV system with both mobile and static MMNs, an energy-efficient routing method, energy-threshold routing, is proposed to achieve the best trade-off between them. Second, due to the cross-MMN behavior, neighbor discovery is executed frequently to help new containers join the MMN and hence consumes a similar amount of energy to the data communication itself. By exploiting the unique pattern of the cross-MMN behavior, this dissertation proposes energy-efficient neighbor-discovery wakeup schedules that save up to 60% of the energy spent on neighbor discovery. Vehicular Ad Hoc Network (VANET)-based inter-vehicle communication is increasingly believed to enhance traffic safety and transportation management at low cost. The end-to-end delay is critical for time-sensitive safety applications in VANETs and can be a decisive performance metric for them.
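The abstract does not spell out the energy-threshold routing rule, but one plausible reading is: prefer the shortest path while excluding relays whose residual energy has fallen below a threshold, and fall back to the unconstrained shortest path when no such route exists. A minimal sketch under that assumption (the graph, energy levels, and threshold are invented for illustration):

```python
from collections import deque

def energy_threshold_route(graph, energy, src, dst, threshold):
    """Hypothetical energy-threshold routing sketch: shortest path (BFS)
    that relays only through nodes with residual energy >= threshold,
    falling back to a plain shortest path if no such route exists."""
    def bfs(allowed):
        prev = {src: None}
        q = deque([src])
        while q:
            u = q.popleft()
            if u == dst:                      # reconstruct path back to src
                path = []
                while u is not None:
                    path.append(u)
                    u = prev[u]
                return path[::-1]
            for v in graph[u]:
                if v not in prev and allowed(v):
                    prev[v] = u
                    q.append(v)
        return None

    strict = bfs(lambda v: v == dst or energy[v] >= threshold)
    return strict or bfs(lambda v: True)      # fall back if over-constrained

graph = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
energy = {"A": 5, "B": 1, "C": 4, "D": 5}     # B is nearly depleted
print(energy_threshold_route(graph, energy, "A", "D", threshold=2))
# routes around the low-energy node B: ['A', 'C', 'D']
```

The threshold parameter is what would tune the trade-off the dissertation describes between pure shortest-path routing (mobile MMNs) and load-balanced routing (static MMNs).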
This dissertation presents a complete analytical model to evaluate the end-to-end delay against the transmission range and the packet arrival rate. This model shows a significant end-to-end delay increase from non-saturated to saturated networks. It hence suggests that distributed power control and admission control protocols for VANETs should aim at improving the real-time capacity (the maximum packet generation rate without causing saturation), rather than the delay itself. Based on this model, it can be determined that adopting a uniform transmission range for every vehicle may hinder delay performance improvement, since it does not allow the coexistence of short path lengths and low interference. Clusters are proposed to configure non-uniform transmission ranges for the vehicles. Analysis and simulation confirm that such a configuration can enhance the real-time capacity. In addition, it provides an improved trade-off between the end-to-end delay and the network capacity. A distributed clustering protocol with minimum message overhead is proposed, which achieves low convergence time.
Abstract:
After long deliberations, the European Community (EC) has completed the reform of its audiovisual media regulation. The paper examines the main tenets of this reform with particular focus on its implications for the diversity of cultural expressions in the European media landscape. It also takes into account the changed patterns of consumer and business behaviour due to the advances in digital media and their wider spread in society. The paper criticises the somewhat unimaginative approach of the EC to new media and the political (and at times protectionist) considerations behind some of the Directive's provisions.
Abstract:
Much has happened in the past fifty years, and the broadcasting system and in fact the entire media landscape have changed in many significant ways. Yet the debate on the role of public service media and the involvement of the state in them perseveres. It has indeed been reinvigorated by the tectonic shifts in media production, distribution, access and consumption caused by digital technologies in general, and the Internet in particular. The gist of the debates has, however, curiously remained almost the same: it is still focused on a set of economic arguments that call for state intervention in public media and, not unimportantly, on the various political interpretations of these economic arguments. In Europe, the debate has another essential core too, as Public Service Broadcasting (PSB) has traditionally been entrusted to serve some higher goals intrinsically related to key democratic and cultural processes. Accordingly, PSB in Western Europe has developed as the core media institution at the national level and has become deeply embedded in many facets of the nation's economic, political, social and cultural life. Against the backdrop of PSB's history, its vital tasks in society, and the dramatic changes brought about by the digitally networked environment, the question of the future of PSB and its transition into Public Service Media (PSM) is very interesting, to say the least, and highly challenging at the same time. The book by Karen Donders, Public Service Media and Policy in Europe (Palgrave, 2012), makes an essential contribution to these complex debates, and more importantly, adds some new value to an otherwise saturated discourse.
Abstract:
IMPORTANCE The discontinuation of randomized clinical trials (RCTs) raises ethical concerns and often wastes scarce research resources. The epidemiology of discontinued RCTs, however, remains unclear. OBJECTIVES To determine the prevalence, characteristics, and publication history of discontinued RCTs and to investigate factors associated with RCT discontinuation due to poor recruitment and with nonpublication. DESIGN AND SETTING Retrospective cohort of RCTs based on archived protocols approved by 6 research ethics committees in Switzerland, Germany, and Canada between 2000 and 2003. We recorded trial characteristics and planned recruitment from included protocols. Last follow-up of RCTs was April 27, 2013. MAIN OUTCOMES AND MEASURES Completion status, reported reasons for discontinuation, and publication status of RCTs as determined by correspondence with the research ethics committees, literature searches, and investigator surveys. RESULTS After a median follow-up of 11.6 years (range, 8.8-12.6 years), 253 of 1017 included RCTs were discontinued (24.9% [95% CI, 22.3%-27.6%]). Only 96 of 253 discontinuations (37.9% [95% CI, 32.0%-44.3%]) were reported to ethics committees. The most frequent reason for discontinuation was poor recruitment (101/1017; 9.9% [95% CI, 8.2%-12.0%]). In multivariable analysis, industry sponsorship vs investigator sponsorship (8.4% vs 26.5%; odds ratio [OR], 0.25 [95% CI, 0.15-0.43]; P < .001) and a larger planned sample size in increments of 100 (-0.7%; OR, 0.96 [95% CI, 0.92-1.00]; P = .04) were associated with lower rates of discontinuation due to poor recruitment. Discontinued trials were more likely to remain unpublished than completed trials (55.1% vs 33.6%; OR, 3.19 [95% CI, 2.29-4.43]; P < .001). CONCLUSIONS AND RELEVANCE In this sample of trials based on RCT protocols from 6 research ethics committees, discontinuation was common, with poor recruitment being the most frequently reported reason. 
Greater efforts are needed to ensure the reporting of trial discontinuation to research ethics committees and the publication of results of discontinued trials.
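The headline figures above can be re-derived from the quoted counts and proportions; the interval shape matches a Wilson score interval, and the crude sponsorship odds ratio happens to agree with the reported (multivariable-adjusted) value, though adjusted and crude ORs need not coincide in general:

```python
import math

def wilson_ci(events, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = events / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

def odds(p):
    return p / (1 - p)

# Discontinuation prevalence: 253 of 1017 included RCTs
lo, hi = wilson_ci(253, 1017)
print(f"{253/1017:.1%} [{lo:.1%}-{hi:.1%}]")  # 24.9% [22.3%-27.6%], as reported

# Crude OR for discontinuation due to poor recruitment,
# industry- vs investigator-sponsored trials (8.4% vs 26.5%)
print(round(odds(0.084) / odds(0.265), 2))    # 0.25, matching the reported OR
```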
Abstract:
Although it is known that tumor necrosis factor receptor (TNFR) signaling plays a crucial role in vascular integrity and homeostasis, the contribution of each receptor to these processes and the signaling pathways involved are still largely unknown. Here, we show that targeted gene knockdown of TNFRSF1B in zebrafish embryos results in the induction of a caspase-8, caspase-2 and P53-dependent apoptotic program in endothelial cells that bypasses caspase-3. Furthermore, the simultaneous depletion of TNFRSF1A or the activation of NF-κB rescues endothelial cell apoptosis, indicating that a signaling balance between both TNFRs is required for endothelial cell integrity. In endothelial cells, TNFRSF1A signals apoptosis through caspase-8, whereas TNFRSF1B signals survival via NF-κB. Similarly, TNFα promotes the apoptosis of human endothelial cells through TNFRSF1A and triggers caspase-2 and P53 activation. We have identified an evolutionarily conserved apoptotic pathway involved in vascular homeostasis that provides new therapeutic targets for the control of inflammation- and tumor-driven angiogenesis.
Abstract:
Objective. This research study had two goals: (1) to describe resource consumption patterns for Medi-Cal children with cystic fibrosis, and (2) to explore the feasibility, from a rate design perspective, of developing specialized managed care plans for such a special needs population. Background. Children with special health care needs (CSHN) comprise about 2% of the California Medicaid pediatric population. CSHN have rare but serious health problems, such as cystic fibrosis. Medicaid programs, including Medi-Cal, are enrolling more and more beneficiaries in managed care to control costs. CSHN, however, do not fit the wellness model underlying most managed care plans. Child health advocates believe that both efficiency and quality will suffer if CSHN are removed from regionalized special care centers and scattered among general purpose plans. They believe that CSHN should be "carved out" from enrollment in general plans. One alternative is the Specialized Managed Care Plan, tailored for CSHN. Methods. The study population consisted of children under age 21 with CF who were eligible for Medi-Cal and the California Children's Services program (CCS) during 1991. Health Care Financing Administration (HCFA) Medicaid Tape-to-Tape data were analyzed as part of a California Children's Hospital Association (CCHA) project. Results. Mean Medi-Cal expenditures per month enrolled were $2,302 for 457 CF children, compared to about $1,270 for all 47,000 CCS special needs children and roughly $60 for almost 2.6 million "regular needs" children. For CF children, inpatient care (80%) and outpatient drugs (9%) were the major cost drivers, with all outpatient visits comprising only 2% of expenditures. About one-third of CF children were eligible due to AFDC (Aid to Families with Dependent Children). Age group explained about 17% of all expenditure variation. Regression analysis was used to select the best capitation rate structure (rate cells by age and eligibility group).
Sensitivity analysis estimated moderate financial risk for a statewide plan (360 enrollees), but severe risk for single-county implementation due to small numbers of children. Conclusions. Study results support the carve-out of CSHN due to unique expenditure patterns. The Specialized Managed Care Plan concept appears feasible from a rate design perspective given sufficient enrollees.
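The sensitivity-analysis finding that financial risk is moderate for a statewide plan (360 enrollees) but severe for a single county follows from how the variance of a plan's average cost shrinks with enrollment. A purely illustrative Monte Carlo sketch, assuming a hypothetical right-skewed (lognormal) annual cost distribution calibrated only to the reported $2,302/month mean (the spread parameter is invented, not the study's):

```python
import math
import random
import statistics

random.seed(42)

SIGMA = 1.0                               # assumed log-scale spread (hypothetical)
MEAN = 2302 * 12                          # reported mean annual cost per CF child
MU_LOG = math.log(MEAN) - SIGMA**2 / 2    # lognormal location giving that mean

def plan_risk(n_enrollees, n_sims=2000):
    """Average relative deviation of a plan's mean cost from the capitation
    rate, under the assumed per-enrollee cost distribution."""
    devs = []
    for _ in range(n_sims):
        costs = [random.lognormvariate(MU_LOG, SIGMA) for _ in range(n_enrollees)]
        devs.append(abs(statistics.mean(costs) - MEAN) / MEAN)
    return statistics.mean(devs)

print(plan_risk(360))  # statewide plan: modest deviation from the rate
print(plan_risk(30))   # small single-county plan: far larger deviation
```

Because the standard error of the plan-level mean falls roughly with the square root of enrollment, a county plan a tenth the size faces several times the relative financial risk at the same capitation rate.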
Abstract:
The relationship between serum cholesterol and cancer incidence was investigated in the population of the Hypertension Detection and Follow-up Program (HDFP). The HDFP was a multi-center trial designed to test the effectiveness of a stepped program of medication in reducing mortality associated with hypertension. Over 10,000 participants, ages 30-69, were followed with clinic and home visits for a minimum of five years. Cancer incidence was ascertained from existing study documents, which included hospitalization records, autopsy reports and death certificates. During the five years of follow-up, 286 new cancer cases were documented. The distribution of sites and total number of cases were similar to those predicted using rates from the Third National Cancer Survey. A non-fasting baseline serum cholesterol level was available for most participants. Age-, sex-, and race-specific five-year cancer incidence rates were computed for each cholesterol quartile. Rates were also computed by smoking status, education status, and percent-ideal-weight quartiles. In addition, these and other factors were investigated using the multiple logistic model. For all cancers combined, a significant inverse relationship existed between baseline serum cholesterol levels and cancer incidence. Previously documented associations between smoking, education and cancer were also demonstrated but did not account for the relationship between serum cholesterol and cancer. The relationship was more evident in males than in females, but this was felt to reflect the different distribution of specific cancer sites between the two sexes. The inverse relationship existed for all specific sites investigated (except breast), although statistical significance was reached only for prostate carcinoma. Analyses excluding cases diagnosed during the first two years of follow-up still yielded an inverse relationship.
Life table analysis indicated that competing risks during the period of follow-up did not account for the existence of an inverse relationship. It is concluded that a weak inverse relationship does exist between serum cholesterol and cancer incidence for many but not all cancer sites. This relationship is not due to confounding by other known cancer risk factors, competing risks or persons entering the study with undiagnosed cancer. Not enough information is available at the present time to determine whether this relationship is causal, and further research is suggested.
Abstract:
Acritarchs have received limited attention in palynological studies of the Cenozoic, although they have much potential both for refining Neogene and Quaternary stratigraphy, especially in mid- and high northern latitudes, and for developing palaeoceanographical reconstructions. Here we formally describe and document the stratigraphical and palaeotemperature ranges (from foraminiferal Mg/Ca) of four new acritarch species: Cymatiosphaera? aegirii sp. nov., Cymatiosphaera? fensomei sp. nov., Cymatiosphaera? icenorum sp. nov. and Lavradosphaera canalis sp. nov. In reviewing the stratigraphical distributions of all species of the genus Lavradosphaera De Schepper & Head, 2008, we demonstrate their correlation potential between the North Atlantic and Bering Sea in the Pliocene. Additionally, Lavradosphaera lucifer De Schepper & Head, 2008 and Lavradosphaera canalis sp. nov., while not themselves overlapping stratigraphically, have morphological intermediates that do partially overlap and may represent an evolutionary trend consequent upon climate cooling in the Late Pliocene. Finally, we show that the acritarchs presented here reached their highest abundances in the eastern North Atlantic, in surface-water temperatures not very different from those of today.
Abstract:
Introduction and motivation: A wide variety of organisms have developed internal biomolecular clocks in order to adapt to cyclic changes of the environment. Clock operation involves genetic networks. These genetic networks have to be modeled in order to understand the underlying mechanism of oscillations and to design new synthetic cellular clocks. This doctoral thesis has resulted in two contributions to the fields of genetic clocks and systems and synthetic biology generally. The first contribution is a new genetic circuit model that exhibits an oscillatory behavior through catalytic RNA molecules. The second and major contribution is a new genetic circuit model demonstrating that a repressor molecule acting on the positive feedback of a self-activating gene produces reliable oscillations. First contribution: A new model of a synthetic genetic oscillator based on a typical two-gene motif with one positive and one negative feedback loop is presented. The originality is that the repressor is a catalytic RNA molecule rather than a protein or a non-catalytic RNA molecule. This catalytic RNA is a ribozyme that acts post-transcriptionally by binding to and cleaving target mRNA molecules. This genetic clock involves just two genes, an mRNA and an activator protein, apart from the ribozyme. Parameter values that produce a circadian period in both deterministic and stochastic simulations have been chosen as an example of clock operation. The effects of the stochastic fluctuations are quantified by a period histogram and autocorrelation function. The conclusion is that catalytic RNA molecules can act as repressor proteins and simplify the design of genetic oscillators. Second and major contribution: It is demonstrated that a self-activating gene in conjunction with a simple negative interaction can easily produce robust oscillations. This model has been studied and mathematically validated. The model is comprised of two clearly distinct parts.
The first is a positive feedback created by a protein that binds to the promoter of its own gene and activates the transcription. The second is a negative interaction in which a repressor molecule prevents this protein from binding to its promoter. A stochastic study shows that the system is robust to noise. A deterministic study identifies that the oscillator dynamics are mainly driven by two types of biomolecules: the protein, and the complex formed by the repressor and this protein. The main conclusion of this study is that a simple and usual negative interaction, such as degradation, sequestration or inhibition, acting on the positive transcriptional feedback of a single gene is a sufficient condition to produce reliable oscillations. One gene is enough and the positive transcriptional feedback signal does not need to activate a second repressor gene. At the genetic level, this means that an explicit negative feedback loop is not necessary. Unlike many genetic oscillators, this model needs neither cooperative binding reactions nor the formation of protein multimers. Applications and future research directions: Recently, RNA molecules have been found to play many new catalytic roles. The first oscillatory genetic model proposed in this thesis uses ribozymes as repressor molecules. This could provide new synthetic biology design principles and a better understanding of cellular clocks regulated by RNA molecules. The second genetic model proposed here involves only a repression acting on a self-activating gene and produces robust oscillations. Unlike current two-gene oscillators, this model surprisingly does not require a second repressor gene. This result could help to clarify the design principles of cellular clocks and constitute a new efficient tool for engineering synthetic genetic oscillators.
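The thesis's own circuit equations are not reproduced in the abstract, so they cannot be sketched faithfully here. As a generic stand-in, the workflow it describes — deterministically integrating an oscillator model and then quantifying its period — can be illustrated with the classic Brusselator, a textbook two-variable chemical oscillator (not the thesis's gene circuit):

```python
def brusselator(a=1.0, b=3.0, dt=0.01, t_end=100.0):
    """Classical RK4 integration of the Brusselator:
    dx/dt = a - (b + 1) x + x^2 y,   dy/dt = b x - x^2 y.
    Sustained oscillations require b > 1 + a^2."""
    def f(x, y):
        return a - (b + 1) * x + x * x * y, b * x - x * x * y
    x, y = 1.0, 1.0
    xs = []
    for _ in range(int(t_end / dt)):
        k1x, k1y = f(x, y)
        k2x, k2y = f(x + dt / 2 * k1x, y + dt / 2 * k1y)
        k3x, k3y = f(x + dt / 2 * k2x, y + dt / 2 * k2y)
        k4x, k4y = f(x + dt * k3x, y + dt * k3y)
        x += dt / 6 * (k1x + 2 * k2x + 2 * k3x + k4x)
        y += dt / 6 * (k1y + 2 * k2y + 2 * k3y + k4y)
        xs.append(x)
    return xs

xs = brusselator()
late = xs[len(xs) // 2:]                     # discard the transient
peaks = [i for i in range(1, len(late) - 1)
         if late[i - 1] < late[i] > late[i + 1]]
period = (peaks[-1] - peaks[0]) * 0.01 / (len(peaks) - 1)
print(round(period, 1))                      # roughly 7 time units per cycle
```

The same peak-to-peak measurement, repeated over stochastic runs, is what produces the period histograms the thesis uses to quantify the effect of noise.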
Possible follow-on research directions are: (1) validating both models in vivo and in vitro, (2) exploring the potential of the second model as a genetic memory, (3) investigating new genetic oscillators regulated by non-coding RNAs, and (4) designing a biosensor of positive feedbacks in genetic networks based on the operation of the second model.