958 results for Recursive programming
Abstract:
Epidemiological studies have led to the hypothesis that major risk factors for developing diseases such as hypertension, cardiovascular disease and adult-onset diabetes are established during development. This developmental programming hypothesis proposes that exposure to an adverse stimulus or insult at critical, sensitive periods of development can induce permanent alterations in normal physiological processes that lead to increased disease risk later in life. For cancer, inheritance of a tumor suppressor gene defect confers a high relative risk for disease development. However, these defects are rarely 100% penetrant. Traditionally, gene-environment interactions are thought to contribute to the penetrance of tumor suppressor gene defects by facilitating or inhibiting the acquisition of additional somatic mutations required for tumorigenesis. The studies presented herein identify developmental programming as a distinctive type of gene-environment interaction that can enhance the penetrance of a tumor suppressor gene defect in adult life. Using rats predisposed to uterine leiomyoma due to a germ-line defect in one allele of the tuberous sclerosis complex 2 (Tsc-2) tumor suppressor gene, these studies show that early-life exposure to the xenoestrogen diethylstilbestrol (DES) during development of the uterus increased tumor incidence, multiplicity and size in genetically predisposed animals, but failed to induce tumors in wild-type rats. Uterine leiomyomas are ovarian-hormone dependent tumors that develop from the uterine myometrium. DES exposure was shown to developmentally program the myometrium, causing increased expression of estrogen-responsive genes prior to the onset of tumors. Loss of function of the normal Tsc-2 allele remained the rate-limiting event for tumorigenesis; however, tumors that developed in exposed animals displayed an enhanced proliferative response to ovarian steroid hormones relative to tumors that developed in unexposed animals. Furthermore, the studies presented herein identify developmental periods during which target tissues are maximally susceptible to developmental programming. These data suggest that exposure to environmental factors during critical periods of development can permanently alter normal physiological tissue responses and thus lead to increased disease risk in genetically susceptible individuals.
Abstract:
Childhood overweight and obesity are two major public health problems that are of economic and medical concern in the world today (Lobstein, Baur, & Uauy, 2004). Overweight conditions in childhood are important because they are widely prevalent, serious, and carry lifetime consequences for health and well-being (Lobstein et al., 2004). Several studies have shown an association between television viewing and obesity in all age groups (Caroli, Argentieri, Cardone, & Masi, 2004; Harper, 2006; Vandewater & Huang, 2006; Wiecha et al., 2006). One mechanism that potentially links television viewing to childhood obesity is food advertising (Story, 2003).
The purpose of this study was to examine the types of foods advertised on children's television programming and to determine whether there have been any changes in the number and types of commercials over the last 13 years. In addition, the food content of the advertisements was compared to the 2005 Dietary Guidelines to determine if the foods targeted were consistent with the current recommendations. Finally, each television network was analyzed individually to determine any differences between advertising on cable and regular programming.
A descriptive analysis was conducted on the most commonly advertised commercials during children's television programming on Saturday morning from 7 a.m. to 10:30 a.m. A total of 10 major television networks were viewed on three different Saturday mornings during June and July 2007. Commercial advertising accounted for approximately 19% of children's total viewing time. Of the 3,185 commercials, 28.5% were for foods, 67.7% were for non-food items, and 3.8% were PSAs. On average, there were 30 commercial advertisements and PSAs per hour, of which approximately nine were for food.
Of the 907 food advertisements, 72.0% were for foods classified in the fats, oils, and sugar group. The next largest group (17.3%) was for restaurant food, of which 15.3% were for unhealthy/fast-food restaurant fare. The most frequently advertised food product on Saturday morning television was regular cereal, accounting for 43.9% of all food advertisements.
Cable and regular programming stations varied slightly in the amount, length, and category of commercials. Cable television had about half as many commercials and PSAs (1,098) as regular programming (2,087), but only approximately 150 minutes less total commercial and PSA time; therefore, cable, in general, had longer commercials than regular programming. Overall, cable programming had more advertisements encouraging increased physical activity and positive nutrition behavior and fewer commercials focusing on the fats, oils, and sugar group, compared to regular programming.
During the last 13 years, food advertisements have not improved, despite the recent IOM report on marketing foods to children (Institute of Medicine-Committee on Food Marketing and the Diets of Children and Youth, 2005), although the frequency of food advertisements has decreased slightly. Children are now viewing an average of one food advertisement every 7 minutes, compared to one every 5 minutes in 1994 (Kotz & Story, 1994). Therefore, manufacturers are putting a greater emphasis on advertising other products to children. Despite the recent attention to the issue of marketing unhealthy foods to children through television advertisements, not much progress has been noted since 1994. Further advocacy and regulatory issues concerning the content of advertisements during Saturday morning TV need to be explored.
Abstract:
Community-based participatory research necessitates that community members act as partners in decision making and mutual learning and discovery. In the same light, for programs and issues involving youth, youth should be partners in knowledge sharing and evaluation (Checkoway & Richards-Schuster, 2004). This study is a youth-focused empowerment evaluation for the Successful Youth program. Successful Youth is a multi-component youth development after-school program for Latino middle school youth, created with the goal of reducing teen pregnancy. An empowerment evaluation is collaborative and participatory (Balcazar & Harper, 2003). The three steps of an empowerment evaluation are: (1) defining the mission, (2) taking stock, and (3) planning for the future (Fetterman, 2001).
In a program where youth are developing leadership skills, making choices, and learning how to self-reflect and evaluate, the empowerment evaluation could not be more aligned with promoting and enhancing these skills. In addition, an empowerment evaluation is designed to "foster improvement and self-determination" and "build capacity" (Fetterman, 2001). Four empowerment groups were conducted with approximately 6-9 Latino 7th-grade students per group. All participants were enrolled in the Successful Youth program. Results indicate points where students' perceptions of the program were aligned with the program's mission and where gaps were identified. Students offered recommendations for program improvements. Additionally, students enjoyed expressing their feelings about the program and appreciated that their opinions were valued. Youth recommendations will be brought to program staff and, where possible, gaps will be addressed. Empowerment evaluations with youth will continue for the duration of the program so that youth involvement and input remain integral to the evaluation and to ascertain whether the program's goals are being met.
Abstract:
The investigator conducted an action-oriented investigation of pregnancy and birth among the women of Mesa los Hornos, an urban squatter slum in Mexico City. Three aims guided the project: (1) to obtain information for improving prenatal and maternity service utilization; (2) to examine the utility of rapid ethnographic and epidemiologic assessment methodologies; (3) to cultivate community involvement in health development.
Viewing service utilization as a culturally-bound decision, the study included a qualitative phase to explore women's cognition of pregnancy and birth, their perceived needs during pregnancy, and their criteria of service acceptability. A probability-based community survey delineated parameters of service utilization and pregnancy health events, and probed reasons for decisions to use medical services, lay midwives, or other sources of prenatal and labor and delivery assistance. A qualitative survey of service providers at relevant clinics, hospitals, and practices contributed information on service availability and access, and on coordination among private, social security, and public assistance health service sectors. The ethnographic approach to exploring the rationale for use or non-use of services provided a necessary complement to conventional barrier-based assessment, to inform the planning of culturally appropriate interventions.
Information collection and interpretation were conducted under the aegis of an advisory committee of community residents and service agency representatives; the residents' committee formulated recommendations for action based on findings, and forwarded the mandate to governmental social and urban development offices. Recommendations were designed to inform and develop community participation in health care decision-making.
Rapid research methods are powerful tools for achieving community-based empowerment toward the investigation and resolution of local health problems. But while ethnography works well in synergy with quantitative assessment approaches to strengthen the validity and richness of short-term field work, the author strongly urges caution in the application of Rapid Ethnographic Assessments. An ethnographic sensibility is essential to the research enterprise for the development of an active and cooperative community base, the design and use of quantitative instruments, the appropriate use of qualitative techniques, and the interpretation of culturally-oriented information. However, prescribed and standardized Rapid Ethnographic Assessment techniques are counter-productive if used as research short-cuts before locale- and subject-specific cultural understanding is achieved.
Abstract:
Introduction: Both a systems approach to change and a focus on multi-sector interventions ensure that obesity prevention programming within the community is equitable, sustainable, and cost-effective. An authentic community engagement approach is required to implement interventions guided by best-evidence research and practice. Although there are examples illustrating the benefits of community engagement, there is no standardized method to implement it. The San Antonio Sports Foundation (SA Sports), a non-profit community-based organization, implements a variety of free events and programs promoting active lifestyles. One such program is the Fit Family Challenge (FFC), a summer-long program implemented at the school level and targeted at families.
Aims: This thesis was the culmination of the student's experience collaborating with SA Sports as part of a practicum opportunity. Using secondary data collected by the Fit Family Challenge during 2011, the goals of this thesis were to assess individual changes, evaluate short-term impact, and describe the community engagement process.
Methods: SA Sports collected quantitative and qualitative data during the implementation and evaluation of the FFC program. SA Sports allowed de-identified data to be analyzed to address the aims of this thesis.
Results: The program was able to provide families with the knowledge, information, and opportunity to exercise as a family and cook healthier meals. School district coordinators were generally satisfied and illustrated the benefits of a community partnership. An authentic community engagement was present, highlighting the importance of communication, collaboration, and the sustainability of such partnerships in the community.
Conclusion: The success of an obesity program should focus on triggers that initiate behavioral change rather than physiological changes. The evaluation was guided by a community engagement approach, which illustrated the development of new partnerships and the strengthening of other collaborations. Ultimately, the engagement approach empowered the community to identify its own problems and build collaboration, rather than tackling obesity prevention alone.
Abstract:
Recently, vision-based advanced driver-assistance systems (ADAS) have received renewed interest as a means to enhance driving safety. In particular, due to their high performance–cost ratio, mono-camera systems are emerging as the main focus of this field of work. In this paper we present a novel on-board road modeling and vehicle detection system, which is part of the results of the European I-WAY project. The system relies on a robust estimation of the perspective of the scene, which adapts to the dynamics of the vehicle and generates a stabilized rectified image of the road plane. This rectified plane is used by a recursive Bayesian classifier, which classifies pixels as belonging to different classes corresponding to the elements of interest of the scenario. This stage works as an intermediate layer that isolates subsequent modules, since it absorbs the inherent variability of the scene. The system has been tested on-road, in different scenarios, including varied illumination and adverse weather conditions, and the results have proved remarkable even for such complex scenarios.
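The abstract gives no implementation details; the following is only a minimal sketch of the kind of per-pixel recursive Bayesian update it describes. The class names, likelihood values, and the forgetting factor used as a temporal prior are invented for the example and are not taken from the paper.

```python
import numpy as np

# Hypothetical classes for pixels of the rectified road image; the actual
# classes and likelihood models used by the authors are not specified here.
CLASSES = ["pavement", "lane_marking", "vehicle", "background"]

def recursive_bayes_update(posterior, likelihood, forgetting=0.1):
    """One recursive Bayesian update for a single pixel.

    posterior  : P(class | observations up to t-1), shape (4,)
    likelihood : P(observation_t | class), shape (4,)
    forgetting : blends the old posterior toward uniform so the filter can
                 adapt when the scene changes (an assumed temporal model).
    """
    uniform = np.full(len(posterior), 1.0 / len(posterior))
    prior = (1.0 - forgetting) * posterior + forgetting * uniform
    unnormalized = prior * likelihood
    return unnormalized / unnormalized.sum()

# Example: a pixel previously believed to be pavement keeps observing a bright,
# low-saturation value that is more likely under "lane_marking".
posterior = np.array([0.7, 0.1, 0.1, 0.1])
likelihood = np.array([0.2, 0.6, 0.1, 0.1])
for _ in range(3):                      # three consecutive frames
    posterior = recursive_bayes_update(posterior, likelihood)
print(dict(zip(CLASSES, posterior.round(3))))
```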
Abstract:
We show a method for parallelizing top-down dynamic programs in a straightforward way by a careful choice of a lock-free shared hash table implementation and randomization of the order in which the dynamic program computes its subproblems. This generic approach is applied to dynamic programs for knapsack, shortest paths, and RNA structure alignment, as well as to a state-of-the-art solution for minimizing the maximum number of open stacks. Experimental results are provided on three different modern multicore architectures, which show that this parallelization is effective and reasonably scalable. In particular, we obtain over 10 times speedup for 32 threads on the open stacks problem.
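The paper itself targets knapsack, shortest paths, RNA structure alignment, and open stacks over a lock-free hash table; the sketch below only illustrates the general recipe on 0/1 knapsack. A plain Python dict (made safe by the GIL) stands in for the lock-free table, so it shows the structure of the approach rather than the reported speedups, and all names in the snippet are invented for illustration.

```python
import random
import threading

def parallel_knapsack(values, weights, capacity, n_threads=4):
    """Top-down DP for 0/1 knapsack over a shared memo table.

    Each worker recurses over the same (item, capacity) subproblems but in a
    randomized branch order, so the threads tend to populate different parts
    of the table, as in the paper's approach.
    """
    memo = {}

    def solve(i, cap, rng):
        if i == len(values) or cap == 0:
            return 0
        key = (i, cap)
        if key in memo:
            return memo[key]
        # Randomize which branch (skip item vs. take item) is explored first.
        branches = [lambda: solve(i + 1, cap, rng)]
        if weights[i] <= cap:
            branches.append(lambda: values[i] + solve(i + 1, cap - weights[i], rng))
        rng.shuffle(branches)
        best = max(b() for b in branches)
        memo[key] = best   # benign race: every thread writes the same value
        return best

    threads = [threading.Thread(target=solve, args=(0, capacity, random.Random(s)))
               for s in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return solve(0, capacity, random.Random(0))

print(parallel_knapsack([60, 100, 120], [10, 20, 30], 50))  # -> 220
```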
Abstract:
Concepts are defined and Valverde's theorem is applied to write an algorithm that computes bases of similarities. This paper studies some theory and methods to build, via a representation theorem, the basis of a similarity from the bases of its subsimilarities, providing an alternative recursive method to compute the basis of a similarity.
Abstract:
This letter presents a novel recursive active filter topology that provides dual-band performance, with independent tuning capability in both bands. The dual-band operation is achieved by using two independent feedback lines. Additionally, linear phase shifters based on left-handed cells are included in these two branches in order to tune the center frequency of both pass bands.
Abstract:
In this paper we propose a new method for the automatic detection and tracking of road traffic signs using an on-board single camera. This method aims to increase the reliability of the detections so that it can boost the performance of any traffic sign recognition scheme. The proposed approach exploits a combination of different features, such as color, appearance, and tracking information. This information is introduced into a recursive Bayesian decision framework, in which prior probabilities are dynamically adapted to tracking results. This decision scheme obtains a number of candidate regions in the image according to their hue-saturation (HS) values. Finally, a Kalman filter with adaptive noise tuning provides the required time and spatial coherence to the estimates. Results have shown that the proposed method achieves high detection rates in challenging scenarios, including illumination changes, rapid motion, and significant perspective distortion.
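As a rough illustration of the final tracking stage, the sketch below runs a constant-velocity Kalman filter over a sign's image position and inflates the measurement noise when the innovation is large. The state model, noise values, and adaptation rule are assumptions made for the example, not the authors' actual tuning scheme.

```python
import numpy as np

def kalman_track(measurements, dt=1.0):
    """Constant-velocity Kalman filter for a sign's (x, y) image position."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], dtype=float)   # state: x, y, vx, vy
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)    # only position is observed
    Q = np.eye(4) * 0.01                         # process noise (assumed)
    R = np.eye(2) * 4.0                          # base measurement noise (assumed)
    x = np.array([measurements[0][0], measurements[0][1], 0.0, 0.0])
    P = np.eye(4) * 10.0
    track = []
    for z in measurements[1:]:
        # Predict.
        x = F @ x
        P = F @ P @ F.T + Q
        # Adaptive step: distrust measurements far from the prediction.
        innovation = np.asarray(z, dtype=float) - H @ x
        R_t = R * (1.0 + float(innovation @ innovation) / 100.0)
        # Update.
        S = H @ P @ H.T + R_t
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ innovation
        P = (np.eye(4) - K @ H) @ P
        track.append(x[:2].copy())
    return track

# The last measurement jumps away from the track and is therefore damped.
print(kalman_track([(100, 50), (103, 52), (107, 53), (180, 90)])[-1])
```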
Abstract:
The calculus of binary relations was created by De Morgan in 1860 and later developed to a great extent by Peirce and Schröder. Tarski, Givant, Freyd, and Scedrov showed that relation algebras are capable of formalizing first-order logic, higher-order logic, and set theory. Building on the mathematical results of Tarski and Freyd, this thesis develops denotational and operational semantics for constraint logic programming using relation algebra as its foundation. The main idea is the use of the concept of executable semantics: semantics whose main characteristic is that execution is possible using the standard reasoning of the semantic universe, in this case equational reasoning. In this work, it is shown that distributive relation algebras with a fixed-point operator capture all the standard theory and metatheory of constraint logic programming, including the trees used in proof search. Most program optimization techniques, partial evaluation, and abstract interpretation can be carried out using the semantics presented here. Proving the correctness of the implementation becomes extremely simple. In the first part of the thesis, a constraint logic program is translated into a set of relational terms. The standard set-theoretic interpretation of these relations coincides with the standard semantics for CLP. Queries against the translated program are carried out by rewriting relations. To conclude the first part, the correctness and operational equivalence of this new semantics are proved, and a unification algorithm based on relation rewriting is defined. The second part of the thesis develops a semantics for constraint logic programming using Freyd's theory of allegories, the categorical version of the algebra of relations. To this end, two new notions are defined, Regular Lawvere Category and _-Allegory, in which a logic program can be interpreted. The fundamental advantage of the categorical approach is the definition of a categorical machine that improves on the rewriting system presented in the first part. Thanks to the use of tabular relations, the machine models efficient execution without leaving a strictly formal framework. Using diagram rewriting, an algorithm for computing pullbacks in Regular Lawvere Categories is defined. The domains of the tabulations provide information about memory usage and free variables, while shared state is captured by the diagrams. The specification of the machine induces the formal derivation of an efficient instruction set. The categorical framework brings other important advantages, such as the possibility of incorporating algebraic data types, functions, and other extensions into Prolog, while preserving the fully declarative character of our semantics.
ABSTRACT
The calculus of binary relations was introduced by De Morgan in 1860, to be greatly developed by Peirce and Schröder, as well as many others in the twentieth century. Using different formulations of relational structures, Tarski, Givant, Freyd, and Scedrov have shown how relation algebras can provide a variable-free way of formalizing first-order logic, higher-order logic and set theory, among other formal systems.
Building on those mathematical results, we develop denotational and operational semantics for Constraint Logic Programming using relation algebra. The idea of executable semantics plays a fundamental role in this work, both as a philosophical and technical foundation. We call a semantics executable when program execution can be carried out using the regular theory and tools that define the semantic universe. Throughout this work, the use of pure algebraic reasoning is the basis of denotational and operational results, eliminating all the classical non-equational meta-theory associated with traditional semantics for Logic Programming. All reasoning, including execution, is performed algebraically, to the point that we could state that the denotational semantics of a CLP program is directly executable. Techniques like optimization, partial evaluation and abstract interpretation find a natural place in our algebraic models. Other properties, like correctness of the implementation or program transformation, are easy to check, as they are carried out using instances of the general equational theory. In the first part of the work, we translate Constraint Logic Programs to binary relations in a modified version of the distributive relation algebras used by Tarski. Execution is carried out by a rewriting system. We prove adequacy and operational equivalence of the semantics. In the second part of the work, the relation algebraic approach is improved by using allegory theory, a categorical version of the algebra of relations developed by Freyd and Scedrov. The use of allegories lifts the semantics to typed relations, which capture the number of logical variables used by a predicate or program state in a declarative way. A logic program is interpreted in a _-allegory, which is in turn generated from a new notion of Regular Lawvere Category. As in the untyped case, program translation coincides with program interpretation. Thus, we develop a categorical machine directly from the semantics. The machine is based on relation composition, with a pullback calculation algorithm at its core. The algorithm is defined with the help of a notion of diagram rewriting. In this operational interpretation, types represent information about memory allocation and the execution mechanism is more efficient, thanks to the faithful representation of shared state by categorical projections. We finish the work by illustrating how the categorical semantics allows the incorporation into Prolog of constructs typical of Functional Programming, like abstract data types, and strict and lazy functions.
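Purely as an illustration of the relation-algebraic reading of a recursive logic program, the sketch below encodes finite binary relations as sets of pairs and computes a recursively defined predicate as a least fixed point of union and composition. The clause names and the naive finite-set encoding are invented for this example; they are not the distributive relation algebras, allegories, or rewriting machinery the thesis actually develops.

```python
# Relation-algebraic reading of the recursive logic program
#   ancestor(X, Y) :- parent(X, Y).
#   ancestor(X, Y) :- parent(X, Z), ancestor(Z, Y).
# as the least fixed point of  A = parent | (parent ; A).

def compose(r, s):
    """Relational composition r ; s over finite sets of pairs."""
    return {(a, c) for (a, b) in r for (b2, c) in s if b == b2}

def least_fixed_point(step, bottom=frozenset()):
    """Iterate a monotone operator on finite relations until it stabilizes."""
    current = set(bottom)
    while True:
        nxt = step(current)
        if nxt == current:
            return current
        current = nxt

parent = {("ann", "bob"), ("bob", "cat"), ("cat", "dan")}
ancestor = least_fixed_point(lambda a: parent | compose(parent, a))
print(sorted(ancestor))   # the transitive closure of parent
```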
Abstract:
Irregular computations pose some of the most interesting and challenging problems in automatic parallelization. Irregularity appears in certain kinds of numerical problems and is pervasive in symbolic applications. Such computations often use dynamic data structures, which make heavy use of pointers. This complicates all the steps of a parallelizing compiler, from independence detection to task partitioning and placement. Starting in the mid-80s there has been significant progress in the development of parallelizing compilers for logic programming (and more recently, constraint programming), resulting in quite capable parallelizers. The typical applications of these paradigms frequently involve irregular computations and make heavy use of dynamic data structures with pointers, since logical variables represent in practice a well-behaved form of pointers. This arguably makes the techniques used in these compilers potentially interesting. In this paper, we introduce in a tutorial way some of the problems faced by parallelizing compilers for logic and constraint programs and provide pointers to some of the significant progress made in the area. In particular, this work has resulted in a series of achievements in the areas of inter-procedural pointer aliasing analysis for independence detection, cost models and cost analysis, cactus-stack memory management, techniques for managing speculative and irregular computations through task granularity control and dynamic task allocation (such as work-stealing schedulers), etc.
Abstract:
Compilation techniques such as those portrayed by the Warren Abstract Machine (WAM) have greatly improved the speed of execution of logic programs. The research presented herein is geared towards providing additional performance to logic programs through the use of parallelism, while preserving the conventional semantics of logic languages. Two areas to which special attention is given are the preservation of sequential performance and storage efficiency, and the use of low-overhead mechanisms for controlling parallel execution. Accordingly, the techniques used for supporting parallelism are efficient extensions of those which have brought high inferencing speeds to sequential implementations. At a lower level, special attention is also given to design and simulation detail and to the architectural implications of the execution model behavior. This paper offers an overview of the basic concepts and techniques used in the parallel design, the simulation tools used, and some of the results obtained to date.
Abstract:
We report on a detailed study of the application and effectiveness of program analysis based on abstract interpretation to automatic program parallelization. We study the case of parallelizing logic programs using the notion of strict independence. We first propose and prove correct a methodology for the application in the parallelization task of the information inferred by abstract interpretation, using a parametric domain. The methodology is generic in the sense of allowing the use of different analysis domains. A number of well-known approximation domains are then studied and the transformation into the parametric domain defined. The transformation directly illustrates the relevance and applicability of each abstract domain for the application. Both local and global analyzers are then built using these domains and embedded in a complete parallelizing compiler. Then, the performance of the domains in this context is assessed through a number of experiments. A comparatively wide range of aspects is studied, from the resources needed by the analyzers in terms of time and memory to the actual benefits obtained from the information inferred. Such benefits are evaluated both in terms of the characteristics of the parallelized code and of the actual speedups obtained from it. The results show that data flow analysis plays an important role in achieving efficient parallelizations, and that the cost of such analysis can be reasonable even for quite sophisticated abstract domains. Furthermore, the results also offer significant insight into the characteristics of the domains, the demands of the application, and the trade-offs involved.
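The abstract describes the methodology only at a high level; as a purely illustrative sketch of the parallelization condition it relies on, strict independence, the snippet below marks two consecutive goals as parallelizable when every variable they share is known to be ground by the analysis. The predicate names, the greedy pairwise annotation, and the ground_vars input are invented for illustration; the paper's analyzers and parametric domain are far more involved.

```python
# Toy rendition of the strict-independence condition used for and-parallelism:
# two goals may run in parallel when they share no possibly-unbound variable,
# so their bindings cannot interfere.

def strictly_independent(goal_vars_a, goal_vars_b, ground_vars):
    """True if the goals share only variables known to be ground."""
    shared = set(goal_vars_a) & set(goal_vars_b)
    return shared <= set(ground_vars)

def annotate(goals, ground_vars):
    """Greedy annotation of consecutive goals: '&' (parallel) or ',' (sequential)."""
    annotated = [goals[0][0]]
    for (_, prev_vars), (name, vars_) in zip(goals, goals[1:]):
        op = " & " if strictly_independent(prev_vars, vars_, ground_vars) else ", "
        annotated.append(op + name)
    return "".join(annotated)

# q(X, Y) :- p(X), r(X, Z), s(Y, Z).  with X inferred ground at this point:
goals = [("p(X)", {"X"}), ("r(X, Z)", {"X", "Z"}), ("s(Y, Z)", {"Y", "Z"})]
print(annotate(goals, ground_vars={"X"}))   # -> p(X) & r(X, Z), s(Y, Z)
```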