935 results for "Power to decide process"
Abstract:
AIM: The purpose of this randomized split-mouth clinical trial was to determine the active tactile sensibility between single-tooth implants and opposing natural teeth and to compare it with the tactile sensibility of pairs of natural teeth on the contralateral side of the same mouth (intraindividual comparison). MATERIAL AND METHODS: The hypothesis was that the active tactile sensibilities of the implant side and the control side are equivalent. Sixty-two subjects (n=36 from Bonn, n=26 from Bern) with single-tooth implants (22 anterior and 40 posterior) were asked to bite on narrow copper foil strips of varying thickness (5-200 µm) and to decide whether or not they could identify a foreign body between their teeth. Active tactile sensibility was defined as the 50% threshold of correct answers, estimated by means of the Weibull distribution. RESULTS: Interocclusal perception sensibility differed between subjects far more than it differed between natural teeth and implants in the same individual [implant/natural tooth: 16.7±11.3 µm (0.6-53.1 µm); natural tooth/natural tooth: 14.3±10.6 µm (0.5-68.2 µm)]. The intraindividual differences amounted to a mean of only 2.4±9.4 µm (-15.1 to 27.5 µm). Our statistical calculations showed that the active tactile sensibility of single-tooth implants opposed by a natural tooth, both in the anterior and the posterior region of the mouth, is similar to that of pairs of opposing natural teeth (double t-test, equivalence margin: ±8 µm, P<0.001, power >80%). Hence, the implants appear to be integrated into the stomatognathic control circuit.
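As a concrete illustration of the threshold definition above, a Weibull psychometric function can be fitted to the yes/no detection data and the 50% point read off. The sketch below uses invented data points; only the fitting procedure follows the abstract.

```python
# A minimal sketch of the 50% threshold estimation; data are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def weibull_cdf(x, scale, shape):
    """Probability of detecting a foil of thickness x (Weibull CDF)."""
    return 1.0 - np.exp(-(x / scale) ** shape)

thickness_um = np.array([5, 10, 20, 40, 80, 160, 200], dtype=float)
p_detected = np.array([0.05, 0.15, 0.55, 0.80, 0.95, 1.00, 1.00])

(scale, shape), _ = curve_fit(weibull_cdf, thickness_um, p_detected, p0=[20.0, 2.0])

# 50% threshold: solve 1 - exp(-(x/scale)^shape) = 0.5 for x
threshold_50 = scale * np.log(2.0) ** (1.0 / shape)
print(f"active tactile sensibility ~ {threshold_50:.1f} um")
```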
Abstract:
This dissertation has three parts: the first deals with general pedigree association testing incorporating continuous covariates; the second with association tests under population stratification using conditional likelihood tests; the third with genome-wide association studies based on the real rheumatoid arthritis (RA) data sets from Genetic Analysis Workshop 16 (GAW16) Problem 1. Many statistical tests have been developed to test linkage and association using either case-control status or phenotype covariates for family data, but separately. Such univariate analyses may not use all the information coming from the family members in practical studies. Moreover, complex human diseases do not have a clear inheritance pattern: genes may interact or act independently. In Part I, the newly proposed approach, MPDT, focuses on using both the case-control information and the phenotype covariates, and can be applied to detect multiple marker effects. Building on two popular existing statistics in family studies, one for case-control data and one for quantitative traits, the new approach can be used on simple family structures as well as general pedigrees. A combined statistic is calculated from the two statistics, and a permutation procedure is applied to assess the p-value, with Bonferroni adjustment for the multiple markers. We use simulation studies to evaluate the type I error rates and the power of the proposed approach. Our results show that the combined test using both case-control information and phenotype covariates not only has correct type I error rates but is also more powerful than existing methods; for multiple-marker interactions, our proposed method is also very powerful. Selective genotyping is an economical strategy for detecting and mapping quantitative trait loci in the genetic dissection of complex disease. When the samples arise from different ethnic groups or an admixed population, all existing selective genotyping methods may yield spurious association due to different ancestry distributions. The problem can be more serious when the sample size is large, a general requirement for sufficient power to detect modest genetic effects for most complex traits. In Part II, I describe a useful strategy for selective genotyping in the presence of population stratification. Our procedure uses a principal-component-based approach to eliminate any effect of population stratification. We evaluate the performance of the procedure using both simulated data from an earlier study and HapMap data sets under a variety of population admixture models generated from empirical data. The rheumatoid arthritis data set of GAW16 Problem 1 contains one binary trait and two continuous traits: RA status, anti-CCP and IgM. To allow multiple traits, we propose a set of SNP-level F statistics based on the concept of multiple correlation to measure the genetic association between multiple trait values and SNP-specific genotypic scores, and we obtain their null distributions. We then perform six genome-wide association analyses using novel one- and two-stage approaches based on single, double and triple traits.
Incorporating all six analyses, we validate the SNPs already reported in the literature as responsible for rheumatoid arthritis and detect additional disease susceptibility SNPs for future follow-up studies. Except for chromosomes 13 and 18, every chromosome is found to harbour genetic regions susceptible for rheumatoid arthritis or related diseases, e.g., lupus erythematosus. This topic is discussed in Part III.
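The permutation scheme described in Part I can be sketched as follows. The combining rule (sum of two squared correlation-based statistics) and the toy statistics are illustrative assumptions; the abstract does not give the exact MPDT formula.

```python
# Hedged sketch of a combined case-control / quantitative-trait permutation
# test with Bonferroni adjustment; the combining rule is an assumption.
import numpy as np

rng = np.random.default_rng(0)

def combined_stat(genotypes, case_status, trait):
    t_cc = np.corrcoef(genotypes, case_status)[0, 1]  # case-control component
    t_qt = np.corrcoef(genotypes, trait)[0, 1]        # covariate component
    return t_cc ** 2 + t_qt ** 2

def permutation_pvalue(genotypes, case_status, trait, n_perm=10_000):
    observed = combined_stat(genotypes, case_status, trait)
    idx = np.arange(len(case_status))
    exceed = 0
    for _ in range(n_perm):
        rng.shuffle(idx)  # permute phenotypes jointly against genotypes
        if combined_stat(genotypes, case_status[idx], trait[idx]) >= observed:
            exceed += 1
    return (exceed + 1) / (n_perm + 1)

# Bonferroni adjustment over m markers: declare significance at alpha / m.
```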
Abstract:
OBJECTIVE: EORTC trial 30891 compared immediate versus deferred androgen-deprivation therapy (ADT) in T0-4 N0-2 M0 prostate cancer (PCa). Many patients randomly assigned to deferred ADT did not require ADT because they died before becoming symptomatic. The question arises whether serum prostate-specific antigen (PSA) levels may be used to decide when to initiate ADT in PCa not suitable for local curative treatment. METHODS: PSA data at baseline, PSA doubling time (PSADT) in patients receiving no ADT, and time to PSA relapse (>2 ng/ml) in patients whose PSA declined to <2 ng/ml within the first year after immediate ADT were analyzed in 939 eligible patients randomly assigned to immediate (n=468) or deferred ADT (n=471). RESULTS: In both arms, patients with a baseline PSA >50 ng/ml were at a >3.5-fold higher risk of dying of PCa than patients with a baseline PSA
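For reference, PSADT is conventionally estimated from the slope of ln(PSA) regressed on time. A minimal sketch, with hypothetical measurements:

```python
# PSA doubling time from the slope of ln(PSA) vs. time; data are hypothetical.
import numpy as np

months = np.array([0.0, 3.0, 6.0, 9.0, 12.0])
psa = np.array([4.0, 5.1, 6.4, 8.2, 10.3])  # ng/ml

slope = np.polyfit(months, np.log(psa), 1)[0]  # growth rate of ln(PSA) per month
psadt = np.log(2.0) / slope
print(f"PSADT ~ {psadt:.1f} months")
```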
Abstract:
In many deposits of silver ores the grade of the ore decreases considerably a few hundred feet below the surface. It is believed that in many cases the better ores owe their richness in part to the process of sulphide enrichment. It is recognized, however, that many rich silver ores are hypogene deposits that have been affected very little, if any, by processes of enrichment.
Abstract:
File system security is fundamental to the security of UNIX and Linux systems, since in these systems almost everything takes the form of a file. To protect system files and other sensitive user files from unauthorized access, organizations choose and deploy particular security schemes on their computer systems. A file system security model provides a formal description of a protection system. Each security model is associated with specified security policies that focus on one or more of the security principles: confidentiality, integrity and availability. A security policy is not only about "who" can access an object, but also about "how" a subject can access an object. To enforce the security policies, each access request is checked against the specified policies to decide whether it is allowed or rejected. The current protection schemes in UNIX/Linux systems focus on access control. Besides the basic access control scheme of the system itself, which includes permission bits, the setuid/seteuid mechanism and the root account, other protection models, such as Capabilities, Domain Type Enforcement (DTE) and Role-Based Access Control (RBAC), are supported and used in certain organizations. These models protect the confidentiality of data directly; the integrity of data is protected indirectly, by allowing only trusted users to operate on the objects. The access control decisions of these models depend either on the identity of the user, or on the attributes of the processes the user can execute and the attributes of the objects. Adoption of these sophisticated models has been slow, likely due to the enormous complexity of specifying controls over a large file system and the need for system administrators to learn a new paradigm for file protection. We propose a new security model, the file system firewall: an adaptation of the familiar network firewall model, used to control the data that flows between networked computers, to file system protection. This model can base access control decisions on any system-generated attribute of an access request, e.g., the time of day. Access decisions are not tied to a single entity, such as the account in traditional discretionary access control or the domain name in DTE; in the file system firewall, access decisions are made upon situations involving multiple entities. A situation is programmable, with predicates on attributes of the subject, the object and the system, and the file system firewall specifies the appropriate action for each situation. We implemented a prototype of the file system firewall on SUSE Linux. Preliminary performance tests on the prototype indicate that the runtime overhead is acceptable. We compared the file system firewall with TE in SELinux to show that the firewall model can accommodate many other access control models. Finally, we show the ease of use of the firewall model. When the firewall is restricted to a specified part of the system, all other resources are unaffected, which enables a relatively smooth adoption. This, and the fact that the model is familiar to system administrators, should facilitate adoption and correct use. A user study we conducted on traditional UNIX access control, SELinux and the file system firewall confirmed this: beginners found the firewall model easier to use and faster to learn than the traditional UNIX access control scheme and SELinux.
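A minimal sketch of the "situation" idea follows; all names are hypothetical, and the actual prototype was a kernel-level mechanism on SUSE Linux, not Python. Each rule is a predicate over attributes of the subject, the object and the system, and the first matching rule decides.

```python
# Firewall-style file access rules: predicates over subject, object and
# system attributes; first match decides. Names are hypothetical.
from datetime import datetime

RULES = [
    # (predicate(subject, obj, env) -> bool, action)
    (lambda s, o, env: o["path"].startswith("/etc/") and s["uid"] != 0, "DENY"),
    (lambda s, o, env: env["hour"] < 6 and o["sensitive"], "DENY"),
    (lambda s, o, env: True, "ALLOW"),  # default rule
]

def check_access(subject, obj):
    env = {"hour": datetime.now().hour}  # system-generated attribute
    for predicate, action in RULES:
        if predicate(subject, obj, env):
            return action

print(check_access({"uid": 1000}, {"path": "/etc/shadow", "sensitive": True}))
```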
Abstract:
BACKGROUND: Bleeding is a frequent complication during surgery. The intraoperative administration of blood products, including packed red blood cells, platelets and fresh frozen plasma (FFP), is often life saving. Complications of blood transfusions contribute considerably to perioperative costs, and blood product resources are limited. Consequently, strategies to optimize the decision to transfuse are needed. Bleeding during surgery is a dynamic process and may result in major blood loss and coagulopathy due to dilution and consumption. The indication for transfusion should be based on reliable coagulation studies. While hemoglobin levels and platelet counts are available within 15 minutes, standard coagulation studies require one hour; the decision to administer FFP therefore has to be made in the absence of timely data. Point-of-care testing of prothrombin time ensures that one major parameter of coagulation is available in the operating theatre within minutes. It is fast, easy to perform, inexpensive and may enable physicians to rationally determine the need for FFP. METHODS/DESIGN: The objective of the POC-OP trial is to determine the effectiveness of point-of-care prothrombin time testing in reducing the administration of FFP. It is a patient- and assessor-blind, single-center randomized controlled parallel-group trial in 220 patients aged between 18 and 90 years undergoing major surgery (any type, except cardiac surgery and liver transplantation) with an estimated blood loss during surgery exceeding 20% of the calculated total blood volume or a requirement for FFP according to the judgment of the physicians in charge. Patients are randomized to usual care plus point-of-care prothrombin time testing or to usual care alone. The primary outcome is the relative risk of receiving any FFP perioperatively. The inclusion of 110 patients per group will yield more than 80% power to detect a clinically relevant relative risk of 0.60 for receiving FFP in the experimental as compared with the control group. DISCUSSION: Point-of-care prothrombin time testing in the operating theatre may reduce the administration of FFP considerably, which in turn may decrease the costs and complications usually associated with the administration of blood products. TRIAL REGISTRATION: NCT00656396.
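The stated power can be checked with a standard two-proportion calculation. The 60% control-group FFP rate below is a pure assumption for illustration; the protocol's actual design assumptions are not given in the abstract.

```python
# Sanity check of the power claim under an assumed control-group FFP rate.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

p_control = 0.60                   # assumed control-group FFP rate (illustration)
p_experimental = 0.60 * p_control  # relative risk 0.60 -> 36%

es = proportion_effectsize(p_control, p_experimental)  # Cohen's h
power = NormalIndPower().power(effect_size=es, nobs1=110, alpha=0.05, ratio=1.0)
print(f"power ~ {power:.2f}")  # ~0.95 under these assumptions, i.e. >80%
```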
Abstract:
Engineers are confronted with the energy demand of active medical implants in patients with increasing life expectancy. Scavenging energy from the patient’s body is envisioned as an alternative to conventional power sources. Joining in this effort towards human-powered implants, we propose an innovative concept that combines the deformation of an artery resulting from the arterial pressure pulse with a transduction mechanism based on magneto-hydrodynamics. To overcome certain limitations of a preliminary analytical study on this topic, we demonstrate here a more accurate model of our generator by implementing a three-dimensional multiphysics finite element method (FEM) simulation combining solid mechanics, fluid mechanics, electric and magnetic fields as well as the corresponding couplings. This simulation is used to optimize the generator with respect to several design parameters. A first validation is obtained by comparing the results of the FEM simulation with those of the analytical approach adopted in our previous study. With an expected overall conversion efficiency of 20% and an average output power of 30 μW, our generator outperforms previous devices based on arterial wall deformation by more than two orders of magnitude. Most importantly, our generator provides sufficient power to supply a cardiac pacemaker.
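As a rough sanity check of the quoted figures, and assuming the 20% efficiency relates average electrical output to the hydraulic power harvested from the pressure pulse, the implied input power is

$$P_{\text{in}} = \frac{P_{\text{out}}}{\eta} = \frac{30\,\mu\text{W}}{0.20} = 150\,\mu\text{W},$$

i.e., the device needs to extract only about 150 µW from the arterial pulsation to deliver the reported 30 µW.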
Abstract:
Linkage disequilibrium methods can be used to find genes influencing quantitative trait variation in humans. Linkage disequilibrium methods can require smaller sample sizes than linkage equilibrium methods, such as the variance component approach, to find loci with a specific effect size. The increase in power comes at the expense of requiring more markers to be typed to scan the entire genome. This thesis compares different linkage disequilibrium methods to determine which factors influence the power to detect disequilibrium. The costs of disequilibrium and equilibrium tests were compared to determine whether the savings in phenotyping costs when using disequilibrium methods outweigh the additional genotyping costs. Nine linkage disequilibrium tests were examined by simulation. Five tests involve selecting isolated unrelated individuals, while four involve the selection of parent-child trios (TDT). All nine tests were found to identify disequilibrium at the correct significance level in Hardy-Weinberg populations. Increasing linked genetic variance and trait allele frequency increased the power to detect disequilibrium, while increasing the number of generations and the distance between marker and trait loci decreased it. Discordant sampling was used for several of the tests; the more stringent the sampling, the greater the power to detect disequilibrium in a sample of given size. The power to detect disequilibrium was not affected by the presence of polygenic effects. When the trait locus had more than two trait alleles, the maximum power of the tests fell below one. For the simulation methods used here, when there were more than two trait alleles there was a probability equal to 1 minus the heterozygosity of the marker locus that both trait alleles were in disequilibrium with the same marker allele, rendering the marker uninformative for disequilibrium. The five tests using isolated unrelated individuals showed excess error rates when there was disequilibrium due to population admixture. Increased error rates also resulted from increased unlinked major gene effects, discordant trait allele frequency, and increased disequilibrium. Polygenic effects did not affect the error rates. The TDT (Transmission Disequilibrium Test) based tests were not liable to any increase in error rates. For all sample ascertainment costs, for recent mutations (<100 generations) linkage disequilibrium tests were less expensive to carry out than the variance component test. Candidate gene scans saved even more money. The use of recently admixed populations also decreased the cost of performing a linkage disequilibrium test.
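The trio-based tests referred to above build on the classical TDT, which can be sketched in a few lines: from heterozygous parents, count transmissions b versus non-transmissions c of the candidate allele; under the null, (b − c)²/(b + c) is chi-square with 1 df. The counts below are hypothetical.

```python
# Classical TDT on hypothetical transmission counts.
from scipy.stats import chi2

b, c = 62, 38  # transmitted / not transmitted from heterozygous parents
tdt = (b - c) ** 2 / (b + c)       # McNemar-type statistic
p_value = chi2.sf(tdt, df=1)       # chi-square with 1 df under the null
print(f"TDT = {tdt:.2f}, p = {p_value:.4f}")  # TDT = 5.76, p ~ 0.016
```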
Abstract:
This study critically analyzes and synthesizes community participation (CP) theory across disciplines, defining and beginning to map out the elements of CP according to a preliminary framework of structure, process, intermediate outcomes, and ultimate outcomes. The first study component sought to determine the impact of Sight N' Soul, a CP project utilizing neighborhood health workers (NHWs), on missed appointments in an indigent urban African-American population. It found that persons entering the vision care system through contact with an NHW were about a third less likely to miss an appointment than persons entering the system through some other avenue. While theory in this area remains too poorly developed to hypothesize causal relationships between structure, process, and outcomes, a summary of the elements of Sight N' Soul's structure and process both developed the preliminary framework and serves as a first step toward mapping these relationships. The second component of the study uncovered the elements of structure and process that may contribute to a sustained egalitarian partnership between community people and professionals in a CP program called Project HEAL. Elements of Project HEAL's structure and process included a shared belief in the program; spirituality; contribution, ownership, and reciprocation; a feeling of family; making it together; honesty, trust, and openness about conflict; the inevitability of uncertainty and change; and the guiding interactional principles of respect; love, care, and compassion; and personal responsibility. The third component analyzed the existing literature, identifying and addressing gaps and inconsistencies and highlighting areas needing more highly developed ethical analysis. Focal issues include the political, economic, and historical context of CP; the power of naming; the issue of purpose; the nature of community; the power to muster and allocate resources; and the need to move to a systems view of health and well-being, expanding our understanding of the universe of potential outcomes of CP, including iatrogenic outcomes. Intermediate outcomes might include change in community, program, and individual capacity, as well as improved health care delivery. Ultimate outcomes include increased positive interdependencies and opportunities for contribution; improved mental, physical, and spiritual health; increased social justice; and decreased exploitation.
Abstract:
Introduction: Brands play an essential role in the organizational structure of snowboarding by sponsoring athletes, arranging events, contributing to product development and developing long-term partnerships with other key actors. However, the specifics of their role in scene sports, such as creating identities, networking and brand marketing strategies, have not been extensively researched. This study aims to provide an analysis of the function of brands within the snowboarding subculture by comparing how the sport is organized in Switzerland and New Zealand. Sociological theories of subcultures (Hitzler & Niederbacher, 2010) and social networks (Stegbauer, 2008) are used to define the structures of the sport, whereas marketing and branding theories (Adjouri & Stastny, 2006) help to understand the role of the brands. Snowboarding is defined as an alternative sports subculture based on characteristics such as aesthetics, adventure and new resources of performance (Schwier, 2006). Such a definition also calls for a novel way of analyzing its organization. Unlike more conventional structures, the organization of snowboarding allows a variety of actors to get involved in leading the sport. By portraying and encouraging differentiated identities and lifestyles, athletes provide a space for other actors to find their place within the sport (Wheaton, 2005). According to Stegbauer's network theory, individual actors are able to obtain high positions and define their identity depending on their ties to actors and networks within the subculture (Stegbauer, 2008). For example, social capital, contacts within the sport and insider knowledge of subculture-related information enable actors to get closer to the core (Hitzler & Niederbacher, 2010). Actors who do not have close networks and allies within the subculture are less likely to engage successfully in the culture, whether as individuals or as commercial actors (Thorpe, 2011). This study focuses on the organizational structure of snowboarding by comparing the development of the sport in Switzerland and New Zealand. Analyzing snowboarding in two nations with diverse cultures and economic systems allows a further definition of the structural organization of the sport and explains how brands play an important role in it. Methods: The structural organization of the sport was analyzed through an ethnographic approach, using participant observation at various leading events in Switzerland (Freestyle.ch, European Open) and New Zealand (World Heli Challenge, New Zealand Open, New Zealand Winter Games). The data were analyzed using grounded theory (Glaser & Strauss, 1967) and give an overview of the actors playing an important role in the local development of snowboarding. Participant observation was also used as a tool to get inside the sport culture and opened up the possibility of conducting over 40 semi-structured qualitative expert interviews with international core actors from 11 countries. Obtaining access to one actor as a partner early on helped to get inside the local sport culture. The 'snowball effect' allowed the researcher to acquire access, build trust and conduct interviews with experts within the core scene. All the interviewed actors have a direct influence on the sport in one or both countries, which permits a cross-analysis. The interview data were evaluated through content analysis (Mayring, 2010).
The two methods together provided sufficient data to analyze the organizational structure and discuss the role of brand marketing within snowboarding. Results: A mapping of actors by means of a center-periphery framework identified five main core groups: athletes, media representatives, brand-marketing managers, resort managers and event organizers. In both countries the same grouping of actors was found. Despite possessing different and frequently multiple roles and responsibilities, core actors appear to have a strong common identification as 'snowboarders', are considered part of the organizational elite of the sport and tend to advocate similar goals. The author found that brands in Switzerland tend to have a larger impact on the broader snowboarding culture due to a number of factors discussed below. Owing to a larger number of snowboarders and stronger economic power in Europe, snowboarders there make attempts to differentiate themselves from other winter sports while competing with each other to develop niche markets. In New Zealand, on the other hand, the smaller market enables more cooperation and mutual respect among snowboarders. Further, they are more closely linked to other winter sports and are satisfied with being lumped together. In both countries, brands have taken up the role of supporting young athletes, organizing competitions and feeding media with subculture-related content. Brands build their image and identity through collaboration with particular athletes who can represent the values of the brand. Local and global communities with similar lifestyles and interests are being built around brands that share a common vision of the sport. The dominance of brands in snowboarding has given them the power to organize and rule the sport through its fan base and supporters. Brands were defined by interviewees as independent institutions led by insiders who know the codes and symbols of the sport and have been given trust and credibility. The brands identify themselves as the engines of the sport, providing equipment, giving athletes opportunities for exposure, and allowing media exclusive access to information on activities, events and sport-related stories. Differences between the two countries relate mostly to the economic system. While Switzerland is well integrated in the broader European market, New Zealand's geographical isolation and close proximity to Australia tend to limit its market. Further, due to different cultural lifestyles, access to resorts and seasonal restrictions, to name a few, the number of people practicing winter sports in New Zealand is much smaller than in Switzerland. However, this also presents numerous advantages. For example, the short southern-hemisphere winter season enables New Zealand to attract international athletes, brands and representatives in a period when Europe and North America are in summer. Further, New Zealand's unique snow conditions and majestic landscape attract world-renowned photographers and cinematographers. Another advantage is the less populated network, which provides the opportunity for individuals to gain easier access to the core of the sport, obtain diverse positions and form a unique identity and market. In Switzerland, on the other hand, the snowboarding network is dense, with few positions available for the taking. Homegrown brands with core recognition are found in both countries.
It was found that the Swiss brands tend to have a larger impact on the market, whereas in New Zealand the sport is more dependent on products imported by foreign brands. Further, athletes, events and resorts in New Zealand often depend on large brand sponsorships from abroad, such as from brand headquarters in the United States. Due to Switzerland's location in the centre of Europe, Swiss athletes and events can draw on sponsoring brands that are closer in proximity and culture. In terms of media coverage, winter sports in New Zealand tend to have minor coverage and little tradition in local mass media, which leads to less exposure, recognition and investment in the sport. This is also related to how snowboarding is more integrated into other winter sports in New Zealand. Another difference is the accessibility of the ski resorts. While in Switzerland the resorts are mostly visited by day-travelers, 'weekend warriors' and holiday makers, the location of the resorts in New Zealand makes them difficult to visit for one day. This is partly because Swiss ski resorts and villages usually share the same location and are accessible by public transportation, while the ski resorts in New Zealand have been built separately from the villages. Further, the villages have not been built to accommodate high tourist arrivals; accommodation and food facilities are limited and there is a lack of public transportation to the resorts. Discussion: The findings show that networks and social relations, combined with specific knowledge of scene-related attributes, are crucial in obtaining opportunities within the sport. Partnerships as well as competition between these different actors are necessary for core acceptance, peer credibility and successful commercial interests. Brands need to maintain effective marketing strategies and identities which incorporate subcultural forms of behavior and communication. In order to sustain credibility with fans, athletes and other snowboarding actors, brands need to maintain their insider status through social networks and commercial branding strategies. The interaction between all actors is a reciprocal process in which social capital, networks and identities are shared. While the overall structures of the snowboard subcultures in Europe and New Zealand are similar, there are some distinct characteristics which make each one unique. References: Adjouri, N. & Stastny, P. (2006). Sport-Branding: Mit Sport-Sponsoring zum Markenerfolg. Wiesbaden: Gabler. Glaser, B. & Strauss, A. (1967). The discovery of grounded theory: Strategies for qualitative research. Chicago: Aldine. Hebdige, D. (2009). Subculture: The meaning of style. New York: Routledge. Hitzler, R. & Niederbacher, A. (2010). Leben in Szenen: Formen juveniler Vergemeinschaftung heute. Wiesbaden: Verlag für Sozialwissenschaften. Mayring, P. (2010). Qualitative Inhaltsanalyse: Grundlagen und Techniken. Weinheim: Beltz. Schwier, J. (2006). Repräsentationen des Trendsports. Jugendliche Bewegungskulturen, Medien und Marketing. In R. Gugutzer (Hrsg.), body turn: Perspektiven der Soziologie des Körpers und des Sports (S. 321-340). Bielefeld: transcript. Stegbauer, C. (2008). Netzwerkanalyse und Netzwerktheorie: Ein neues Paradigma in den Sozialwissenschaften. Wiesbaden: VS Verlag für Sozialwissenschaften. Thorpe, H. (2011). Snowboarding bodies in theory and practice. Basingstoke: Palgrave Macmillan. Wheaton, B. (2005). Understanding lifestyle sports: Consumption, identity and difference. New York: Routledge.
Abstract:
The current study was designed to test the effect of lateralized attention on prospective memory performance in a dichotic listening task. The practice phase of the experiment consisted of a semantic decision task during which participants were presented with different words on either side via headphones. Depending on the experimental condition, participants were required to focus on the words presented on the left or the right side and to decide whether these words were abstract or concrete. Thereafter, participants were informed about the prospective memory task: they were instructed to press a distinct key whenever they heard a word denoting an animal in the same task later in the experiment. Participants were explicitly informed that the prospective memory cues could appear on either side of the headphones. This was followed by a retention interval filled with unrelated tasks. Next, participants performed the prospective memory task. The results revealed more prospective hits for the attended side. This finding suggests that noticing a prospective memory cue is not an automatic process but requires attention.
Abstract:
Information systems (IS) outsourcing projects often fail to achieve their initial goals. To avoid project failure, managers need to design formal controls that meet the specific contextual demands of the project. However, the dynamic and uncertain nature of IS outsourcing projects makes it difficult to design such specific formal controls at the outset of a project. It is hence crucial to translate high-level project goals into specific formal controls during the course of a project. This study seeks to understand the underlying patterns of such translation processes. Based on a comparative case study of four outsourced software development projects, we inductively develop a process model that consists of three unique patterns. The process model shows that the performance implications of emergent controls with higher specificity depend on differences in the translation process. Specific formal controls have positive implications for goal achievement if only the stakeholder context is adapted, but negative implications if tasks are unintentionally adapted during translation; in the latter case, projects incrementally drift away from their initial direction. Our findings help to better understand control dynamics in IS outsourcing projects. We contribute to a process-theoretic understanding of IS outsourcing governance and derive implications for control theory and the IS project escalation literature.
Abstract:
Decision strategies aim at enabling reasonable decisions in cases of uncertain policy decision problems which do not meet the conditions for applying standard decision theory. This paper focuses on decision strategies that account for uncertainties by deciding whether a proposed list of policy options should be accepted or revised (scope strategies) and whether to decide now or later (timing strategies). They can be used in participatory approaches to structure the decision process. As a basis, we propose to classify the broad range of uncertainties affecting policy decision problems along two dimensions, source of uncertainty (incomplete information, inherent indeterminacy and unreliable information) and location of uncertainty (information about policy options, outcomes and values). Decision strategies encompass multiple and vague criteria to be deliberated in application. As an example, we discuss which decision strategies may account for the uncertainties related to nutritive technologies that aim at reducing methane (CH4) emissions from ruminants as a means of mitigating climate change, limiting our discussion to published scientific information. These considerations not only speak in favour of revising rather than accepting the discussed list of options, but also in favour of active postponement or semi-closure of decision-making rather than closure or passive postponement.
Abstract:
There are too many conflicting uses of the ocean in a time where resources are rapidly dwindling. Marine Spatial Planning is catching on globally, and may soon come to Long Island Sound, but it may be difficult to decide who gets to do what, where.
Abstract:
The tobacco-specific nitrosamine 4-(methylnitrosamino)-1-(3-pyridyl)-1-butanone (NNK) is a known lung carcinogen. Since the CBMN (cytokinesis-block micronucleus) assay has been found to be extremely sensitive to NNK-induced genetic damage, it is a potentially important factor for predicting lung cancer risk. However, the association between lung cancer and NNK-induced genetic damage measured by the CBMN assay has not been rigorously examined. This research develops a methodology to model the chromosomal changes under NNK-induced genetic damage in a logistic regression framework in order to predict the occurrence of lung cancer. Since these chromosomal changes are usually not observed for very long, owing to laboratory cost and time, a resampling technique was applied to generate a Markov chain of the normal and damaged cell states for each individual. A joint likelihood was established between the resampled Markov chains and a logistic regression model that includes the transition probabilities of the chain as covariates. Maximum likelihood estimation was applied to carry out the statistical tests for comparison, and the ability of this approach to increase the discriminating power to predict lung cancer was compared with that of a baseline "non-genetic" model. Our method offers a way to understand the association between dynamic cell information and lung cancer. Our study indicates that the extent of DNA damage/non-damage measured by the CBMN assay provides critical information for public health studies of lung cancer risk. This novel statistical method can simultaneously estimate the process of DNA damage/non-damage and its relationship with lung cancer for each individual.
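A hedged sketch of the modeling idea: estimate the two-state (normal/damaged) transition probabilities of each individual's resampled cell-state chain and use them as covariates in a logistic regression for lung cancer status. The data and helper names are hypothetical, and the dissertation embeds both steps in a single joint likelihood rather than this two-stage shortcut.

```python
# Two-state transition probabilities as logistic regression covariates.
import numpy as np
import statsmodels.api as sm

def transition_probs(states):
    """P(damaged|normal) and P(normal|damaged) from a 0/1 state sequence."""
    pairs = list(zip(states[:-1], states[1:]))
    n0 = max(sum(a == 0 for a, _ in pairs), 1)
    n1 = max(sum(a == 1 for a, _ in pairs), 1)
    p01 = sum(a == 0 and b == 1 for a, b in pairs) / n0
    p10 = sum(a == 1 and b == 0 for a, b in pairs) / n1
    return p01, p10

rng = np.random.default_rng(42)
chains = [rng.integers(0, 2, 50) for _ in range(40)]  # toy resampled chains
cancer = rng.integers(0, 2, 40)                       # toy case/control labels

X = sm.add_constant(np.array([transition_probs(c) for c in chains]))
fit = sm.Logit(cancer, X).fit(disp=0)
print(fit.params)  # intercept, coefficient of P(0->1), coefficient of P(1->0)
```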