929 results for Temporary pools
Abstract:
Brominated flame retardants (BFRs), including hexabromocyclododecane (HBCD) and polybrominated diphenyl ethers (PBDEs), are used to reduce the flammability of a multitude of electrical and electronic products, textiles and foams. The use of selected PBDEs has ceased; however, use of decaBDE and HBCD continues. While elevated concentrations of PBDEs in humans have been observed in Australia, no data are available on other BFRs such as HBCD. This study aimed to provide background HBCD concentrations from a representative sample of the Australian population, to assess temporal trends of HBCD, and to compare these with PBDE concentrations over a 16-year period. Samples of human milk collected in Australia from 1993 to 2009, primarily from primiparous mothers, were combined into 12 pools from 1993 (2 pools); 2001; 2002/2003 (4 pools); 2003/2004; 2006; 2007/2008 (2 pools); and 2009. Concentrations of ∑HBCD ranged from not quantified (nq) to 19 ng g−1 lipid, while α-HBCD and γ-HBCD ranged from nq to 10 ng g−1 lipid and nq to 9.2 ng g−1 lipid, respectively. β-HBCD was detected in only one sample, at 3.6 ng g−1 lipid, while ∑4PBDE ranged from 2.5 to 15.8 ng g−1 lipid. No temporal trend was apparent in HBCD concentrations in human milk collected in Australia from 1993 to 2009. In comparison, PBDE concentrations in human milk show a peak around 2002/03 (mean ∑4PBDEs = 9.6 ng g−1 lipid) and 2003/04 (12.4 ng g−1 lipid) followed by a decrease in 2007/08 (2.7 ng g−1 lipid) and 2009 (2.6 ng g−1 lipid). In human blood serum samples collected from the Australian population, PBDE concentrations did not vary greatly (p = 0.441) from 2002/03 to 2008/09. Continued monitoring of both human milk and serum is required to observe trends in the human body burden of HBCD and PBDEs following changes in usage.
Abstract:
After some years of remarkable growth, the scholarly field of Project Management (PM) research currently finds itself in a crucial stage of development. In this editorial, we analyse submissions to PM's premier specialty journal, the International Journal of Project Management, over the period 2007–2010, and argue that one of the most important ways in which PM research can further evolve is to pay more attention to the mundane, yet important, act of good reviewing — an activity that we believe has received relatively little attention in the PM community thus far. Let us begin by considering the crucial juncture at which PM, as a scholarly discipline, currently stands. On the one hand, the PM research field is characterized by signs of major progress. For one, there has been strong growth in terms of published output: recent years have seen the publication of three major edited volumes with a central focus on PM, published by top-tier publishers (Cattani et al., 2011, Kenis et al., 2009 and Morris et al., 2011); the PM/temporary organizations literature published in ISI-ranked peer-reviewed articles is growing exponentially (Bakker, 2010); and besides some of the long-standing PM specialty journals, the field has recently seen the rise of a number of new journals, including the International Journal of Managing Projects in Business, the International Journal of Project Organisation and Management, and the Journal of Project, Program, and Portfolio Management.
Abstract:
Despite the compelling case for moving towards cloud computing, the upstream oil & gas industry faces several technical challenges—most notably, a pronounced emphasis on data security, a reliance on extremely large data sets, and significant legacy investments in information technology (IT) infrastructure—that make a full migration to the public cloud difficult at present. Private and hybrid cloud solutions have consequently emerged within the industry to yield as much benefit from cloud-based technologies as possible while working within these constraints. This paper argues, however, that the move to private and hybrid clouds will very likely prove only to be a temporary stepping stone in the industry’s technological evolution. By presenting evidence from other market sectors that have faced similar challenges in their journey to the cloud, we propose that enabling technologies and conditions will probably fall into place in a way that makes the public cloud a far more attractive option for the upstream oil & gas industry in the years ahead. The paper concludes with a discussion about the implications of this projected shift towards the public cloud, and calls for more of the industry’s services to be offered through cloud-based “apps.”
Abstract:
Key distribution is one of the most challenging security issues in wireless sensor networks, where sensor nodes are randomly scattered over a hostile territory. In such a deployment scenario, there is no prior knowledge of the post-deployment configuration. For security solutions requiring pairwise keys, it is impossible to decide how to distribute key pairs to sensor nodes before deployment. Existing approaches to this problem assign more than one key, namely a key-chain, to each node. Key-chains are randomly drawn from a key-pool. Either two neighboring nodes have a key in common in their key-chains, or there is a path, called a key-path, between these two nodes where each pair of neighboring nodes on this path has a key in common. The problem in such a solution is to decide on the key-chain size and key-pool size so that every pair of nodes can establish a session key, directly or through a path, with high probability. The length of the key-path is the key factor for the efficiency of the design. This paper presents novel, deterministic and hybrid approaches based on Combinatorial Design for key distribution. In particular, several block design techniques are considered for generating the key-chains and the key-pools. Comparison to probabilistic schemes shows that our combinatorial approach produces better connectivity with smaller key-chain sizes.
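As a rough illustration of the trade-off the abstract describes, the connectivity of the purely probabilistic baseline (random key-chains drawn from a common key-pool, in the style of Eschenauer–Gligor) can be sketched as the probability that two nodes share at least one key. The pool and chain sizes below are hypothetical, not taken from the paper:

```python
from math import comb

def shared_key_probability(pool_size: int, chain_size: int) -> float:
    """Probability that two key-chains, each drawn uniformly at random
    without replacement from a pool of `pool_size` keys, share at least
    one key: 1 - C(P-k, k) / C(P, k)."""
    return 1 - comb(pool_size - chain_size, chain_size) / comb(pool_size, chain_size)

# Hypothetical sizes: a pool of 10,000 keys and chains of 75 keys each
p = shared_key_probability(10_000, 75)
```

Larger key-chains raise this probability but cost memory on each node; the paper's point is that deterministic block designs achieve better connectivity at smaller chain sizes than this random baseline.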
Abstract:
Population-representative data for dioxin and PCB congener concentrations are available for the Australian population based on measurements in age- and gender-specific serum pools [1]. Such data provide a basis for characterizing the mean concentrations of these compounds in the population, but do not provide information on the inter-individual variation in serum concentrations that may exist in the population within an age- and gender-specific group. Such variation may occur due to inter-individual differences in long-term exposure levels or elimination rates. Reference values are estimates of upper percentiles (often the 95th percentile) of measured values in a defined population that can be used to evaluate data from individuals in the population in order to identify concentrations that are elevated, for example, from occupational exposures [2]. The objective of this analysis is to estimate reference values corresponding to the 95th percentile (RV95s) for Australia on an age-specific basis for individual dioxin-like congeners based on measurements in serum pools from Toms and Mueller (2010).
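The abstract does not spell out the estimation method, but a common way to derive an upper percentile from a pooled (arithmetic-mean) measurement is to assume the individual concentrations are lognormally distributed with an assumed geometric standard deviation. The sketch below is illustrative only; the GSD value and pool mean are hypothetical, not figures from the study:

```python
from math import exp, log

Z95 = 1.645  # standard-normal 95th percentile

def rv95_from_pool_mean(pool_mean: float, gsd: float) -> float:
    """Illustrative RV95 under a lognormal assumption.

    A pooled serum measurement approximates the arithmetic mean, so the
    geometric mean is recovered as GM = mean / exp(ln(GSD)^2 / 2), and
    the 95th percentile is GM * GSD**Z95."""
    sigma = log(gsd)
    gm = pool_mean / exp(sigma ** 2 / 2)
    return gm * exp(Z95 * sigma)

# Hypothetical example: pool mean of 10 units, assumed GSD of 2.0
rv95 = rv95_from_pool_mean(10.0, 2.0)
```

The resulting RV95 is necessarily above the pool mean, with the gap governed entirely by the assumed inter-individual spread (the GSD).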
Abstract:
Aim. To develop and evaluate the implementation of a communication board for paramedics to use with patients as an augmentative or alternative communication tool to address communication needs of patients in the pre-hospital setting. Method. A double-sided A4-size communication board was designed specifically for use in the pre-hospital setting by the Queensland Ambulance Service and Disability and Community Care Services. One side of the board contains expressive messages that could be used by both the patient and paramedic. The other side contains messages to support patients’ understanding and interaction tips for the paramedic. The communication board was made available in every ambulance and patient transport vehicle in the Brisbane Region. Results. A total of 878 paramedics completed a survey that gauged which patient groups they might use the communication board with. The two most common groups were patients from culturally and linguistically diverse backgrounds and children. Staff reported feeling confident in using the board, and 72% of interviewed paramedics agreed that the communication board was useful for aiding communication with patients. Feedback from paramedics suggests that the board is simple to use, reduces patient frustration and improves communication. Conclusion. These results suggest that a communication board can be applied in the pre-hospital setting to support communication success with patients. What is known about the topic? It is imperative that communication between patient and paramedic is clear and effective. Research has shown that communication boards have been effective with people with temporary or permanent communication difficulties. What does this paper add? This is the first paper outlining the development and use of a communication board by paramedics in the pre-hospital setting in Australia. The paper details the design of the communication board for the unique pre-hospital environment. 
The paper provides some preliminary data on the use of the communication board with certain patient groups and its effectiveness as an alternative communication tool. What are the implications for practitioners? The findings support the use of the tool as a viable option in supporting the communication between paramedics and a range of patients. It is not suggested that this communication board will meet the complete communication needs of any individual in this environment, but it is hoped that the board’s presence within the Queensland Ambulance Service may result in paramedics introducing the board on occasions where communication with a patient is challenging.
Abstract:
Top lists of, and praise for, the economy's fastest growing firms abound in business media around the world. Similarly, in academic research there has been a tendency to equate firm growth with business success. This tendency appears to be particularly pronounced in, but not confined to, entrepreneurship research. In this study we critically examine this tendency to portray firm growth as more or less universally favorable. While several theories suggest that growth drives profitability, we first show that the available empirical evidence does not support the existence of a general, positive relationship between growth and profitability. Using the theoretical lens of the Resource-Based View (RBV), we then argue that sound growth usually starts with achieving sufficient levels of profitability. In summary, our theoretical argument is as follows: in a population of SMEs, superior profitability is likely to be indicative of having built a resource-based competitive advantage. Building such a valuable and hard-to-copy advantage may at first constrain growth. However, the underlying advantage itself, and the financial resources generated through high profitability, make it possible for firms in this situation to achieve sound and sustainable growth, which may require building a series of temporary advantages, without having to sacrifice profitability. By contrast, when firms strive for high growth starting from low profitability, the latter often indicates a lack of competitive advantage. Growth must therefore be achieved in head-to-head competition with equally attractive alternatives, leading to profitability deterioration rather than improvement. In addition, these low-profitability firms are unlikely to be able to finance strategies toward building valuable and difficult-to-imitate advantages while growing.
Abstract:
Genetically distinct checkpoints, activated as a consequence of either DNA replication arrest or ionizing radiation-induced DNA damage, integrate DNA repair responses into the cell cycle programme. The ataxia-telangiectasia mutated (ATM) protein kinase blocks cell cycle progression in response to DNA double strand breaks, whereas the related ATR is important in maintaining the integrity of the DNA replication apparatus. Here, we show that thymidine, which slows the progression of replication forks by depleting cellular pools of dCTP, induces a novel DNA damage response that, uniquely, depends on both ATM and ATR. Thymidine induces ATM-mediated phosphorylation of Chk2 and NBS1 and an ATM-independent phosphorylation of Chk1 and SMC1. AT cells exposed to thymidine showed decreased viability and failed to induce homologous recombination repair (HRR). Taken together, our results implicate ATM in the HRR-mediated rescue of replication forks impaired by thymidine treatment.
Abstract:
Background: Procedural sedation and analgesia (PSA) administered by nurses in the cardiac catheterisation laboratory (CCL) is unlikely to yield serious complications. However, the safety of this practice depends on timely identification and treatment of depressed respiratory function. Aim: To describe respiratory monitoring in the CCL. Methods: Retrospective medical record audit of adult patients who underwent a procedure in the CCLs of one private hospital in Brisbane during May and June 2010. An electronic database was used to identify subjects and an audit tool ensured data collection was standardised. Results: Nurses administered PSA during 172/473 (37%) procedures, including coronary angiographies, percutaneous coronary interventions, electrophysiology studies, radiofrequency ablations, cardiac pacemakers, implantable cardioverter defibrillators, temporary pacing leads and peripheral vascular interventions. Oxygen saturation was recorded during 160/172 (93%) procedures, respiration rate was recorded during 17/172 (10%) procedures, use of oxygen supplementation was recorded during 40/172 (23%) procedures, and 13/172 (7.5%; 95% CI = 3.59–11.41%) patients experienced oxygen desaturation. Conclusion: Although oxygen saturation was routinely documented, nurses did not regularly record respiration observations. It is likely that surgical draping and the requirement to minimise radiation exposure interfered with nurses' ability to observe respiration. Capnography could overcome these barriers to respiration assessment: its accurate measurement of exhaled carbon dioxide, coupled with an easily interpretable waveform output displaying a breath-by-breath account of ventilation, enables identification of respiratory depression in real time. Results of this audit emphasise the need to ascertain the clinical benefits associated with using capnography to assess ventilation during PSA in the CCL.
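The reported interval around the desaturation rate (13/172 = 7.5%) is close to a standard normal-approximation (Wald) confidence interval for a proportion, sketched below; the small discrepancy with the reported 3.59–11.41% suggests the authors may have used a slightly different method:

```python
from math import sqrt

def wald_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Normal-approximation (Wald) confidence interval for a proportion:
    p ± z * sqrt(p * (1 - p) / n)."""
    p = successes / n
    half = z * sqrt(p * (1 - p) / n)
    return p - half, p + half

# Desaturation events from the audit: 13 of 172 patients
lo, hi = wald_ci(13, 172)  # roughly 3.6% to 11.5%
```

With only 13 events, an exact (Clopper-Pearson) or Wilson interval would be a defensible alternative, which may account for the slightly narrower published bounds.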
Abstract:
Cardiovascular diseases are a leading cause of death throughout the developed world. With the demand for donor hearts far exceeding the supply, a bridge-to-transplant or permanent solution is required. This is currently achieved with ventricular assist devices (VADs), which can be used to assist the left ventricle (LVAD), right ventricle (RVAD), or both ventricles simultaneously (BiVAD). Earlier generation VADs were large, volume-displacement devices designed for temporary support until a donor heart was found. The latest generation of VADs use rotary blood pump technology, which improves device lifetime and the quality of life for end-stage heart failure patients. VADs are connected to the heart and greater vessels of the patient through specially designed tubes called cannulae. The inflow cannulae, which supply blood to the VAD, are usually attached to the left atrium or ventricle for LVAD support, and the right atrium or ventricle for RVAD support. Few studies have characterized the haemodynamic difference between the two cannulation sites, particularly with respect to rotary RVAD support. Inflow cannulae are usually made of metal or a semi-rigid polymer to prevent collapse with negative pressures. However, suction, and subsequent collapse, of the cannulated heart chamber can be a frequent occurrence, particularly with the relatively preload-insensitive rotary blood pumps. Suction events may be associated with endocardial damage, pump flow stoppages and ventricular arrhythmias. While several VAD control strategies are under development, these usually rely on potentially inaccurate sensors or somewhat unreliable inferred data to estimate preload. Fixation of the inflow cannula is usually achieved by suturing the cannula, often via a felt sewing ring, to the cannulated chamber. This technique extends the time on cardiopulmonary bypass, which is associated with several postoperative complications.
The overall objective of this thesis was to improve the placement and design of rotary LVAD and RVAD inflow cannulae to achieve enhanced haemodynamic performance, reduced incidence of suction events, reduced levels of postoperative bleeding and a faster implantation procedure. Specific objectives were: (i) in-vitro evaluation of LVAD and RVAD inflow cannula placement; (ii) design and in-vitro evaluation of a passive mechanism to reduce the potential for heart chamber suction; and (iii) design and in-vitro evaluation of a novel suture-less cannula fixation device. In order to complete in-vitro evaluation of VAD inflow cannulae, a mock circulation loop (MCL) was developed to accurately replicate the haemodynamics in the human systemic and pulmonary circulations. Validation of the MCL's haemodynamic performance, including the form and magnitude of pressure, flow and volume traces, was completed through comparisons with patient data and the literature. The MCL was capable of reproducing almost any healthy or pathological condition, and provided a useful tool to evaluate VAD cannulation and other cardiovascular devices. The MCL was used to evaluate inflow cannula placement for rotary VAD support. Left and right atrial and ventricular cannulation sites were evaluated under conditions of mild and severe heart failure. With a view to long-term LVAD support in the severe left heart failure condition, left ventricular inflow cannulation was preferred due to improved LVAD efficiency and reduced potential for thrombus formation. In the mild left heart failure condition, left atrial cannulation was preferred to provide an improved platform for myocardial recovery. Similar trends were observed with RVAD support, although to a lesser degree due to a smaller difference between right atrial and ventricular pressures. A compliant inflow cannula to prevent suction events was then developed and evaluated in the MCL.
As rotary LVAD or RVAD preload was reduced, suction events occurred in all instances with a rigid inflow cannula. Addition of the compliant segment eliminated suction events in all instances. This was due to passive restriction of the compliant segment as preload dropped, thus increasing the VAD circuit resistance and decreasing the VAD flow rate. Therefore, the compliant inflow cannula acted as a passive flow control / anti-suction system in LVAD and RVAD support. A novel suture-less inflow cannula fixation device was then developed to reduce implantation time and postoperative bleeding. The fixation device was evaluated for LVAD and RVAD support in cadaveric animal and human hearts attached to a MCL. LVAD inflow cannulation was achieved in under two minutes with the suture-less fixation device. No leakage through the interface between the suture-less fixation device and the myocardium was noted. Continued development and in-vivo evaluation of this device may result in an improved inflow cannulation technique with the potential for off-bypass insertion. Continued development of this research, in particular the compliant inflow cannula and the suture-less inflow cannulation device, will result in improved postoperative outcomes, life span and quality of life for end-stage heart failure patients.
Abstract:
In Social Science (Organization Studies, Economics, Management Science, Strategy, International Relations, Political Science…) the quest to address the question “what is a good practitioner?” has been around for centuries, with the underlying assumption that good practitioners should lead organizations to higher levels of performance. Hence, we should add, to ask “what is a good ‘captain’?” is not a new question! (e.g. Tsoukas & Cummings, 1997, p. 670; Söderlund, 2004, p. 190). This interrogation leads us to consider problems such as the relations between the dichotomies of theory and practice, rigor and relevance of research, and ways of knowing and forms of knowledge. On the one hand we face the “Enlightenment” assumptions underlying modern positivist Social science, grounded in the “unity-of-science dream of transforming and reducing all kinds of knowledge to one basic form and level” and in cause-effect relationships (Eikeland, 2012, p. 20), and on the other, the postmodern interpretivist proposal, with its “tendency to make all kinds of knowing equivalent” (Eikeland, 2012, p. 20). In the project management space, this questioning aims at addressing one of the fundamental problems in the field: projects still do not deliver their expected benefits and promises, and therefore the socio-economic good (Hodgson & Cicmil, 2007; Bredillet, 2010; Lalonde et al., 2012). The Cartesian tradition supporting projects research and practice for the last 60 years (Bredillet, 2010, p. 4) has led to the lack of relevance to practice of the current conceptual base of project management, despite the sum of research, the development of standards and best & good practices, and the related development of project management bodies of knowledge (Packendorff, 1995, p. 319–323; Cicmil & Hodgson, 2006, p. 2–6; Hodgson & Cicmil, 2007, p. 436–7; Winter et al., 2006, p. 638).
Referring to both Hodgson (2002) and Giddens (1993), we could say that those who expect a “social-scientific Newton” to revolutionize this young field “are not only waiting for a train that will not arrive, but are in the wrong station altogether” (Hodgson, 2002, p. 809; Giddens, 1993, p. 18). Meanwhile, in the postmodern stream, mainly rooted in the “practice turn” (e.g. Hällgren & Lindahl, 2012), the shift from methodological individualism to social viscosity, together with the advocated pluralism, reinforces the very “functional stupidity” (Alvesson & Spicer, 2012, p. 1194) that this stream aims at overcoming. We suggest here that addressing the question “what is a good PM?” requires a philosophy of practice perspective to complement the “usual” philosophy of science perspective. The questioning of the modern Cartesian tradition mirrors a similar one made within Social science (Say, 1964; Koontz, 1961, 1980; Menger, 1985; Warry, 1992; Rothbard, 1997a; Tsoukas & Cummings, 1997; Flyvbjerg, 2001; Boisot & McKelvey, 2010), calling for new thinking. In order to get outside the rationalist ‘box’, Toulmin (1990, p. 11), along with Tsoukas & Cummings (1997, p. 655), suggests a possible path, summarizing the thoughts of many authors: “It can cling to the discredited research program of the purely theoretical (i.e. “modern”) philosophy, which will end up by driving it out of business: it can look for new and less exclusively theoretical ways of working, and develop the methods needed for a more practical (“post-modern”) agenda; or it can return to its pre-17th century traditions, and try to recover the lost (“pre-modern”) topics that were side-tracked by Descartes, but can be usefully taken up for the future” (Toulmin, 1990, p. 11). Thus, paradoxically and interestingly, in their quest for the so-called post-modernism, many authors build on “pre-modern” philosophies such as the Aristotelian one (e.g. 
MacIntyre, 1985, 2007; Tsoukas & Cummings, 1997; Flyvbjerg, 2001; Blomquist et al., 2010; Lalonde et al., 2012). Perhaps because the post-modern stream emphasizes a dialogic process restricted to voice and textual representation, it limits the meaning of communicative praxis and weakens practice, turning attention away from more fundamental issues associated with problem-definition and knowledge-for-use in action (Tedlock, 1983, p. 332–4; Schrag, 1986, p. 30, 46–7; Warry, 1992, p. 157). Eikeland suggests that the Aristotelian “gnoseology allows for reconsidering and reintegrating ways of knowing: traditional, practical, tacit, emotional, experiential, intuitive, etc., marginalised and considered insufficient by modernist [and post-modernist] thinking” (Eikeland, 2012, p. 20–21). In contrast with modernist one-dimensional thinking and with relativist, pluralistic post-modernism, we suggest, in a turn to an Aristotelian pre-modern lens, re-conceptualising (“re” involving here a “re”-turn to pre-modern thinking) the “do”, and shifting the perspective from what a good PM is (philosophy of science lens) to what a good PM does (philosophy of practice lens) (Aristotle, 1926a). As Tsoukas & Cummings put it: “In the Aristotelian tradition to call something good is to make a factual statement. To ask, for example, ‘what is a good captain?’ is not to come up with a list of attributes that good captains share (as modern contingency theorists would have it), but to point out the things that those who are recognized as good captains do.” (Tsoukas & Cummings, 1997, p. 670) Thus, this conversation offers a dialogue and deliberation about a central question: what does a good project manager do? The conversation is organized around a critique of the underlying assumptions supporting the modern, post-modern and pre-modern relations to ways of knowing, forms of knowledge and “practice”.
Abstract:
This paper takes its root in a trivial observation: management approaches are unable to provide relevant guidelines for coping with the uncertainty, and the question of trust, in our modern worlds. Thus, managers look to reduce uncertainty through information-supported decision-making, sustained by ex-ante rationalization. They strive to achieve the best possible solution, stability, predictability, and control of the “future”. Hence, they turn to a plethora of “prescriptive panaceas” and “management fads” that promise simple solutions through best practices. However, these solutions are ineffective. They address only one part of a system (e.g. an organization) instead of the whole. They miss the interactions and interdependencies with other parts, leading to “suboptimization”. Classical cause-effect investigations and research are not very helpful in this regard. Where do we go from there? In this conversation, we want to challenge the assumptions supporting traditional management approaches and shed some light on the problem of management-discourse fads, using the concept of maturity and maturity models in the context of temporary organizations as support for reflection. The global economy is characterized by the use and development of standards, and compliance with standards as a practice is said to enable better decision-making by managers under uncertainty, control of complexity, and higher performance. Amongst the plethora of standards, organizational maturity and maturity models hold a specific place, due to a general belief in organizational performance as a dependent variable of continuous (business) process improvement, grounded in a kind of evolutionary metaphor. Our intention is neither to offer a new “evidence-based management fad” for practitioners, nor to suggest a research gap to scholars.
Rather, we want to open an assumption-challenging conversation with regard to mainstream approaches (neo-classical economics and organization theory), turning “our eyes away from the blinding light of eternal certitude towards the refracted world of turbid finitude” (Long, 2002, p. 44), which generates what Bernstein has named “Cartesian Anxiety” (Bernstein, 1983, p. 18), and to revisit the conceptualization of maturity and maturity models. We rely on conventions theory and a systemic-discursive perspective. These two lenses have information & communication and self-producing systems as common threads. Furthermore, the narrative approach is well suited to exploring complex ways of thinking about organizational phenomena as complex systems. This approach is relevant to our object of curiosity, i.e. the concept of maturity and maturity models, as maturity models (as standards) are discourses and systems of regulation. The main contribution of this conversation is the suggestion to move from a neo-classical “theory of the game”, aiming at making the complex world simpler in playing the game, to a “theory of the rules of the game”, aiming at influencing and challenging the rules of the game constitutive of maturity models (conventions, governing systems), making individual calculation compatible with social context, and making possible the coordination of relationships and cooperation between agents with divergent, or potentially divergent, interests and values. A second contribution is the reconceptualization of maturity as a structural coupling between conventions, rather than as an independent variable leading to organizational performance.
Abstract:
Migraine is a common neurological disorder characterised by temporary disabling attacks of severe head pain and associated disturbances. There is significant evidence to suggest a genetic aetiology to the disease; however, few causal mutations have been conclusively linked to the migraine subtypes Migraine with Aura (MA) or Migraine without Aura (MO). The Potassium Channel, Subfamily K, member 18 (KCNK18) gene, coding for the potassium channel TRESK, is the first gene in which a rare mutation resulting in a non-functional truncated protein has been identified and causally linked to MA in a multigenerational family. In this study, three common polymorphisms in the KCNK18 gene were analysed for genetic variation in an Australian case-control migraine population consisting of 340 migraine cases and 345 controls. No association was observed between the polymorphisms examined and the migraine phenotype, or with any haplotypes across the gene. Therefore, even though KCNK18 is the only gene to be causally linked to MA, our studies indicate that common genetic variation in the gene is not a contributor to MA.
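Case-control association of the kind reported here is typically assessed with a Pearson chi-square test on a 2×2 allele-count table (cases vs controls, minor vs major allele). The sketch below is illustrative only; the counts are hypothetical and not taken from the study:

```python
def chi2_2x2(a: int, b: int, c: int, d: int) -> float:
    """Pearson chi-square statistic for a 2x2 contingency table
    [[a, b], [c, d]], e.g. (minor, major) allele counts in cases
    vs controls: n * (ad - bc)^2 / ((a+b)(c+d)(a+c)(b+d))."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical allele counts: cases (minor, major) vs controls (minor, major)
stat = chi2_2x2(210, 470, 205, 485)
significant = stat > 3.84  # 5% critical value at 1 degree of freedom
```

With near-identical allele frequencies in the two groups, the statistic falls well below the critical value, which is the pattern behind a "no association observed" result such as the one reported.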