890 results for conducting
Abstract:
The unsteady MHD flow of an incompressible, viscous electrically conducting fluid contained between two torsionally oscillating eccentric disks has been investigated. The state of uniform rotation of the central region visualised in the steady flow is seen to be absent in the case of oscillatory flow.
Abstract:
This edition includes a diverse range of contributions that collectively illustrate two elevated concerns of critical Indigenous studies: first, an interest in establishing ways and means of conducting ethical research with Indigenous communities; and second, critically engaging with constructions of Indigeneity. The first article, by Craig Sinclair, Peter Keelan, Samuel Stokes, Annette Stokes and Christine Jefferies-Stokes, examines the increasingly popular use of participatory video (PV) as a means of engagement, in this case with children in remote Aboriginal communities as participants in health research. The authors note that, whilst not without methodological disadvantages, the PV method, with its flexibility to respond to community priorities, is particularly well suited to research with remote Aboriginal communities.
Location of concentrators in a computer communication network: a stochastic automation search method
Abstract:
The following problem is considered. Given the locations of the Central Processing Unit (CPU) and the terminals which have to communicate with it, the task is to determine the number and locations of the concentrators and to assign the terminals to the concentrators in such a way that the total cost is minimized. There is also a fixed cost associated with each concentrator, and there is an upper limit to the number of terminals which can be connected to a concentrator. The terminals can also be connected directly to the CPU. In this paper it is assumed that the concentrators can be located anywhere in the area A containing the CPU and the terminals, so this becomes a multimodal optimization problem. In the proposed algorithm a stochastic automaton is used as a search device to locate the minimum of the multimodal cost function. The proposed algorithm involves the following. The area A containing the CPU and the terminals is divided into an arbitrary number of regions (say K). An approximate value for the number of concentrators is assumed (say m); the optimum number is determined later by iteration. The m concentrators can be assigned to the K regions in m^K ways (m > K) or K^m ways (K > m). (All possible assignments are feasible, i.e. a region can contain 0, 1, …, m concentrators.) Each possible assignment is assumed to represent a state of the variable-structure stochastic automaton. To start with, all the states are assigned equal probabilities. At each stage of the search the automaton visits a state according to the current probability distribution. At each visit the automaton selects a 'point' inside that state with uniform probability. The cost associated with that point is calculated and the average cost of that state is updated. Then the probabilities of all the states are updated.
The probabilities are taken to be inversely proportional to the average costs of the states. After a certain number of searches the probabilities become stationary and the automaton visits a particular state again and again; the automaton is then said to have converged to that state. By conducting a local gradient search within that state, the exact locations of the concentrators are determined. This algorithm was applied to a set of test problems and the results were compared with those given by Cooper's (1964, 1967) EAC algorithm; on average, the proposed algorithm was found to perform better.
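The search loop described in the abstract can be sketched in a few lines. This is a hedged illustration, not the paper's implementation: the unit-square area, quadrant regions, terminal layout and distance-based cost function are all assumptions made for the example.

```python
import itertools
import math
import random

# Illustrative sketch of a stochastic-automaton search for concentrator
# locations. Geometry, costs and iteration counts are assumptions.
random.seed(0)

K, m = 4, 2                          # regions and assumed concentrator count
cpu = (0.5, 0.5)
terminals = [(random.random(), random.random()) for _ in range(10)]

# Area A = unit square, divided into K quadrant regions.
regions = [((0.0, 0.0), (0.5, 0.5)), ((0.5, 0.0), (1.0, 0.5)),
           ((0.0, 0.5), (0.5, 1.0)), ((0.5, 0.5), (1.0, 1.0))]

def sample_point(region):
    # Pick a point uniformly inside a rectangular region.
    (x0, y0), (x1, y1) = region
    return (random.uniform(x0, x1), random.uniform(y0, y1))

def cost(concentrators):
    # Each terminal connects to its nearest concentrator; each
    # concentrator links back to the CPU (simple distance-based cost).
    c = sum(min(math.dist(t, p) for p in concentrators) for t in terminals)
    return c + sum(math.dist(p, cpu) for p in concentrators)

# States: all K**m assignments of the m concentrators to the K regions.
states = list(itertools.product(range(K), repeat=m))
avg, visits = {}, {}

# Visit every state once so each has a defined average cost.
for s in states:
    avg[s] = cost([sample_point(regions[i]) for i in s])
    visits[s] = 1

# Search: visit states with probability inversely proportional to their
# running average cost, refining those averages as we go.
for _ in range(2000):
    s = random.choices(states, weights=[1.0 / avg[x] for x in states])[0]
    c = cost([sample_point(regions[i]) for i in s])
    visits[s] += 1
    avg[s] += (c - avg[s]) / visits[s]

best = min(states, key=avg.get)      # the state the automaton converges to
```

The automaton concentrates its visits on the assignment of regions with the lowest average cost; a local gradient search inside `best` (omitted here) would then pin down the exact concentrator coordinates, as the abstract describes.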
Abstract:
The National Energy Efficient Building Project (NEEBP) Phase One report, published in December 2014, investigated “process issues and systemic failures” in the administration of the energy performance requirements in the National Construction Code. It found that most stakeholders believed that under-compliance with these requirements is widespread across Australia, with similar issues being reported in all states and territories. The report found that many different factors were contributing to this outcome and, as a result, many recommendations were offered that together would be expected to remedy the systemic issues reported. To follow up on this Phase 1 report, three additional projects were commissioned as part of Phase 2 of the overall NEEBP project. This report deals with the development and piloting of an Electronic Building Passport (EBP) tool – a project undertaken jointly by pitt&sherry and a team at the Queensland University of Technology (QUT) led by Dr Wendy Miller. The other Phase 2 projects cover audits of Class 1 buildings and issues relating to building alterations and additions. The passport concept aims to provide all stakeholders with (controlled) access to the key documentation and information that they need to verify the energy performance of buildings. This trial project deals with residential buildings but in principle could apply to any building type. Nine councils were recruited to help develop and test a pilot electronic building passport tool. The participation of these councils – across all states – enabled an assessment of the extent to which these councils are currently utilising documentation to track the compliance of residential buildings with the energy performance requirements in the National Construction Code (NCC). Overall we found that none of the participating councils are currently compiling all of the energy performance-related documentation that would demonstrate code compliance.
The key reasons for this include: a major lack of clarity on precisely what documentation should be collected; cost and budget pressures; low public/stakeholder demand for the documentation; and a pragmatic judgement that non-compliance with any regulated documentation requirements represents a relatively low risk for them. For example, some councils reported producing documentation, such as certificates of final completion, only on demand. Only three of the nine council participants reported regularly conducting compliance assessments or audits utilising this documentation and/or inspections. Overall we formed the view that the documentation and information tracking processes operating within the building standards and compliance system are not working to assure compliance with the Code’s energy performance requirements. In other words, the Code, and its implementation under state and territory regulatory processes, is falling short as a ‘quality assurance’ system for consumers. As a result, it is likely that the new housing stock is under-performing relative to policy expectations, consuming unnecessary amounts of energy, imposing unnecessarily high energy bills on occupants, and generating unnecessary greenhouse gas emissions. At the same time, councils noted that the demand for documentation relating to building energy performance was low. All the participant councils in the EBP pilot agreed that documentation and information processes need to work more effectively if the potential regulatory and market drivers towards energy-efficient homes are to be harnessed. These findings are fully consistent with the Phase 1 NEEBP report. It was also agreed that an EBP system could potentially play an important role in improving documentation and information processes. However, only one of the participant councils indicated that they might adopt such a system on a voluntary basis.
The majority felt that such a system would only be taken up if it were:
- a nationally agreed system, imposed as a mandatory requirement under state or national regulation;
- capable of being used by multiple parties, including councils, private certifiers, building regulators, builders and energy assessors in particular; and
- fully integrated into their existing document management systems, or at least seamlessly compatible rather than a separate, unlinked tool.
Further, we note that the value of an EBP in capturing statistical information relating to the energy performance of buildings would be much greater if an EBP were adopted on a nationally consistent basis. Councils were clear that a key impediment to the take-up of an EBP system is that they are facing very considerable budget and staffing challenges. They report that they are often unable to meet all community demands from the resources available to them. Therefore they are unlikely to provide resources to support the roll-out of an EBP system on a voluntary basis. Overall, we conclude from this pilot that the public good would be well served if the Australian, state and territory governments continued to develop and implement an Electronic Building Passport system in a cost-efficient and effective manner. This development should occur with detailed input from building regulators, the Australian Building Codes Board (ABCB), councils and private certifiers in the first instance. This report provides a suite of recommendations (Section 7.2) designed to advance the development and guide the implementation of a national EBP system.
Abstract:
What is a miracle and what can we know about miracles? A discussion of miracles in anglophone philosophy of religion literature since the late 1960s. The aim of this study is to systematically describe and philosophically examine the anglophone discussion on the subject of miracles since the latter half of the 1960s. The study focuses on two salient questions: firstly, what I will term the conceptual-ontological question of the extent to which we can understand miracles and, secondly, the epistemological question of what we can know about miracles. My main purpose in this study is to examine the various viewpoints that have been submitted in relation to these questions, how they have been argued and on what presuppositions these arguments have been based. In conducting the study, the most salient dimension of the various discussions was found to relate to epistemological questions. In this regard, there was a notable confrontation between those scholars who accept miracles and those who are sceptical of them. On the conceptual-ontological side I recognised several different ways of expressing the concept of miracle. I systematised the discussion by demonstrating the philosophical boundaries between these various opinions. The first and main boundary was related to ontological knowledge. On one side of this boundary I placed the views which were based on realism and objectivism. The proponents of this view assumed that miraculousness is a real property of a miraculous event regardless of how we can perceive it. On the other side I put the views which tried to define miraculousness in terms of subjectivity, contextuality and epistemicity. Another essential boundary which shed light on the conceptual-ontological discussion was drawn in relation to two main views of nature. The realistic-particularistic view regards nature as a certain part of reality. The adherents of this presupposition postulate a supernatural sphere alongside nature.
Alternatively, the nominalist-universalist view understands nature without this kind of division. Nature is understood as the entire and infinite universe; the whole of reality. Other, less important boundaries which shed light on the conceptual-ontological discussion were noted in relation to views regarding the laws of nature, for example. I recognised that the most important differences between the epistemological approaches were in the different views of justification, rationality, truth and science. The epistemological discussion was divided into two sides, distinguished by their differing assumptions in relation to the need for evidence. Adherents of the first (and noticeably smaller) group did not see any epistemological need to reach a universal and common opinion about miracles. I discovered that these kinds of views, which I called non-objectivist, had subjectivist and so-called collectivist views of justification and a contextualist view of rationality. The second (and larger) group was mainly interested in discerning the grounds upon which to establish an objective and conclusive common view in relation to the epistemology of miracles. I called this kind of discussion an objectivist discussion and this kind of approach an evidentialist approach. Most of the evidentialists tried to defend miracles and the others attempted to offer evidence against miracles. Amongst both sides, there were many different variations according to emphasis and assumption over how they saw the possibilities to prove their own view. The common characteristic in all forms of evidentialism was a commitment to an objectivist notion of rationality and a universalistic notion of justification. Most evidentialists put their confidence in science in one way or another. 
Only a couple of philosophers represented the most moderate version of evidentialism; they tried to remove themselves from the apparent controversy and contextualised the different opinions in order to make some critical comments on them. I called this kind of approach a contextualising form of evidentialism. In the final part of the epistemological chapter, I examined the discussion about the evidential value of miracles, but nothing substantially new was discovered concerning the epistemological views of the authors.
Abstract:
Many educational researchers conducting studies in non-English-speaking settings attempt to report on their projects in English to boost their scholarly impact. This requires preparing and presenting translations of data collected from interviews and observations. This paper discusses the process and ethical considerations involved in this invisible methodological phase. The process includes activities, prior to data analysis and to its presentation, to be undertaken by the bilingual researcher as translator, in order to convey participants’ original meanings as well as to establish and fulfil translation ethics. This paper offers strategies to address such issues, the most appropriate translation method for qualitative study, and approaches to address political issues when presenting such data.
Abstract:
The variation of electrical resistivity of an insulator-conductor composite, namely, wax-graphite composite, with parameters such as volume fraction, grain size, and temperature has been studied. A model is proposed to explain the observed variations, which assumes that the texture of the composite consists of insulator granules coated with conducting particles. The resistivity of these materials is controlled mainly by the contact resistance between the conducting particles and the number of contacts each particle has with its neighbors. The variation of resistivity with temperature has also been explained with the help of this model and it is attributed to the change in contact area. Journal of Applied Physics is copyrighted by The American Institute of Physics.
Abstract:
Rationing healthcare in some form is inevitable, even in wealthy countries, because resources are scarce and demand for healthcare is always likely to exceed supply. This means that decision-makers must make choices about which health programs and initiatives should receive public funding and which ones should not. These choices are often difficult to make, particularly in Australia, because:
1. Make explicit rationing based on a national decision-making tool (such as Multi-criteria Decision Analysis) standard process in all jurisdictions.
2. Develop nationally consistent methods for conducting economic evaluation in health so that good-quality evidence on the relative efficiency of various programs and initiatives is generated.
3. Generate more economic evaluation evidence to inform rationing decisions.
4. Revise national health performance indicators so that they include true health system efficiency indicators, such as cost-effectiveness.
5. Apply the Comprehensive Management Framework used to evaluate items on the Medicare Benefits Schedule (MBS) to the Pharmaceutical Benefits Scheme (PBS) and the Prosthesis List to accelerate disinvestment from low-value drugs and prostheses.
6. Seek agreement among Commonwealth, state and territory governments to work together to undertake work similar to that of the National Institute for Health and Care Excellence in the United Kingdom and the Canadian Agency for Drugs and Technologies in Health.
Abstract:
The solution of the steady laminar incompressible nonsimilar magneto-hydrodynamic boundary layer flow and heat transfer problem with viscous dissipation for electrically conducting fluids over two-dimensional and axisymmetric bodies with pressure gradient and magnetic field has been presented. The partial differential equations governing the flow have been solved numerically using an implicit finite-difference scheme. The computations have been carried out for flow over a cylinder and a sphere. The results indicate that the magnetic field tends to delay or prevent separation. The heat transfer strongly depends on the viscous dissipation parameter. When the dissipation parameter is positive (i.e. when the temperature of the wall is greater than the freestream temperature) and exceeds a certain value, the hot wall ceases to be cooled by the stream of cooler air because the ‘heat cushion’ provided by the frictional heat prevents cooling whereas the effect of the magnetic field is to remove the ‘heat cushion’ so that the wall continues to be cooled. The results are found to be in good agreement with those of the local similarity and local nonsimilarity methods except near the point of separation, but they are in excellent agreement with those of the difference-differential technique even near the point of separation.
Abstract:
A vast number of public services and goods are contracted through procurement auctions. Therefore it is very important to design these auctions in an optimal way. Typically, we are interested in two different objectives. The first objective is efficiency. Efficiency means that the contract is awarded to the bidder that values it the most, which in the procurement setting means the bidder that has the lowest cost of providing a service with a given quality. The second objective is to maximize public revenue. Maximizing public revenue means minimizing the costs of procurement. Both of these goals are important from the welfare point of view. In this thesis, I analyze field data from procurement auctions and show how empirical analysis can be used to help design the auctions to maximize public revenue. In particular, I concentrate on how competition, which means the number of bidders, should be taken into account in the design of auctions. In the first chapter, the main policy question is whether the auctioneer should spend resources to induce more competition. The information paradigm is essential in analyzing the effects of competition. We talk of a private values information paradigm when the bidders know their valuations exactly. In a common value information paradigm, the information about the value of the object is dispersed among the bidders. With private values more competition always increases the public revenue, but with common values the effect of competition is uncertain. I study the effects of competition in the City of Helsinki bus transit market by conducting tests for common values. I also extend an existing test by allowing bidder asymmetry. The information paradigm seems to be that of common values. The bus companies that have garages close to the contracted routes are influenced more by the common value elements than those whose garages are further away.
Therefore, attracting more bidders does not necessarily lower procurement costs, and thus the City should not implement costly policies to induce more competition. In the second chapter, I ask how the auctioneer can increase its revenue by changing contract characteristics such as contract sizes and durations. I find that the City of Helsinki should shorten the contract duration in the bus transit auctions, because that would decrease the importance of common value components and cheaply increase entry, which would then have a more beneficial impact on the public revenue. Typically, cartels decrease the public revenue in a significant way. In the third chapter, I propose a new statistical method for detecting collusion and compare it with an existing test. I argue that my test is robust to unobserved heterogeneity, unlike the existing test. I apply both methods to procurement auctions that contract snow removal in schools of Helsinki. According to these tests, the bidding behavior of two of the bidders seems consistent with a contract allocation scheme.
Abstract:
Training for bodybuilding competition is clearly a serious business that inflicts serious demands on the competitor. Not only did Francis commit time and money to compete, but he also arguably put winning before his physical well-being—enduring pain and suffering from his injury. Bodybuilding may seem like an extreme example, but it is not the only activity in which people suffer in pursuit of their goals. Boxers fight each other in the ring; soccer players risk knee and ankle injuries, sometimes playing despite being hurt; and mountaineers risk their lives in dangerous climbs. In the arts there are many examples of people suffering to achieve their goals: Beethoven kept composing, conducting, and performing despite his hearing loss; van Gogh grappled with depression but kept painting, finding fame only posthumously; and Mozart lived the final years of his life impoverished but still composing. These examples show that many great achievements come at a price: severe suffering...
Abstract:
This study explores the decline of terrorism by conducting source-based case studies on two left-wing terrorist campaigns in the 1970s, those of the Rode Jeugd in the Netherlands and the Symbionese Liberation Army in the United States. The purpose of the case studies is to shed more light on the interplay of different external and internal factors in the development of terrorist campaigns. This is done by presenting the history of the two chosen campaigns as narratives from the participants’ points of view, based on interviews with participants and extensive archival material. Organizational resources and dynamics clearly influenced the course of the two campaigns, but in different ways. This divergence derives at least partly from dissimilarities in organizational design and the incentive structure. Comparison of even these two cases shows that organizations using terrorism as a strategy can differ significantly, even when they share ideological orientation, are of the same size and operate in the same time period. Theories on the dynamics of terrorist campaigns would benefit from being more sensitive to this. The study also highlights that the demise of a terrorist organization does not necessarily lead to the decline of the terrorist campaign. Therefore, research should look at the development of terrorist activity beyond the lifespan of a single organization. The collective ideological beliefs and goals functioned primarily as a sustaining force, a lens through which the participants interpreted all developments. On the other hand, it appears that the role of ideology should not be overstated. Namely, not all participants in the campaigns under study fully internalized the radical ideology. Rather, their participation was mainly based on their friendship with other participants. Instead of ideology per se, it is more instructive to look at how those involved described their organization, themselves and their role in the revolutionary struggle.
In both cases under study, the choice of the terrorist strategy was not merely a result of a cost-benefit calculation, but an important part of the participants’ self-image. Indeed, the way the groups portrayed themselves corresponded closely with the forms of action that they got involved in. Countermeasures and the lack of support were major reasons for the decline of the campaigns. However, what is noteworthy is that the countermeasures would not have had the same kind of impact had it not been for certain weaknesses of the groups themselves. Moreover, besides the direct impact the countermeasures had on the campaign, equally important was how they affected the attitudes of the larger left-wing community and the public in general. In this context, both the attitudes towards the terrorist campaign and the authorities were relevant to the outcome of the campaigns.
Abstract:
The present study examines empirically the inflation dynamics of the euro area. The focus of the analysis is on the role of expectations in the inflation process. In six articles we relax the rationality assumption and proxy expectations directly using OECD forecasts or Consensus Economics survey data. In the first four articles we estimate alternative Phillips curve specifications and find evidence that inflation cannot instantaneously adjust to changes in expectations. A possible departure of expectations from rationality seems not to be powerful enough to totally explain the persistence of euro area inflation in the New Keynesian framework. When expectations are measured directly, the purely forward-looking New Keynesian Phillips curve is outperformed by the hybrid Phillips curve with an additional lagged inflation term and the New Classical Phillips curve with a lagged expectations term. The results suggest that the euro area inflation process has become more forward-looking in the recent years of low and stable inflation. Moreover, in low inflation countries, the inflation dynamics have been more forward-looking already since the late 1970s. We find evidence of substantial heterogeneity of inflation dynamics across the euro area countries. Real time data analysis suggests that in the euro area real time information matters most in the expectations term in the Phillips curve and that the balance of expectations formation is more forward- than backward-looking. Vector autoregressive (VAR) models of actual inflation, inflation expectations and the output gap are estimated in the last two articles. The VAR analysis indicates that inflation expectations, which are relatively persistent, have a significant effect on output. However, expectations seem to react to changes in both output and actual inflation, especially in the medium term.
Overall, this study suggests that expectations play a central role in inflation dynamics, which should be taken into account in conducting monetary policy.
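For reference, the three specifications compared in this abstract can be written in a common notation. The symbols below are standard textbook ones, not necessarily the thesis's exact parameterisation: $\pi_t$ is inflation, $x_t$ a driving variable (output gap or real marginal cost), and $E$ the expectation operator, which the articles replace with directly measured OECD or Consensus Economics forecasts.

```latex
\begin{align*}
\pi_t &= \gamma_f\,E_t\pi_{t+1} + \lambda x_t
        && \text{(purely forward-looking New Keynesian)}\\
\pi_t &= \gamma_f\,E_t\pi_{t+1} + \gamma_b\,\pi_{t-1} + \lambda x_t
        && \text{(hybrid, with lagged inflation)}\\
\pi_t &= \gamma\,E_{t-1}\pi_t + \lambda x_t
        && \text{(New Classical, with lagged expectations)}
\end{align*}
```

The finding that the hybrid and New Classical forms outperform the purely forward-looking one corresponds, respectively, to estimates of $\gamma_b > 0$ and to the lagged-expectations term being significant.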
Abstract:
The respiratory chain is found in the inner mitochondrial membrane of higher organisms and in the plasma membrane of many bacteria. It consists of several membrane-spanning enzymes, which conserve the energy that is liberated from the degradation of food molecules as an electrochemical proton gradient across the membrane. The proton gradient can later be utilized by the cell for different energy requiring processes, e.g. ATP production, cellular motion or active transport of ions. The difference in proton concentration between the two sides of the membrane is a result of the translocation of protons by the enzymes of the respiratory chain, from the negatively charged (N-side) to the positively charged side (P-side) of the lipid bilayer, against the proton concentration gradient. The endergonic proton transfer is driven by the flow of electrons through the enzymes of the respiratory chain, from low redox-potential electron donors to acceptors of higher potential, and ultimately to oxygen. Cytochrome c oxidase is the last enzyme in the respiratory chain and catalyzes the reduction of dioxygen to water. The redox reaction is coupled to proton transport across the membrane by a yet unresolved mechanism. Cytochrome c oxidase has two proton-conducting pathways through which protons are taken up to the interior part of the enzyme from the N-side of the membrane. The K-pathway transfers merely substrate protons, which are consumed in the process of water formation at the catalytic site. The D-pathway transfers both substrate protons and protons that are pumped to the P-side of the membrane. This thesis focuses on the role of two conserved amino acids in proton translocation by cytochrome c oxidase, glutamate 278 and tryptophan 164. Glu278 is located at the end of the D-pathway and is thought to constitute the branching point for substrate and pumped protons. 
In this work, it was shown that although Glu278 has an important role in the proton transfer mechanism, its presence is not an obligatory requirement. Alternative structural solutions in the area around Glu278, much like the ones present in some distantly related heme-copper oxidases, could in the absence of Glu278 support the formation of a long hydrogen-bonded water chain through which proton transfer from the D-pathway to the catalytic site is possible. The other studied amino acid, Trp164, is hydrogen bonded to the ∆-propionate of heme a3 of the catalytic site. Mutation of this amino acid showed that it may be involved in regulation of proton access to a proton acceptor, a pump site, from which the proton later is expelled to the P-side of the membrane. The ion pair that is formed by the ∆-propionate of heme a3 and arginine 473 is likely to form a gate-like structure, which regulates proton mobility to the P-side of the membrane. The same gate may also be part of an exit path through which water molecules produced at the catalytically active site are removed towards the external side of the membrane. Time-resolved optical and electrometrical experiments with the Trp164 to phenylalanine mutant revealed a so far undetected step in the proton pumping mechanism. During the A to PR transition of the catalytic cycle, a proton is transferred from Glu278 to the pump site, located somewhere in the vicinity of the ∆-propionate of heme a3. A mechanism for proton pumping by cytochrome c oxidase is proposed on the basis of the presented results and the mechanism is discussed in relation to some relevant experimental data. A common proton pumping mechanism for all members of the heme-copper oxidase family is moreover considered.
Abstract:
The correct localization of proteins is essential for cell viability. In order to achieve correct protein localization to cellular membranes, conserved membrane targeting and translocation mechanisms have evolved. The focus of this work was membrane targeting and translocation of a group of proteins that circumvent the known targeting and translocation mechanisms, the C-tail anchored protein family. Members of this protein family carry out a wide range of functions, from protein translocation and recognition events preceding membrane fusion, to the regulation of programmed cell death. In this work, the mechanisms of membrane insertion and targeting of two C-tail anchored proteins were studied utilizing in vivo and in vitro methods, in yeast and mammalian cell systems. The proteins studied were cytochrome b(5), a well characterized C-tail anchored model protein, and N-Bak, a novel member of the Bcl-2 family of regulators of programmed cell death. Membrane insertion of cytochrome b(5) into the endoplasmic reticulum membrane was found to occur independently of the known protein conducting channels, through which signal peptide-containing polypeptides are translocated. In fact, the membrane insertion process was independent of any protein components and did not require energy. Instead membrane insertion was observed to be dependent on the lipid composition of the membrane. The targeting of N-Bak was found to depend on the cellular context. Either the mitochondrial or endoplasmic reticulum membranes were targeted, which resulted in morphological changes of the target membranes. These findings indicate the existence of a novel membrane insertion mechanism for C-tail anchored proteins, in which membrane integration of the transmembrane domain, and the translocation of C-terminal fragments, appears to be spontaneous. 
This mode of membrane insertion is regulated by the target membrane fluidity, which depends on the lipid composition of the bilayer, and the hydrophobicity of the transmembrane domain of the C-tail anchored protein, as well as by the availability of the C-tail for membrane integration. Together these mechanisms enable the cell to achieve spatial and temporal regulation of sub-cellular localization of C-tail anchored proteins.