423 results for STEPS


Relevance: 10.00%

Abstract:

This thesis is about the derivation of the addition law on an arbitrary elliptic curve and about efficiently adding points on such a curve using the derived addition law. The outcomes of this research guarantee practical speedups in higher-level operations which depend on point additions. In particular, the contributions immediately find applications in cryptology. Mastered by nineteenth-century mathematicians, the study of the theory of elliptic curves has been active for decades. Elliptic curves over finite fields made their way into public key cryptography in the late 1980s with independent proposals by Miller [Mil86] and Koblitz [Kob87]. Elliptic Curve Cryptography (ECC), following Miller's and Koblitz's proposals, employs the group of rational points on an elliptic curve in building discrete logarithm based public key cryptosystems. Starting from the late 1990s, the emergence of the ECC market has boosted research in computational aspects of elliptic curves. This thesis falls into this same area of research, where the main aim is to speed up the addition of rational points on an arbitrary elliptic curve (over a field of large characteristic). The outcomes of this work can be used to speed up applications which are based on elliptic curves, including cryptographic applications in ECC. The aforementioned goals of this thesis are achieved in five main steps. As the first step, this thesis brings together several algebraic tools in order to derive the unique group law of an elliptic curve. This step also includes an investigation of recent computer algebra packages with respect to their capabilities. Although the group law is unique, its evaluation can be performed using abundant (in fact infinitely many) formulae. As the second step, this thesis pursues the best formulae for efficient addition of points. In the third step, the group law is stated explicitly by handling all possible summands. The fourth step presents the algorithms to be used for efficient point additions. In the fifth and final step, optimized software implementations of the proposed algorithms are presented in order to show that the theoretical speedups of step four can be obtained in practice. In each of the five steps, this thesis focuses on five forms of elliptic curves over finite fields of large characteristic. These forms and their defining equations are:
(a) Short Weierstrass form, y^2 = x^3 + ax + b,
(b) Extended Jacobi quartic form, y^2 = dx^4 + 2ax^2 + 1,
(c) Twisted Hessian form, ax^3 + y^3 + 1 = dxy,
(d) Twisted Edwards form, ax^2 + y^2 = 1 + dx^2y^2,
(e) Twisted Jacobi intersection form, bs^2 + c^2 = 1, as^2 + d^2 = 1.
These forms are the most promising candidates for efficient computations and are therefore considered in this work. Nevertheless, the methods employed in this thesis are capable of handling arbitrary elliptic curves. From a high-level point of view, the following outcomes are achieved in this thesis.
- Related literature results are brought together and further revisited. For most of the cases, several missed formulae, algorithms, and efficient point representations are discovered.
- Analogies are made among all studied forms. For instance, it is shown that two sets of affine addition formulae are sufficient to cover all possible affine inputs, as long as the output is also an affine point, in any of these forms. In the literature, many special cases, especially interactions with points at infinity, were omitted from discussion. This thesis handles all of the possibilities.
- Several new point doubling/addition formulae and algorithms are introduced, which are more efficient than the existing alternatives in the literature. Most notably, the speed of the extended Jacobi quartic, twisted Edwards, and Jacobi intersection forms is improved. New unified addition formulae are proposed for the short Weierstrass form. New coordinate systems are studied for the first time.
- An optimized implementation is developed using a combination of generic x86-64 assembly instructions and the plain C language. The practical advantages of the proposed algorithms are supported by computer experiments.
- All formulae presented in the body of this thesis are checked for correctness using computer algebra scripts, together with details on register allocations.
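To make the affine group law concrete, the following minimal Python sketch adds and doubles points on a short Weierstrass curve y^2 = x^3 + ax + b over a small prime field, handling the point at infinity and the inverse-point case. The prime and the coefficients are illustrative choices of mine; the sketch does not reproduce the thesis's optimised formulae or coordinate systems.

```python
# Minimal sketch of affine point addition on a short Weierstrass curve
# y^2 = x^3 + a*x + b over GF(p). Parameters below are illustrative only.

p = 97          # small prime field, for demonstration
a, b = 2, 3     # curve coefficients (assumed non-singular: 4a^3 + 27b^2 != 0)

INF = None      # point at infinity (group identity)

def add(P, Q):
    """Add two affine points P and Q on the curve, handling all special cases."""
    if P is INF:
        return Q
    if Q is INF:
        return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return INF                                         # P + (-P) = O
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope (doubling)
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope (addition)
    x3 = (lam * lam - x1 - x2) % p
    y3 = (lam * (x1 - x3) - y1) % p
    return (x3, y3)

# Example: find a point on the curve and compute small multiples of it.
P = next((x, y) for x in range(p) for y in range(p)
         if (y * y - (x ** 3 + a * x + b)) % p == 0)
print(P, add(P, P), add(P, add(P, P)))
```

In projective and other extended coordinate systems the same chord-and-tangent law is evaluated without the costly field inversions used in this sketch, which is where the kind of speedups discussed in the thesis come from.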

Relevance: 10.00%

Abstract:

Thermogravimetric analysis-mass spectrometry, X-ray diffraction and scanning electron microscopy (SEM) were used to characterize eight kaolinite samples from China. The results show that the thermal decomposition occurs in three main steps: (a) desorption of water below 100 °C, (b) dehydration at about 225 °C, and (c) well-defined dehydroxylation at around 450 °C. It is also found that decarbonization took place at 710 °C due to the decomposition of the calcite impurity in the kaolin. The temperature of dehydroxylation of kaolinite is found to be influenced by the degree of disorder of the kaolinite structure, and the gases evolved in the decomposition process can vary because of the different amounts and kinds of impurities. It is evident from the mass spectra that the interlayer carbonate from the calcite impurity and organic carbon is released as CO2 at around 225, 350 and 710 °C in the kaolinite samples.
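As a point of reference for the dehydroxylation step, the theoretical mass loss for ideal kaolinite, Al2Si2O5(OH)4 -> Al2Si2O7 + 2 H2O, can be computed as below. This is a generic stoichiometric sketch of mine, not a calculation from the paper; measured TG losses will differ because of adsorbed water, calcite and other impurities noted above.

```python
# Theoretical TG mass loss for kaolinite dehydroxylation,
# Al2Si2O5(OH)4 -> Al2Si2O7 (metakaolin) + 2 H2O.
# Assumes ideal, impurity-free kaolinite; illustrative only.

M = {"Al": 26.98, "Si": 28.09, "O": 16.00, "H": 1.008}  # approximate molar masses, g/mol

m_kaolinite = 2 * M["Al"] + 2 * M["Si"] + 9 * M["O"] + 4 * M["H"]
m_water_lost = 2 * (2 * M["H"] + M["O"])

loss_percent = 100.0 * m_water_lost / m_kaolinite
print(f"theoretical dehydroxylation mass loss: {loss_percent:.1f} %")  # about 14 %
```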

Relevance: 10.00%

Abstract:

Two kinds of coal-bearing kaolinite from China were analysed by X-ray diffraction (XRD), thermogravimetric analysis-mass spectrometry (TG-MS) and infrared emission spectroscopy (IES). Thermal decomposition occurs in a series of steps attributed to (a) desorption of water, at 68 °C for the Datong coal-bearing strata kaolinite and 56 °C for the Xiaoxian kaolinite, with mass losses of 0.36 % and 0.51 %; (b) decarbonization, at 456 °C for the Datong kaolinite and 431 °C for the Xiaoxian kaolinite; and (c) dehydroxylation, which takes place in two steps at 589 and 633 °C for the Datong kaolinite and at 507 °C and 579 °C for the Xiaoxian kaolinite. These minerals were further characterised by IES. Well-defined hydroxyl stretching bands at around 3695, 3679, 3652 and 3625 cm-1 are observed. At 650 °C all intensity in these bands is lost, in harmony with the thermal analysis results. Characteristic functional groups from coal are observed at 1918, 1724 and 1459 cm-1. The intensity of these bands decreases with thermal treatment and is lost by 700 °C.

Relevance: 10.00%

Abstract:

Aim. The paper is a report of a study to demonstrate how the use of schematics can provide procedural clarity and promote rigour in the conduct of case study research. Background. Case study research is a methodologically flexible approach to research design that focuses on a particular case – whether an individual, a collective or a phenomenon of interest. It is known as the 'study of the particular' for its thorough investigation of particular, real-life situations and is gaining increased attention in nursing and social research. However, the methodological flexibility it offers can leave the novice researcher uncertain of suitable procedural steps required to ensure methodological rigour. Method. This article provides a real example of a case study research design that utilizes schematic representation drawn from a doctoral study of the integration of health promotion principles and practices into a palliative care organization. Discussion. The issues discussed are: (1) the definition and application of case study research design; (2) the application of schematics in research; (3) the procedural steps and their contribution to the maintenance of rigour; and (4) the benefits and risks of schematics in case study research. Conclusion. The inclusion of visual representations of design with accompanying explanatory text is recommended in reporting case study research methods.

Relevance: 10.00%

Abstract:

• The doctrine of double effect is an exception to the general rule that taking active steps that end life is unlawful.
• The essence of the doctrine at common law is intention.
• Hastening a patient’s death through palliative care will be lawful provided the primary intention is to relieve pain, and not cause death, even if that death is foreseen.
• Some States have enacted legislative excuses that deal with the provision of palliative care.
• These statutory excuses tend to be stricter than the common law as they impose other requirements in addition to having an appropriate intent, such as adherence to some level of recognised medical practice.

Relevance: 10.00%

Abstract:

Draglines are massive machines commonly used in surface mining to strip overburden, revealing the targeted minerals for extraction. Automating some or all of the phases of operation of these machines offers the potential for significant productivity and maintenance benefits. The mining industry has a history of slow uptake of automation systems due to the challenges contained in the harsh, complex, three-dimensional (3D), dynamically changing mine operating environment. Robotics as a discipline is finally starting to gain acceptance as a technology with the potential to assist mining operations. This article examines the evolution of robotic technologies applied to draglines in the form of machine-embedded intelligent systems. Results from this work include a production trial in which 250,000 tons of material was moved autonomously, experiments demonstrating steps towards full autonomy, and tele-excavation experiments in which a dragline in Australia was tasked by an operator in the United States.

Relevance: 10.00%

Abstract:

Insight into the unique structure of layered double hydroxides has been obtained using a combination of X-ray diffraction and thermal analysis. Indium containing hydrotalcites of formula Mg4In2(CO3)(OH)12•4H2O (2:1 In-LDH) through to Mg8In2(CO3)(OH)18•4H2O (4:1 In-LDH) with variation in the Mg:In ratio have been successfully synthesised. The d(003) spacing varied from 7.83 Å for the 2:1 LDH to 8.15 Å for the 3:1 indium containing layered double hydroxide. Distinct mass loss steps attributed to dehydration, dehydroxylation and decarbonation are observed for the indium containing hydrotalcite. Dehydration occurs over the temperature range ambient to 205 °C. Dehydroxylation takes place in a series of steps over the 238 to 277 °C temperature range. Decarbonation occurs between 763 and 795 °C. The dehydroxylation and decarbonation steps depend upon the Mg:In ratio. The formation of indium containing hydrotalcites and their thermal activation provides a method for the synthesis of indium oxide based catalysts.

Relevance: 10.00%

Abstract:

Speeding remains a significant contributing factor to road trauma internationally, despite increasingly sophisticated speed management strategies being adopted around the world. Increases in travel speed are associated with increases in crash risk and crash severity. As speed choice is a voluntary behaviour, driver perceptions are important to our understanding of speeding and, importantly, to designing effective behavioural countermeasures. The four studies conducted in this program of research represent a comprehensive approach to examining psychosocial influences on driving speeds in two countries that are at very different levels of road safety development: Australia and China. Akers’ social learning theory (SLT) was selected as the theoretical framework underpinning this research and guided the development of key research hypotheses. This theory was chosen because of its ability to encompass psychological, sociological, and criminological perspectives in understanding behaviour, each of which has relevance to speeding. A mixed-method design was used to explore the personal, social, and legal influences on speeding among car drivers in Queensland (Australia) and Beijing (China). Study 1 was a qualitative exploration, via focus group interviews, of speeding among 67 car drivers recruited from south east Queensland. Participants were assigned to groups based on their age and gender, and additionally, according to whether they self-identified as speeding excessively or rarely. This study aimed to elicit information about how drivers conceptualise speeding as well as the social and legal influences on driving speeds. The findings revealed a wide variety of reasons and circumstances that appear to be used as personal justifications for exceeding speed limits. Driver perceptions of speeding as personally and socially acceptable, as well as safe and necessary were common. Perceptions of an absence of danger associated with faster driving speeds were evident, particularly with respect to driving alone. An important distinction between the speed-based groups related to the attention given to the driving task. Rare speeders expressed strong beliefs about the need to be mindful of safety (self and others) while excessive speeders referred to the driving task as automatic, an absent-minded endeavour, and to speeding as a necessity in order to remain alert and reduce boredom. For many drivers in this study, compliance with speed limits was expressed as discretionary rather than mandatory. Social factors, such as peer and parental influence were widely discussed in Study 1 and perceptions of widespread community acceptance of speeding were noted. In some instances, the perception that ‘everybody speeds’ appeared to act as one rationale for the need to raise speed limits. Self-presentation, or wanting to project a positive image of self was noted, particularly with respect to concealing speeding infringements from others to protect one’s image as a trustworthy and safe driver. The influence of legal factors was also evident. Legal sanctions do not appear to influence all drivers to the same extent. For instance, fear of apprehension appeared to play a role in reducing speeding for many, although previous experiences of detection and legal sanctions seemed to have had limited influence on reducing speeding among some drivers. Disregard for sanctions (e.g., driving while suspended), fraudulent demerit point use, and other strategies to avoid detection and punishment were widely and openly discussed. 
In Study 2, 833 drivers were recruited from roadside service stations in metropolitan and regional locations in Queensland. A quantitative research strategy assessed the relative contribution of personal, social, and legal factors to recent and future self-reported speeding (i.e., frequency of speeding and intentions to speed in the future). Multivariate analyses examining a range of factors drawn from SLT revealed that factors including self-identity (i.e., identifying as someone who speeds), favourable definitions (attitudes) towards speeding, personal experiences of avoiding detection and punishment for speeding, and perceptions of family and friends as accepting of speeding were all significantly associated with greater self-reported speeding. Study 3 was an exploratory, qualitative investigation of psychosocial factors associated with speeding among 35 Chinese drivers who were recruited from the membership of a motoring organisation and a university in Beijing. Six focus groups were conducted to explore similar issues to those examined in Study 1. The findings of Study 3 revealed many similarities with respect to the themes that arose in Australia. For example, there were similarities regarding personal justifications for speeding, such as the perception that posted limits are unreasonably low, the belief that individual drivers are able to determine safe travel speeds according to personal comfort with driving fast, and the belief that drivers possess adequate skills to control a vehicle at high speed. Strategies to avoid detection and punishment were also noted, though they appeared more widespread in China and also appeared, in some cases, to involve the use of a third party, a topic that was not reported by Australian drivers. Additionally, higher perceived enforcement tolerance thresholds were discussed by Chinese participants. Overall, the findings indicated perceptions of a high degree of community acceptance of speeding and a perceived lack of risk associated with speeds that were well above posted speed limits. Study 4 extended the exploratory research phase in China with a quantitative investigation involving 299 car drivers recruited from car washes in Beijing. Results revealed a relatively inexperienced sample with less than 5 years driving experience, on average. One third of participants perceived that the certainty of penalties when apprehended was low and a similar proportion of Chinese participants reported having previously avoided legal penalties when apprehended for speeding. Approximately half of the sample reported that legal penalties for speeding were ‘minimally to not at all’ severe. Multivariate analyses revealed that past experiences of avoiding detection and punishment for speeding, as well as favourable attitudes towards speeding, and perceptions of strong community acceptance of speeding were most strongly associated with greater self-reported speeding in the Chinese sample. Overall, the results of this research make several important theoretical contributions to the road safety literature. Akers’ social learning theory was found to be robust across cultural contexts with respect to speeding; similar amounts of variance were explained in self-reported speeding in the quantitative studies conducted in Australia and China. Historically, SLT was devised as a theory of deviance and posits that deviance and conformity are learned in the same way, with the balance of influence stemming from the ways in which behaviour is rewarded and punished (Akers, 1998). 
This perspective suggests that those who speed and those who do not are influenced by the same mechanisms. The inclusion of drivers from both ends of the ‘speeding spectrum’ in Study 1 provided an opportunity to examine the wider utility of SLT across the full range of the behaviour. One may question the use of a theory of deviance to investigate speeding, a behaviour that could, arguably, be described as socially acceptable and prevalent. However, SLT seemed particularly relevant to investigating speeding because of its inclusion of association, imitation, and reinforcement variables which reflect the breadth of factors already found to be potentially influential on driving speeds. In addition, driving is a learned behaviour requiring observation, guidance, and practice. Thus, the reinforcement and imitation concepts are particularly relevant to this behaviour. Finally, current speed management practices are largely enforcement-based and rely on the principles of behavioural reinforcement captured within the reinforcement component of SLT. Thus, the application of SLT to a behaviour such as speeding offers promise in advancing our understanding of the factors that influence speeding, as well as extending our knowledge of the application of SLT. Moreover, SLT could act as a valuable theoretical framework with which to examine other illegal driving behaviours that may not necessarily be seen as deviant by the community (e.g., mobile phone use while driving). This research also made unique contributions to advancing our understanding of the key components and the overall structure of Akers’ social learning theory. The broader SLT literature is lacking in terms of a thorough structural understanding of the component parts of the theory. For instance, debate exists regarding the relevance of, and necessity for including broader social influences in the model as captured by differential association. In the current research, two alternative SLT models were specified and tested in order to better understand the nature and extent of the influence of differential association on behaviour. Importantly, the results indicated that differential association was able to make a unique contribution to explaining self-reported speeding, thereby negating the call to exclude it from the model. The results also demonstrated that imitation was a discrete theoretical concept that should also be retained in the model. The results suggest a need to further explore and specify mechanisms of social influence in the SLT model. In addition, a novel approach was used to operationalise SLT variables by including concepts drawn from contemporary social psychological and deterrence-based research to enhance and extend the way that SLT variables have traditionally been examined. Differential reinforcement was conceptualised according to behavioural reinforcement principles (i.e., positive and negative reinforcement and punishment) and incorporated concepts of affective beliefs, anticipated regret, and deterrence-related concepts. Although implicit in descriptions of SLT, little research has, to date, made use of the broad range of reinforcement principles to understand the factors that encourage or inhibit behaviour. This approach has particular significance to road user behaviours in general because of the deterrence-based nature of many road safety countermeasures. The concept of self-identity was also included in the model and was found to be consistent with the definitions component of SLT. 
A final theoretical contribution was the specification and testing of a full measurement model prior to model testing using structural equation modelling. This process is recommended in order to reduce measurement error by providing an examination of the psychometric properties of the data prior to full model testing. Despite calls for such work for a number of decades, the current work appears to be the only example of a full measurement model of SLT. There were also a number of important practical implications that emerged from this program of research. Firstly, perceptions regarding speed enforcement tolerance thresholds were highlighted as a salient influence on driving speeds in both countries. The issue of enforcement tolerance levels generated considerable discussion among drivers in both countries, with Australian drivers reporting lower perceived tolerance levels than Chinese drivers. It was clear that many drivers used the concept of an enforcement tolerance in determining their driving speed, primarily with the desire to drive faster than the posted speed limit, yet remaining within a speed range that would preclude apprehension by police. The quantitative results from Studies 2 and 4 added support to these qualitative findings. Together, the findings supported previous research and suggested that a travel speed may not be seen as illegal until that speed reaches a level over the prescribed enforcement tolerance threshold. In other words, the enforcement tolerance appears to act as a ‘de facto’ speed limit, replacing the posted limit in the minds of some drivers. The findings from the two studies conducted in China (Studies 3 and 4) further highlighted the link between perceived enforcement tolerances and a ‘de facto’ speed limit. Drivers openly discussed driving at speeds that were well above posted speed limits and some participants noted their preference for driving at speeds close to ‘50% above’ the posted limit. This preference appeared to be shaped by the perception that the same penalty would be imposed if apprehended, irrespective of what speed they were travelling (at least up to 50% above the limit). Further research is required to determine whether the perceptions of Chinese drivers are mainly influenced by the Law of the People’s Republic of China or by operational practices. Together, the findings from both studies in China indicate that there may be scope to refine enforcement tolerance levels, as has happened in other jurisdictions internationally over time, in order to reduce speeding. Any attempts to do so would likely be assisted by the provision of information about the legitimacy and purpose of speed limits as well as risk factors associated with speeding because these issues were raised by Chinese participants in the qualitative research phase. Another important practical implication of this research for speed management in China is the way in which penalties are determined. Chinese drivers described perceptions of unfairness and a lack of transparency in the enforcement system because they were unsure of the penalty that they would receive if apprehended. Steps to enhance the perceived certainty and consistency of the system to promote a more equitable approach to detection and punishment would appear to be welcomed by the general driving public and would be more consistent with the intended theoretical (deterrence) basis that underpins the current speed enforcement approach. The use of mandatory, fixed penalties may assist in this regard.
In many countries, speeding attracts penalties that are dependent on the severity of the offence. In China, there may be safety benefits gained from the introduction of a similar graduated scale of speeding penalties and fixed penalties might also help to address the issue of uncertainty about penalties and related perceptions of unfairness. Such advancements would be in keeping with the principles of best practice for speed management as identified by the World Health Organisation. Another practical implication relating to legal penalties, and applicable to both cultural contexts, relates to the issues of detection and punishment avoidance. These two concepts appeared to strongly influence speeding in the current samples. In Australia, detection avoidance strategies reported by participants generally involved activities that are not illegal (e.g., site learning and remaining watchful for police vehicles). The results from China were similar, although a greater range of strategies were reported. The most common strategy reported in both countries for avoiding detection when speeding was site learning, or familiarisation with speed camera locations. However, a range of illegal practices were also described by Chinese drivers (e.g., tampering with or removing vehicle registration plates so as to render the vehicle unidentifiable on camera and use of in-vehicle radar detectors). With regard to avoiding punishment when apprehended, a range of strategies were reported by drivers from both countries, although a greater range of strategies were reported by Chinese drivers. As the results of the current research indicated that detection avoidance was strongly associated with greater self-reported speeding in both samples, efforts to reduce avoidance opportunities are strongly recommended. The practice of randomly scheduling speed camera locations, as is current practice in Queensland, offers one way to minimise site learning. The findings of this research indicated that this practice should continue. However, they also indicated that additional strategies are needed to reduce opportunities to evade detection. The use of point-to-point speed detection (also known as sectio

Relevance: 10.00%

Abstract:

The increase of buyer-driven supply chains, outsourcing and other forms of non-traditional employment has resulted in challenges for labour market regulation. One business model which has created substantial regulatory challenges is supply chains. The supply chain model involves retailers purchasing products from brand corporations, who then outsource the manufacturing of the work to traders who contract with factories or outworkers who actually manufacture the clothing and textiles. This business model results in time and cost pressures being pushed down the supply chain, which has resulted in sweatshops where workers systematically have their labour rights violated. Literally millions of workers work in dangerous workplaces where thousands are killed or permanently disabled every year. This thesis has analysed possible regulatory responses to provide workers with a right to safety and health in supply chains which provide products for Australian retailers. This thesis will use a human rights standard to determine whether Australia is discharging its human rights obligations in its approach to combating domestic and foreign labour abuses. It is beyond this thesis to analyse Occupational Health and Safety (OHS) laws in every jurisdiction. Accordingly, this thesis will focus upon Australian domestic laws and laws in one of Australia’s major trading partners, the People’s Republic of China (China). It is hypothesised that Australia is currently breaching its human rights obligations through failing to adequately regulate employees’ safety at work in Australian-based supply chains. To prove this hypothesis, this thesis will adopt a three-phase approach to analysing Australia’s regulatory responses. Phase 1 will identify the standard by which Australia’s regulatory approach to employees’ health and safety in supply chains can be judged. This phase will focus on analysing how the workers’ right to safety, as a human right, imposes a moral obligation on Australia to take reasonably practicable steps to regulate Australian-based supply chains. This will form a human rights standard against which Australia’s conduct can be judged. Phase 2 focuses upon the current regulatory environment. If existing regulatory vehicles adequately protect the health and safety of employees, then Australia will have discharged its obligations through simply maintaining the status quo. Australia currently regulates OHS through a combination of ‘hard law’ and ‘soft law’ regulatory vehicles. The first part of Phase 2 analyses the effectiveness of traditional OHS laws in Australia and in China. The final part of Phase 2 then analyses the effectiveness of the major soft law vehicle, ‘Corporate Social Responsibility’ (CSR). The fact that employees are working in unsafe working conditions does not mean Australia is breaching its human rights obligations. Australia is only required to take reasonably practicable steps to ensure human rights are realized. Phase 3 identifies four regulatory vehicles to determine whether they would assist Australia in discharging its human rights obligations. Phase 3 then analyses whether Australia could unilaterally introduce supply chain regulation to regulate domestic and extraterritorial supply chains. Phase 3 also analyses three public international law regulatory vehicles. This chapter considers the ability of the United Nations Global Compact, the ILO’s Better Factory Project and a bilateral agreement to improve the detection and enforcement of workers’ right to safety and health.

Relevance: 10.00%

Abstract:

Throughout history, developments in medicine have aimed to improve patient quality of life and reduce the trauma associated with surgical treatment. Surgical access to internal organs and bodily structures has traditionally been via large incisions. Endoscopic surgery presents a technique for surgical access via small (10 mm) incisions by utilising a scope and camera for visualisation of the operative site. Endoscopy presents enormous benefits for patients in terms of lower post-operative discomfort and reduced recovery and hospitalisation time. Since the first gall bladder extraction operation was performed in France in 1987, endoscopic surgery has been embraced by the international medical community. With the adoption of the new technique, new problems never previously encountered in open surgery were revealed. One such problem is that the removal of large tissue specimens and organs is restricted by the small incision size. Instruments have been developed to address this problem; however, none of the devices provide a totally satisfactory solution. They have a number of critical weaknesses:
- The size of the access incision has to be enlarged, thereby compromising the entire endoscopic approach to surgery.
- The physical quality of the specimen extracted is very poor and is not suitable for conducting the necessary post-operative pathological examinations.
- The safety of both the patient and the physician is jeopardised.
The problem of tissue and organ extraction at endoscopy is investigated and addressed. In addition to background information covering endoscopic surgery, this thesis describes the entire approach to the design problem and the steps taken before arriving at the final solution. This thesis contributes to the body of knowledge associated with the development of endoscopic surgical instruments. A new product capable of extracting large tissue specimens and organs in endoscopy is the final outcome of the research.

Relevance: 10.00%

Abstract:

This study, to elucidate the role of des(1-3)IGF-I in the maturation of IGF-I, used two strategies. The first was to detect the presence of enzymes in tissues which would act on IGF-I to produce des(1-3)IGF-I, and the second was to detect the potential products of such enzymic activity, namely Gly-Pro-Glu (GPE), Gly-Pro (GP) and des(1-3)IGF-I. No neutral tripeptidyl peptidase (TPP II), which would release the tripeptide GPE from IGF-I, was detected in brain, urine, or in red or white blood cells. The TPP-like activity which was detected was attributed to the combined action of a dipeptidyl peptidase (DPP IV) and an aminopeptidase (APA). A true TPP II was, however, detected in platelets. Two purified TPP II enzymes were investigated but they did not release GPE from IGF-I under a variety of conditions. Consequently, TPP II seemed unlikely to participate in the formation of des(1-3)IGF-I. In contrast, an acidic tripeptidyl peptidase activity (TPP I) was detected in brain and colostrum, the former with a pH optimum of 4.5 and the latter 3.8. It seems likely that such an enzyme would participate in the formation of des(1-3)IGF-I in these tissues in vitro, i.e. that des(1-3)IGF-I may have been produced as an artifact in the isolation of IGF-I from brain and colostrum under acidic conditions. This contrasts with suggestions of an in vivo role for des(1-3)IGF-I, as reported by others. The activity of a dipeptidyl peptidase IV (DPP IV) from urine, which should release the dipeptide GP from IGF-I, was assessed under a variety of conditions and with a variety of additives and potential enzyme stimulants, but there was no release of GP. The DPP IV also exhibited a transferase activity with synthetic substrates in the presence of dipeptides, at lower concentrations than previously reported for other acceptors or other proteolytic enzymes. In addition, a low concentration of a product, possibly the tetrapeptide Gly-Pro-Gly-Leu, was detected with the action of the enzyme on IGF-I in the presence of the dipeptide Gly-Leu. As part of attempts to detect tissue production of des(1-3)IGF-I, a monoclonal antibody (MAb), directed towards the GPE end of IGF-I, was produced by immunisation with a 10-mer covalently attached to a carrier protein. By the use of indirect ELISA and inhibitor studies, the MAb was shown to selectively recognise peptides with an N-terminal GPE sequence, and was applied to the indirect detection of des(1-3)IGF-I. The concentration of GPE in brain, measured by mass spectrometry (MS), was low, and the concentration of total IGF-I (measured by ELISA with a commercial polyclonal antibody [PAb]) was 40 times higher at 50 nmol/kg. This, also, was not consistent with the action of a tripeptidyl peptidase in brain that converted all IGF-I to des(1-3)IGF-I plus GPE. Contrasting ELISA results, using the MAb prepared in this study, suggest an even higher concentration of intact IGF-I of 150 nmol/kg. This would argue against the presence of any des(1-3)IGF-I in brain, but in turn indicates either the presence of other substances containing a GPE amino-terminus or another cross-reacting epitope. Although the results of the specificity studies reported in Chapter 5 would make this latter possibility seem unlikely, it cannot be completely excluded. No GP was detected in brain by MS.
No GPE was detected in colostrum by capillary electrophoresis (CE), but interference from extraneous substances reduced the detectability of GPE by CE and this approach would require further prior purification and concentration steps. A molecule with a migration time equal to that of the peptide GP was detected in colostrum by CE, but the concentration (~10 µmol/L) was much higher than the IGF-I concentration measured by radio-immunoassay using a PAb (80 nmol/L) or using a MAb (300-400 nmol/L). A DPP IV enzyme was detected in colostrum and this could account for the GP, derived from substrates other than IGF-I. Based on the differential results of the two antibody assays, there was no indication of the presence of des(1-3)IGF-I in brain or colostrum. In the absence of any enzyme activity directed towards the amino terminus of IGF-I and the absence of any potential products, IGF-I therefore does not appear to "mature" via des(1-3)IGF-I in the brain, nor in the neutral colostrum. In spite of these results, which indicate the absence of an enzymic attack on IGF-I and the absence of the expected products in tissues, the possibility that the conversion of IGF-I may occur in neutral conditions in limited amounts cannot be ruled out. It remains possible that, in the extracellular environment of the membrane, a complex interaction of IGF-I, binding protein, aminopeptidase(s) and receptor produces des(1-3)IGF-I as a transient product which is bound to the receptor and internalised.

Relevance: 10.00%

Abstract:

Assessment of the condition of connectors in the overhead electricity network has traditionally relied on the heat dissipation or voltage drop from the existing load current (50 Hz) as a measurable parameter to differentiate between satisfactory and failing connectors. This research has developed a technique which does not rely on the 50 Hz current, and a prototype connector tester has been developed. In this system a high-frequency signal is injected into the section of line under test, and the resistive voltage drop and the current at the test frequency are measured to yield the resistance in micro-ohms. From the value of resistance a decision can be made as to whether a connector is satisfactory or approaching failure. Determining the resistive voltage drop in the presence of a large induced voltage was achieved by the innovative approach of using a representative sample of the magnetic flux producing the induced voltage as the phase angle reference for the signal processing, rather than the phase angle of the current, which can be affected by the presence of nearby metal objects. Laboratory evaluation of the connector tester has validated the measurement technique. The magnitude of the load current (50 Hz) has minimal effect on the measurement accuracy. Addition of a suitable battery-based power supply system and isolated communications (probably radio), together with refinement of the printed circuit board design and software, are the remaining development steps towards a production instrument.
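The measurement principle described above amounts to synchronous (lock-in) detection: the measured voltage at the injected frequency is resolved into the component in phase with a reference derived from the magnetic flux, so that the large induced (quadrature) voltage is rejected and the resistance follows from the in-phase voltage divided by the current. The sketch below illustrates that idea on synthetic signals; the frequencies, amplitudes, noise level and the assumption that the flux sample is in phase with the injected current are my illustrative choices, not values or details from the thesis.

```python
# Sketch of synchronous detection of the resistive voltage drop in the
# presence of a much larger induced (quadrature) voltage.
# All numbers are illustrative assumptions.
import numpy as np

fs, f_test = 100_000.0, 1_000.0           # sample rate and injected test frequency (Hz)
t = np.arange(0, 0.1, 1 / fs)             # 0.1 s of data = 100 full cycles of f_test

I_amp = 2.0                               # injected current amplitude (A)
R_true = 75e-6                            # connector resistance to recover (ohm)
V_induced = 0.05                          # large induced voltage at the test frequency (V)

i_t = I_amp * np.sin(2 * np.pi * f_test * t)
v_t = (R_true * I_amp * np.sin(2 * np.pi * f_test * t)   # resistive (in-phase) drop
       + V_induced * np.cos(2 * np.pi * f_test * t)       # induced (quadrature) component
       + 1e-4 * np.random.randn(t.size))                  # measurement noise

# Reference derived from a sample of the magnetic flux: assumed here to be in
# phase with the injected current, so the induced voltage it produces is in
# quadrature and is rejected by the projection below.
ref = np.sin(2 * np.pi * f_test * t)

V_inphase = 2 * np.mean(v_t * ref)        # lock-in: averaging the product recovers the in-phase amplitude
I_meas = 2 * np.mean(i_t * ref)
print(f"estimated resistance: {V_inphase / I_meas * 1e6:.1f} micro-ohms")
```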

Relevance: 10.00%

Abstract:

Inspection of solder joints has been a critical process in the electronic manufacturing industry to reduce manufacturing cost, improve yield, and ensure product quality and reliability. The solder joint inspection problem is more challenging than many other visual inspection tasks because of the variability in the appearance of solder joints. Although many research works and various techniques have been developed to classify defects in solder joints, these methods rely on complex illumination systems for image acquisition and complicated classification algorithms. An important stage of the analysis is to select the right method for the classification. Better inspection technologies are needed to fill the gap between available inspection capabilities and industry systems. This dissertation aims to provide a solution that can overcome some of the limitations of current inspection techniques. This research proposes a two-stage inspection approach for an automatic solder joint classification system. The “front-end” inspection system includes illumination normalisation, localisation and segmentation. The illumination normalisation approach can effectively and efficiently eliminate the effect of uneven illumination while keeping the properties of the processed image. The “back-end” inspection involves the classification of solder joints using the Log Gabor filter and classifier fusion. Five different levels of solder quality with respect to the amount of solder paste have been defined. The Log Gabor filter has been demonstrated to achieve high recognition rates and is resistant to misalignment. Further testing demonstrates the advantage of the Log Gabor filter over both the Discrete Wavelet Transform and the Discrete Cosine Transform. Classifier score fusion is analysed for improving the recognition rate. Experimental results demonstrate that the proposed system improves performance and robustness in terms of classification rates. The proposed system does not need any special illumination system, and the images are acquired by an ordinary digital camera; the choice of suitable features allows one to overcome the problems posed by the use of simple illumination systems. The new system proposed in this research can be incorporated in the development of an automated, non-contact, non-destructive and low-cost solder joint quality inspection system.
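As context for the “back-end” stage, the sketch below shows one common way to build a radial log-Gabor filter in the frequency domain and apply it to a grey-scale image via the FFT. The centre frequency, bandwidth ratio and synthetic test image are illustrative assumptions of mine; the dissertation's actual filter bank, orientations and fusion scheme are not reproduced here.

```python
# Minimal sketch of a single radial log-Gabor filter applied in the
# frequency domain. Parameters and the test image are illustrative only.
import numpy as np

def log_gabor_response(img, f0=0.1, sigma_on_f=0.55):
    """Filter a 2-D image with the radial log-Gabor transfer function
    G(f) = exp(-(log(f/f0))^2 / (2*(log(sigma_on_f))^2))."""
    rows, cols = img.shape
    fy = np.fft.fftfreq(rows)[:, None]
    fx = np.fft.fftfreq(cols)[None, :]
    radius = np.sqrt(fx ** 2 + fy ** 2)
    radius[0, 0] = 1.0                      # avoid log(0) at the DC term
    gabor = np.exp(-(np.log(radius / f0) ** 2) / (2 * np.log(sigma_on_f) ** 2))
    gabor[0, 0] = 0.0                       # a log-Gabor filter has no DC component
    spectrum = np.fft.fft2(img) * gabor
    return np.fft.ifft2(spectrum)           # complex response (even/odd parts)

# Example on a synthetic image: the magnitude of the response could then be
# fed to a classifier as a feature vector.
img = np.random.rand(64, 64)
features = np.abs(log_gabor_response(img)).ravel()
print(features.shape)
```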

Relevance: 10.00%

Abstract:

A software tool (DRONE) has been developed to evaluate road traffic noise over a large area, taking into account dynamic network traffic flow and the surrounding buildings. For more precise estimation of noise in urban networks, where vehicles are mainly in stop-and-go running conditions, vehicle sound power levels (for accelerating/decelerating, cruising and idling vehicles) are incorporated in DRONE. The calculation performance of DRONE is increased by evaluating the noise in two steps: first estimating a unit noise database and then integrating it with the traffic simulation. Details of the process from traffic simulation to contour maps are discussed in the paper, and the implementation of DRONE on Tsukuba city is presented.
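The two-step idea (a pre-computed unit-noise contribution per source-receiver pair, later scaled by the simulated traffic) can be illustrated with a simple energy-based summation. The sound power levels, distances, vehicle counts and the free-field point-source propagation term used below are generic textbook assumptions, not DRONE's actual noise model.

```python
# Sketch of combining per-vehicle sound power levels into a receiver noise
# level: free-field point-source propagation plus energetic summation.
# All numbers are generic assumptions, not DRONE's model.
import math

def level_at_receiver(lw_db, distance_m):
    """Sound pressure level of one source at the receiver (spherical spreading)."""
    return lw_db - 20 * math.log10(distance_m) - 11

def combine(levels_db):
    """Energetic (incoherent) summation of individual contributions."""
    return 10 * math.log10(sum(10 ** (l / 10) for l in levels_db))

# Unit-noise step: contribution of one vehicle in each driving state at 15 m.
unit_db = {state: level_at_receiver(lw, 15.0)
           for state, lw in {"cruise": 95.0, "accelerate": 100.0, "idle": 85.0}.items()}

# Integration step: scale each unit contribution by the number of vehicles the
# traffic simulation reports in that state during the evaluation interval.
counts = {"cruise": 12, "accelerate": 3, "idle": 5}
contributions = [unit_db[s] + 10 * math.log10(n) for s, n in counts.items() if n > 0]
print(f"receiver level: {combine(contributions):.1f} dB")
```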

Relevance: 10.00%

Abstract:

Objectives: This methodological paper reports on the development and validation of a work sampling instrument and data collection processes to conduct a national study of nurse practitioners’ work patterns. ---------- Design: Published work sampling instruments provided the basis for development and validation of a tool for use in a national study of nurse practitioner work activities across diverse contextual and clinical service models. Steps taken in the approach included design of a nurse practitioner-specific data collection tool and development of an innovative web-based program to train and establish inter-rater reliability among a team of data collectors who were geographically dispersed across metropolitan, rural and remote health care settings. ---------- Setting: The study is part of a large funded study into nurse practitioner service. The Australian Nurse Practitioner Study is a national study phased over three years and was designed to provide essential information for Australian health service planners, regulators and consumer groups on the profile, process and outcome of nurse practitioner service. ---------- Results: The outcome of this phase of the study is empirically tested instruments, processes and training materials for use in an international context by investigators interested in conducting a national study of nurse practitioner work practices. ---------- Conclusion: Development and preparation of a new approach to describing nurse practitioner practices using work sampling methods provides the groundwork for international collaboration in the evaluation of nurse practitioner service.