346 results for Transmission problem
Abstract:
In this paper, the flow-shop scheduling problem with parallel machines at each stage (machine center) is studied. For each job, its release and due dates as well as a processing time for each of its operations are given. The scheduling criterion consists of three parts: the total weighted earliness, the total weighted tardiness and the total weighted waiting time. The criterion takes into account the costs of storing semi-manufactured products in the course of production and ready-made products, as well as penalties for not meeting the deadlines stated in the contract with the customer. To solve the problem, three constructive algorithms and three metaheuristics (based on the Tabu Search and Simulated Annealing techniques) are developed and experimentally analyzed. All the proposed algorithms operate on the notion of a so-called operation processing order, i.e. the order of operations on each machine. We show that the problem of schedule construction on the basis of a given operation processing order can be reduced to a linear programming task. We also propose an approximation algorithm for schedule construction and show the conditions of its optimality.
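The general criterion above includes earliness costs, so deliberately delaying an operation can pay off and start times become variables of a linear program. For the simpler special case without earliness costs, a fixed operation processing order determines earliest start times by propagating job and machine precedence constraints. A minimal sketch of that special case (job data and machine names are hypothetical, not the paper's formulation):

```python
# A minimal sketch of schedule construction from a fixed operation
# processing order. Job data, machine names and the earliest-start rule
# are hypothetical illustrations, not the paper's LP formulation.

def earliest_starts(jobs, machine_orders):
    """jobs: {job: {"release": r, "ops": [(machine, proc_time), ...]}},
    ops listed in stage order.
    machine_orders: {machine: [(job, op_index), ...]}, the fixed
    processing order of operations on that machine.
    Returns earliest start times {(job, op_index): t}."""
    start = {(j, k): info["release"]
             for j, info in jobs.items()
             for k in range(len(info["ops"]))}
    changed = True
    while changed:              # relax precedence constraints to a fixpoint
        changed = False
        for j, info in jobs.items():
            for k, (m, _) in enumerate(info["ops"]):
                s = info["release"]
                if k > 0:       # job precedence: previous stage must finish first
                    s = max(s, start[(j, k - 1)] + info["ops"][k - 1][1])
                pos = machine_orders[m].index((j, k))
                if pos > 0:     # machine precedence: predecessor on m must finish first
                    pj, pk = machine_orders[m][pos - 1]
                    s = max(s, start[(pj, pk)] + jobs[pj]["ops"][pk][1])
                if s > start[(j, k)]:
                    start[(j, k)] = s
                    changed = True
    return start
```

With earliness costs, starting every operation as early as possible is no longer optimal, which is why the general construction calls for a linear program rather than the fixpoint rule above.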
Abstract:
Interdisciplinary studies are fundamental to the signature practices for the middle years of schooling. Middle years researchers claim that interdisciplinarity in teaching appropriately meets the needs of early adolescents by tying concepts together, providing frameworks for the relevance of knowledge, and demonstrating the linking of disparate information for the solution of novel problems. Cognitive research is not wholeheartedly supportive of this position. Learning theorists assert that the application of knowledge in novel situations for the solution of problems actually depends on deep discipline-based understandings. The present research contrasts the capabilities of early adolescent students from discipline-based and interdisciplinary curriculum contexts to successfully solve multifaceted real-world problems. This will inform the development of effective management of the middle years of schooling curriculum.
Abstract:
A vehicular ad hoc network (VANET) is a wireless ad hoc network that operates in a vehicular environment to provide communication between vehicles. VANETs can be used by a diverse range of applications to improve road safety. A cooperative collision warning system (CCWS) is one of the safety applications that can provide situational awareness and warnings to drivers by exchanging safety messages between cooperative vehicles. Currently, the routing strategies for safety message dissemination in CCWS are scoped broadcast. However, broadcast schemes are inefficient, as a warning message is sent to a large number of vehicles in the area rather than only the endangered vehicles. They also cannot prioritize the receivers based on their critical time to avoid collision. This paper presents a more efficient multicast routing scheme that can reduce unnecessary transmissions and also uses an adaptive transmission range. The multicast scheme involves methods to identify an abnormal vehicle, the vehicles that may be endangered by the abnormal vehicle, and the latest time for each endangered vehicle to receive the warning message in order to avoid the danger. We transform this multicast routing problem into a delay-constrained minimum Steiner tree problem, so that existing algorithms can be used to solve it. The advantages of our multicast routing scheme are mainly its potential to support various road traffic scenarios, to optimize wireless channel utilization, and to prioritize the receivers.
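One common heuristic for delay-constrained Steiner problems, and a plausible building block for the reduction described above, is to take the union of minimum-delay paths from the warning source to each endangered vehicle and reject the result if any deadline is missed. A minimal sketch (node names, link delays and deadlines are hypothetical; the paper may use a different Steiner-tree algorithm):

```python
import heapq

# Sketch: build the union of minimum-delay paths from the warning source
# to each endangered vehicle; fail if some deadline cannot be met.

def warning_tree(adj, source, endangered, deadline):
    """adj: {node: [(neighbour, delay), ...]};
    deadline: {node: latest acceptable delivery time}.
    Returns a set of directed tree edges, or None if infeasible."""
    dist, parent = {source: 0.0}, {}
    pq = [(0.0, source)]
    while pq:                                   # Dijkstra on link delays
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                            # stale queue entry
        for v, w in adj.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v], parent[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    edges = set()
    for t in endangered:
        if dist.get(t, float("inf")) > deadline[t]:
            return None                         # deadline cannot be met
        while t != source:                      # walk the path back to the source
            edges.add((parent[t], t))
            t = parent[t]
    return edges
```

Because every destination is reached along a minimum-delay path, the per-receiver deadline check is exact, although the resulting tree is not necessarily minimum-cost.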
Abstract:
The concept of "fair basing" is widely acknowledged as a difficult area of patent law. This article maps the development of fair basing law to demonstrate how some of the difficulties have arisen. Part I of the article traces the development of the branches of patent law that were swept under the nomenclature of "fair basing" by British legislation in 1949. It looks at the early courts' approach to patent construction, and examines the early origin of fair basing and what it was intended to achieve. Part II of the article considers the modern interpretation of fair basing, which provides a striking contrast to its historical context. Without any consistent judicial approach to construction, the doctrine has developed inappropriately, giving rise to both over-strict and over-generous approaches.
Abstract:
In an earlier article the concept of fair basing in Australian patent law was described as a "problem child", often unruly and unpredictable in practice, but nevertheless understandable and useful in policy terms. The article traced the development of several different branches of patent law that were swept under the nomenclature of "fair basing" in Britain in 1949. It then went on to examine the adoption of fair basis into Australian law, the modern interpretation of the requirement, and its problems. This article provides an update. After briefly recapping the relevant historical issues, it examines the recent Lockwood "internal" fair basing case in the Federal and High Courts.
Abstract:
Currently the Bachelor of Design is the generic degree offered to the four disciplines of Architecture, Landscape Architecture, Industrial Design, and Interior Design within the School of Design at the Queensland University of Technology. Regardless of discipline, Digital Communication is a core unit taken by the 600 first year students entering the Bachelor of Design degree. Within the design disciplines the communication of the designer's intentions is achieved primarily through the use of graphic images, with written information considered supportive or secondary. As such, Digital Communication attempts to educate learners in the fundamentals of this graphic design communication, using a generic digital or software tool. Past iterations of the unit did not acknowledge the subtle differences in design communication between the design disciplines involved, and used a single generic software tool. Following a review of the unit in 2008, it was decided that a single generic software tool was no longer entirely sufficient. This decision was based on the recognition that discipline-specific digital tools were increasingly emerging, and on an expressed student desire and apparent aptitude to learn these discipline-specific tools. As a result the unit was reconstructed in 2009 to offer both discipline-specific and generic software instruction, if elected by the student. This paper, apart from offering the general context and pedagogy of the existing and restructured units, more importantly offers research data that validates the changes made to the unit. Most significant among these new data are the results of surveys that authenticate actual student aptitude versus desire in learning discipline-specific tools. This is done through an exposure of student self-efficacy in problem resolution and technological prowess, both generally and specifically within the unit.
More traditional means of validation are also presented, including the results of the generic university-wide Learning Experience Survey of the unit, as well as a comparison between the assessment results of the restructured unit and those of the previous year.
Abstract:
Chlamydia pneumoniae is a common human and animal pathogen associated with a wide range of upper and lower respiratory tract infections. In recent years there has been increasing evidence of a link between C. pneumoniae and chronic diseases in humans, including atherosclerosis, stroke and Alzheimer’s disease. C. pneumoniae human strains show little genetic variation, indicating that the human-derived strain originated from a common ancestor in the recent past. Despite extensive information on the genetics and morphology of the human strain, knowledge concerning strains from many other hosts (including marsupials, amphibians, reptiles and equines) remains virtually unexplored. The koala (Phascolarctos cinereus) is a native Australian marsupial under threat due to habitat loss, predation and disease. Koalas are very susceptible to chlamydial infections, most commonly affecting the conjunctiva, urogenital tract and/or respiratory tract. To address this gap in the literature, the present study (i) provides a detailed description of the morphologic and genomic architecture of the C. pneumoniae koala (and human) strain, and shows that the koala strain is microscopically, developmentally and genetically distinct from the C. pneumoniae human strain, and (ii) examines the genetic relationship of geographically diverse C. pneumoniae isolates from human, marsupial, amphibian, reptilian and equine hosts, and identifies two distinct lineages that have arisen from animal-to-human cross-species transmissions. Chapter One of this thesis explores the scientific problem and aims of this study, while Chapter Two provides a detailed literature review of the background in this field of work. Chapter Three, the first results chapter, describes the morphology and developmental stages of C. pneumoniae koala isolate LPCoLN, as revealed by fluorescence and transmission electron microscopy.
The profile of this isolate, when cultured in HEp-2 human epithelial cells, was quite different to the human AR39 isolate. Koala LPCoLN inclusions were larger; the elementary bodies did not have the characteristic pear-shaped appearance, and the developmental cycle was completed within a shorter period of time (as confirmed by quantitative real-time PCR). These in vitro findings might reflect biological differences between koala LPCoLN and human AR39 in vivo. Chapter Four describes the complete genome sequence of the koala respiratory pathogen, C. pneumoniae LPCoLN. This is the first animal isolate of C. pneumoniae to be fully-sequenced. The genome sequence provides new insights into genomic ‘plasticity’ (organisation), evolution and biology of koala LPCoLN, relative to four complete C. pneumoniae human genomes (AR39, CWL029, J138 and TW183). Koala LPCoLN contains a plasmid that is not shared with any of the human isolates, there is evidence of gene loss in nucleotide salvage pathways, and there are 10 hot spot genomic regions of variation that were previously not identified in the C. pneumoniae human genomes. Sequence (partial-length) from a second, independent, wild koala isolate (EBB) at several gene loci confirmed that the koala LPCoLN isolate was representative of a koala C. pneumoniae strain. The combined sequence data provides evidence that the C. pneumoniae animal (koala LPCoLN) genome is ancestral to the C. pneumoniae human genomes and that human infections may have originated from zoonotic infections. Chapter Five examines key genome components of the five C. pneumoniae genomes in more detail. This analysis reveals genomic features that are shared by and/or contribute to the broad ecological adaptability and evolution of C. pneumoniae. 
This analysis resulted in the identification of 65 gene sequences for further analysis of intraspecific variation, and revealed some interesting differences, including fragmentation, truncation and gene decay (loss of redundant ancestral traits). This study provides valuable insights into the metabolic diversity, adaptation and evolution of C. pneumoniae. Chapter Six utilises a subset of 23 target genes identified from the previous genomic comparisons and makes a significant contribution to our understanding of genetic variability among C. pneumoniae human (11) and animal (6 amphibian, 5 reptilian, 1 equine and 7 marsupial host) isolates. It is shown that the animal isolates are genetically diverse, unlike the human isolates, which are virtually clonal. This study provides further evidence that C. pneumoniae originated in animals and recently (in the last few hundred thousand years) crossed host species to infect humans. It is proposed that two animal-to-human cross-species events have occurred: one evidenced by the nearly clonal human genotype circulating in the world today, and the other by a more animal-like genotype apparent in Indigenous Australians. Taken together, these data indicate that the C. pneumoniae koala LPCoLN isolate has morphologic and genomic characteristics that are distinct from the human isolates. These differences may affect the survival and activity of the C. pneumoniae koala pathogen in its natural host, in vivo. This study, by utilising the genetic diversity of C. pneumoniae, identified new genetic markers for distinguishing human and animal isolates. However, not all C. pneumoniae isolates were genetically diverse; in fact, several isolates were highly conserved, if not identical in sequence (i.e. Australian marsupials), emphasising that at some stage in the evolution of this pathogen there has been an adaptation (or adaptations) to a particular host, providing some stability in the genome.
The outcomes of this study by experimental and bioinformatic approaches have significantly enhanced our knowledge of the biology of this pathogen and will advance opportunities for the investigation of novel vaccine targets, antimicrobial therapy, or blocking of pathogenic pathways.
Abstract:
Cloud computing is a new computing paradigm in which applications, data and IT services are provided over the Internet. Cloud computing has become a main medium for Software as a Service (SaaS) providers to host their SaaS, as it can provide the scalability a SaaS requires. The challenges in the composite SaaS placement process stem from several factors, including the large size of the Cloud network, competing SaaS resource requirements, interactions between SaaS components, and interactions between a SaaS and its data components. However, existing application placement methods for data centres are not concerned with the placement of a component's data. In addition, a Cloud network is much larger than the data centre networks that have been discussed in existing studies. This paper proposes a penalty-based genetic algorithm (GA) for the composite SaaS placement problem in the Cloud. We believe this is the first attempt to address SaaS placement together with its data in a Cloud provider's servers. Experimental results demonstrate the feasibility and scalability of the GA.
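The penalty idea can be sketched as a fitness function that adds a large penalty per unit of server-capacity overflow, so the GA can explore infeasible placements without ultimately accepting them. Components, servers, costs and the penalty weight below are hypothetical, not the paper's formulation:

```python
# Sketch of a penalty-based fitness for composite SaaS placement:
# communication cost plus a heavy penalty for capacity overflow.

def penalised_fitness(placement, demand, capacity, traffic, penalty=1000.0):
    """placement: {component: server}; demand: {component: resource units};
    capacity: {server: resource units}; traffic: {(c1, c2): volume}.
    Lower is better."""
    # communication cost: traffic between components on different servers
    comm = sum(v for (a, b), v in traffic.items()
               if placement[a] != placement[b])
    # aggregate load per server, then penalise any capacity overflow
    load = {}
    for comp, srv in placement.items():
        load[srv] = load.get(srv, 0.0) + demand[comp]
    overflow = sum(max(0.0, l - capacity[s]) for s, l in load.items())
    return comm + penalty * overflow
```

A standard GA then minimises this fitness over candidate placements; the penalty weight trades off exploration of infeasible regions against pressure toward feasibility.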
Abstract:
Web service composition is an important problem in web service based systems. It concerns how to build a new value-added web service using existing web services. A web service may have many implementations, all of which have the same functionality but may have different QoS values. Thus, a significant research problem in web service composition is how to select an implementation for each of the web services such that the composite web service gives the best overall performance. This is the so-called optimal web service selection problem. There may be mutual constraints between some web service implementations. Sometimes when an implementation is selected for one web service, a particular implementation for another web service must also be selected. This is the so-called dependency constraint. Sometimes when an implementation is selected for one web service, a set of implementations for another web service must be excluded from the composition. This is the so-called conflict constraint. Thus, optimal web service selection is a typical constrained combinatorial optimization problem from the computational point of view. This paper proposes a new hybrid genetic algorithm for the optimal web service selection problem. The hybrid genetic algorithm has been implemented and evaluated. The evaluation results show that the hybrid genetic algorithm outperforms two existing genetic algorithms when the number of web services and the number of constraints are large.
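The dependency and conflict constraints can be sketched as a feasibility check that a hybrid GA might apply to candidate selections, for example to penalise or repair infeasible chromosomes. The encoding below is hypothetical, not the paper's:

```python
# Sketch: check a candidate selection (one implementation per service)
# against dependency and conflict constraints.

def is_feasible(selection, dependencies, conflicts):
    """selection: {service: implementation}.
    dependencies: {(s1, i1): (s2, i2)} -- choosing i1 for s1 forces i2 for s2.
    conflicts: {(s1, i1): {(s2, i2), ...}} -- choosing i1 for s1 excludes
    each listed implementation."""
    for (s1, i1), (s2, i2) in dependencies.items():
        if selection.get(s1) == i1 and selection.get(s2) != i2:
            return False        # dependency constraint violated
    for (s1, i1), excluded in conflicts.items():
        if selection.get(s1) == i1 and any(
                selection.get(s2) == i2 for (s2, i2) in excluded):
            return False        # conflict constraint violated
    return True
```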
Abstract:
This paper demonstrates some interesting connections between the hitherto disparate fields of mobile robot navigation and image-based visual servoing. A planar formulation of the well-known image-based visual servoing method leads to a bearing-only navigation system that requires no explicit localization and directly yields the desired velocity. The well-known benefits of image-based visual servoing, such as robustness, also apply to the planar case. Simulation results are presented.
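The bearing-only idea can be illustrated with a toy proportional law that steers the robot so that observed landmark bearings converge to their goal-position bearings. The actual image-based servoing law maps feature errors through an interaction matrix, so the rule and gain below are purely illustrative assumptions:

```python
import math

# Toy illustration of bearing-only control: drive so that observed
# landmark bearings converge to the bearings seen from the goal pose.

def bearing_only_velocity(current, goal, gain=0.5):
    """current, goal: landmark bearings in radians, robot frame.
    Returns (forward_speed, turn_rate)."""
    # wrap each bearing error into (-pi, pi]
    errors = [math.atan2(math.sin(g - c), math.cos(g - c))
              for c, g in zip(current, goal)]
    mean_err = sum(errors) / len(errors)
    turn_rate = gain * mean_err
    forward = max(0.0, 1.0 - abs(mean_err))   # slow down when badly misaligned
    return forward, turn_rate
```

As in the paper's formulation, no explicit localization is needed: the controller consumes bearings directly and emits a velocity command.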
Abstract:
Objective: The Brief Michigan Alcoholism Screening Test (bMAST) is a 10-item test derived from the 25-item Michigan Alcoholism Screening Test (MAST). It is widely used in the assessment of alcohol dependence. In the absence of previous validation studies, the principal aim of this study was to assess the validity and reliability of the bMAST as a measure of the severity of problem drinking. Method: A total of 6,594 patients (4,854 men, 1,740 women) who had been referred for alcohol-use disorders to a hospital alcohol and drug service voluntarily participated in this study. Results: An exploratory factor analysis defined a two-factor solution, consisting of Perception of Current Drinking and Drinking Consequences factors. Structural equation modeling confirmed that the fit of a nine-item, two-factor model was superior to the original one-factor model. Concurrent validity was assessed through simultaneous administration of the Alcohol Use Disorders Identification Test (AUDIT) and associations with alcohol consumption and clinically assessed features of alcohol dependence. The two-factor bMAST model showed moderate correlations with the AUDIT. The two-factor bMAST and AUDIT were similarly associated with quantity of alcohol consumption and clinically assessed dependence severity features. No differences were observed between the existing weighted scoring system and the proposed simple scoring system. Conclusions: In this study, both the existing bMAST total score and the two-factor model identified were as effective as the AUDIT in assessing problem drinking severity. There are additional advantages to employing the two-factor bMAST in the assessment and treatment planning of patients seeking treatment for alcohol-use disorders. (J. Stud. Alcohol Drugs 68: 771-779, 2007)
Abstract:
The analysis of investment in the electric power sector has been the subject of intensive research for many years. The efficient generation and distribution of electrical energy is a difficult task involving the operation of a complex network of facilities, often located over very large geographical regions. Electric power utilities have made use of an enormous range of mathematical models. Some models address time spans which last for a fraction of a second, such as those that deal with lightning strikes on transmission lines, while at the other end of the scale there are models which address time horizons of ten or twenty years; these usually involve long-range planning issues. This thesis addresses the optimal long-term capacity expansion of an interconnected power system. The aim of this study has been to derive a new long-term planning model which recognises the regional differences that exist in energy demand and that are present in the construction and operation of power plant and transmission line equipment. Perhaps the most innovative feature of the new model is the direct inclusion of regional energy demand curves in nonlinear form. This results in a nonlinear capacity expansion model. After a review of the relevant literature, the thesis first develops a model for the optimal operation of a power grid. This model directly incorporates regional demand curves. The model is a nonlinear programming problem containing both integer and continuous variables. A solution algorithm is developed which is based upon a resource decomposition scheme that separates the integer variables from the continuous ones. The decomposition of the operating problem leads to an iterative scheme which employs a mixed integer programming problem, known as the master, to generate trial operating configurations. The optimum operating conditions of each trial configuration are found using a smooth nonlinear programming model.
The dual vector recovered from this model is subsequently used by the master to generate the next trial configuration. The solution algorithm progresses until the lower and upper bounds converge. A range of numerical experiments is conducted and discussed. Using the operating model as a basis, a regional capacity expansion model is then developed. It determines the type, location and capacity of the additional power plants and transmission lines required to meet predicted electricity demands. A generalised resource decomposition scheme, similar to that used to solve the operating problem, is employed. The solution algorithm is used to solve a range of test problems, and the results of these numerical experiments are reported. Finally, the expansion problem is applied to the Queensland electricity grid in Australia.
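The master/subproblem interaction can be sketched as a bounding loop: the master proposes the configuration with the best optimistic cost estimate, the subproblem prices that configuration exactly, and iteration stops when the lower and upper bounds meet. All costs below are hypothetical, and the true subproblem would be a smooth nonlinear program (with its dual vector tightening the master) rather than a table lookup:

```python
# Toy version of the master/subproblem bounding loop: optimistic
# estimates stand in for the master's cuts, a lookup stands in for the
# nonlinear operating subproblem.

def decomposition_solve(invest_cost, operating_cost, configs, tol=1e-6):
    estimate = {c: 0.0 for c in configs}   # optimistic guesses (valid lower bounds)
    upper, best = float("inf"), None
    while True:
        # master: trial configuration with the best optimistic total cost
        cfg = min(configs, key=lambda c: invest_cost[c] + estimate[c])
        lower = invest_cost[cfg] + estimate[cfg]
        if upper - lower <= tol:           # bounds have converged
            return best, upper
        op = operating_cost(cfg)           # subproblem: exact operating cost
        estimate[cfg] = op                 # tighten the master's information
        if invest_cost[cfg] + op < upper:  # update the incumbent
            upper, best = invest_cost[cfg] + op, cfg
```

With finitely many configurations the loop terminates: each pass either tightens one estimate to its exact value or finds the bounds already equal.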
Abstract:
Despite the general evolution and broadening of the concept of infrastructure in many other sectors, the energy sector has maintained the same narrow boundaries for over 80 years. Energy infrastructure is still generally restricted in meaning to the transmission and distribution networks of electricity and, to some extent, gas. This is especially true in the urban development context. This early 20th century system is struggling to meet community expectations that the industry itself created and fostered for many decades. Relentless growth in demand and changing political, economic and environmental challenges require a shift from the traditional 'predict and provide' approach to infrastructure, which is no longer economically or environmentally viable. Market deregulation and a raft of demand- and supply-side management strategies have failed to curb society's addiction to the commodity of electricity. None of these responses has addressed the fundamental problem. This chapter presents an argument for a new paradigm. Going beyond peripheral energy efficiency measures and the substitution of fossil fuels with renewables, it outlines a new approach to the provision of energy services in the context of 21st century urban environments.