16 results for ELTIT, DIAMELA, 1949-
in Queensland University of Technology - ePrints Archive
Abstract:
The concept of "fair basing" is widely acknowledged as a difficult area of patent law. This article maps the development of fair basing law to demonstrate how some of the difficulties have arisen. Part I of the article traces the development of the branches of patent law that were swept under the nomenclature of "fair basing" by British legislation in 1949. It looks at the early courts' approach to patent construction, and examines the early origin of fair basing and what it was intended to achieve. Part II of the article considers the modern interpretation of fair basing, which provides a striking contrast to its historical context. Without any consistent judicial approach to construction, the doctrine has developed inappropriately, giving rise to both over-strict and over-generous approaches.
Abstract:
In an earlier article the concept of fair basing in Australian patent law was described as a "problem child": often unruly and unpredictable in practice, but nevertheless understandable and useful in policy terms. The article traced the development of several different branches of patent law that were swept under the nomenclature of "fair basing" in Britain in 1949. It then went on to examine the adoption of fair basis into Australian law, the modern interpretation of the requirement, and its problems. This article provides an update. After briefly recapping the relevant historical issues, it examines the recent Lockwood "internal" fair basing case in the Federal and High Courts.
Abstract:
The biofuels research interests of Flinders University and Queensland University of Technology cover a broad range of activities. Both institutions are seeking to overcome the twin evils of "peak oil" (Hubbert 1949 & 1956) and "global warming" (IPCC 2007, Stern 2006, Alison 2010) through development of Generation 1, 2 and 3 (Gen-1, 2 & 3) biofuels (Clarke 2008, Clarke 2010). This includes development of parallel Chemical Biorefinery, value-added, co-product chemical technologies, which can underpin the commercial viability of the biofuel industry. Whilst there is a focused effort to develop Gen-2 & 3 biofuels, thus avoiding the socially unacceptable use of food-based Gen-1 biofuels, it must also be recognized that, as yet, no country in the world has produced sustainable Gen-2 & 3 biofuel on a commercial basis. For example, in 2008 the United States used 38 billion litres (3.5% of total fuel use) of Gen-1 biofuel; in 2009/2010 this will be 47.5 billion litres (4.5% of fuel use), and by 2018 this has been estimated to rise to 96 billion litres (9% of total US fuel use). Brazil in 2008 produced 24.5 billion litres of ethanol, representing 37.3% of the world's ethanol use for fuel, and Europe in 2008 produced 11.7 billion litres of biofuel (primarily as biodiesel). Compare this to Australia's miserly biofuel production in 2008/2009 of 180 million litres of ethanol and 75 million litres of biodiesel, which is 0.4% of our fuel consumption! (Clarke, Graiver and Habibie 2010) To assist in the development of better biofuels technologies in the Asian developing regions, the Australian Government recently awarded the Materials & BioEnergy Group from Flinders University, in partnership with the Queensland University of Technology, an Australian Leadership Award (ALA) Biofuel Fellowship program to train scientists from Indonesia and India in all facets of advanced biofuel technology.
Abstract:
Information and communication technologies (ICTs) are essential components of the knowledge economy, and have an immense complementary role in innovation, education, knowledge creation, and relations with government, civil society, and business within city regions. The ability to create, distribute, and exploit knowledge has become a major source of competitive advantage, wealth creation, and improvements in the new regional policies. The growing impact of ICTs on the economy and society, the rapid application of recent scientific advances in new products and processes, the shift to more knowledge-intensive industry and services, and rising skill requirements have become crucial concepts for urban and regional competitiveness. Therefore, harnessing ICTs for knowledge-based urban development (KBUD) has a significant impact on urban and regional growth (Yigitcanlar, 2005). In this sense, the e-region is a novel concept utilizing ICTs for regional development. Since the Helsinki European Council announced Turkey as a candidate for European Union (EU) membership in 1999, the candidacy has accelerated regional policy enhancements and the adoption of European regional policy standards. These enhancements include the generation of a new regional spatial division, the NUTS-II statistical regions; new legislation on the establishment of regional development agencies (RDAs); and new orientations in the field of higher education, science, and technology within the framework of the EU's Lisbon Strategy and the Bologna Process. The European standards posed an ambitious new agenda in the development and application of contemporary regional policy in Turkey (Bilen, 2005). In this sense, novel regional policies in Turkey necessarily endeavor to include information society objectives through efficient use of new technologies such as ICTs. Such development seeks to build on the tangible assets of the region (Friedmann, 2006) as well as on best practices derived from grounding initiatives at urban and local levels. These assets provide the foundation of an e-region that harnesses regional development in an information society context. With successful implementations, the Marmara region's local governments in Turkey are setting the benchmark for the country in the implementation of spatial information systems and e-governance, and are moving toward an e-region. Therefore, this article aims to shed light on the organizational and regional realities of recent ICT applications and their supply instruments, based on evidence from selected local government organizations in the Marmara region. This article also exemplifies the challenges and opportunities of the region in moving toward an e-region and provides a concise review of different ICT applications and strategies in a broader urban and regional context. The article is organized in three parts. The following section scrutinizes the e-region framework and the role of ICTs in regional development. Then, Marmara's opportunities and challenges in moving toward an e-region are discussed in the context of ICT applications and their supply instruments, based on public-sector projects, policies, and initiatives. Subsequently, the last section discusses conclusions and prospective research.
Abstract:
In this paper we pursue the task of aligning an ensemble of images in an unsupervised manner. This task has been commonly referred to as "congealing" in the literature. A form of congealing, using a least-squares criterion, has recently been demonstrated to have desirable properties over conventional congealing. Least-squares congealing can be viewed as an extension of the Lucas & Kanade (LK) image alignment algorithm. It is well understood that the alignment performance of the LK algorithm, when aligning a single image with another, is theoretically and empirically equivalent for additive and compositional warps. In this paper we: (i) demonstrate that this equivalence does not hold for the extended case of congealing, (ii) characterize the inherent drawbacks associated with least-squares congealing when dealing with large numbers of images, and (iii) propose a novel method for circumventing these limitations through the application of an inverse-compositional strategy that maintains the attractive properties of the original method while being able to handle very large numbers of images.
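A minimal sketch of the congealing idea, restricted to translation-only warps over grayscale images of equal size (the paper treats general warps and the additive versus inverse-compositional distinction; the function and variable names here are illustrative, not the authors' implementation):

    import numpy as np
    from scipy import ndimage

    def congeal_translations(images, n_iters=20):
        """Translation-only least-squares congealing: each image is
        iteratively warped towards the leave-one-out mean of the
        currently warped ensemble via a Lucas-Kanade style step."""
        images = [np.asarray(im, dtype=float) for im in images]
        shifts = np.zeros((len(images), 2))          # per-image (dy, dx)
        for _ in range(n_iters):
            warped = [ndimage.shift(im, s) for im, s in zip(images, shifts)]
            for i in range(len(images)):
                # Template: mean of all other (currently warped) images.
                template = np.mean(
                    [w for j, w in enumerate(warped) if j != i], axis=0)
                cur = warped[i]
                # Gauss-Newton step: residual projected onto the image
                # gradients (the Jacobian of a pure translation warp).
                gy, gx = np.gradient(cur)
                J = np.stack([gy.ravel(), gx.ravel()], axis=1)
                r = (template - cur).ravel()
                delta, *_ = np.linalg.lstsq(J, r, rcond=None)
                shifts[i] += delta                   # additive update
        return shifts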
Abstract:
Entrepreneurship research and practice place emphasis on company growth as a measure of entrepreneurial success. In many cases, there has been a tendency to give growth a very central role, with some researchers even seeing growth as the very essence of entrepreneurship (Cole, 1949; Sexton, 1997; Stevenson & Gumpert, 1991). A large number of empirical studies of the performance of young and/or small firms use growth as the dependent variable (see reviews by Ardishvili, Cardozo, Harmon, & Vadakath, 1998; Delmar, 1997; Wiklund, 1998). By contrast, the two most prominent views of strategic management – strategic positioning (Porter, 1980) and the resource-based view (Barney, 1991; Wernerfelt, 1984) – are both concerned with achieving competitive advantage, and regard economic rents and profitability relative to competitors as the central measures of firm performance. Strategic entrepreneurship integrates these two perspectives and is simultaneously concerned with opportunity seeking and advantage seeking (Hitt, Ireland, Camp, & Sexton, 2002; Ireland, Hitt, & Sirmon, 2003). Consequently, company growth and relative profitability are together relevant measures of firm performance in the domain of strategic entrepreneurship.
Abstract:
Lesson studies are a powerful form of professional development (Doig and Groves, 2011). The processes of creating, enacting, analyzing, and refining lessons to improve teaching practices are key components of lesson studies. Lesson studies have been the primary form of professional development in Japanese classrooms for many years (Lewis, Perry and Hurd, 2009). This model is now used to improve instruction in many South-East Asian countries (White and Lim, 2008), as well as increasingly in North America (Lesson Study Research Group, 2004) and South Africa (Ono and Ferreira, 2010). In China, this form of professional development aimed at improving teaching has also been adopted, originating from Soviet models of teacher professional development that arose from influences after 1949 (China Education Yearbook, 1986). Thus, China too has a long history of improving teaching and learning through this form of school-based professional learning.
Abstract:
The occurrence of extreme water level events along low-lying, highly populated and/or developed coastlines can lead to devastating impacts on coastal infrastructure. It is therefore very important that the probabilities of extreme water levels are accurately evaluated, both to inform flood and coastal management and for future planning. The aim of this study was to provide estimates of present day extreme total water level exceedance probabilities around the whole coastline of Australia, arising from combinations of mean sea level, astronomical tide and storm surges generated by both extra-tropical and tropical storms, but exclusive of surface gravity waves. The study was undertaken in two main stages. In the first stage, a high-resolution (~10 km along the coast) depth-averaged hydrodynamic model was configured for the whole coastline of Australia using the Danish Hydraulic Institute's Mike21 modelling suite. The model was forced with astronomical tidal levels, derived from the TPXO7.2 global tidal model, and meteorological fields, from the US National Centers for Environmental Prediction's global reanalysis, to generate a 61-year (1949 to 2009) hindcast of water levels. This model output was validated against measurements from 30 tide gauge sites around Australia with long records. At each of the model grid points located around the coast, time series of annual maxima and of the several highest water levels for each year were derived from the multi-decadal water level hindcast and fitted to extreme value distributions to estimate exceedance probabilities. Stage 1 provided a reliable estimate of the present day total water level exceedance probabilities around southern Australia, which is mainly impacted by extra-tropical storms. However, because the meteorological fields used to force the hydrodynamic model only weakly include the effects of tropical cyclones, the resultant water level exceedance probabilities were underestimated around western, northern and north-eastern Australia at higher return periods. Even if the resolution of the meteorological forcing were adequate to represent tropical cyclone-induced surges, multi-decadal periods yield insufficient instances of tropical cyclones to enable the use of traditional extreme value extrapolation techniques. Therefore, in the second stage of the study, a statistical model of tropical cyclone tracks and central pressures was developed using historic observations. This model was then used to generate synthetic events representing 10,000 years of cyclone activity for the Australian region, with characteristics based on the observed tropical cyclones of the last ~40 years. Wind and pressure fields, derived from these synthetic events using analytical profile models, were used to drive the hydrodynamic model to predict the associated storm surge response. A random time period during the tropical cyclone season was chosen, and astronomical tidal forcing for this period was included to account for non-linear interactions between the tidal and surge components. For each model grid point around the coast, annual maximum total water levels for these synthetic events were calculated and used to estimate exceedance probabilities. The exceedance probabilities from stages 1 and 2 were then combined to provide a single estimate of present day extreme water level probabilities around the whole coastline of Australia.
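A minimal sketch of the extreme value step described above, assuming annual maximum water levels for one model grid point have already been extracted (the input file name is hypothetical, and the study's additional use of the several highest levels per year is omitted):

    import numpy as np
    from scipy.stats import genextreme

    # Hypothetical input: one annual maximum water level (m) per hindcast year.
    annual_maxima = np.loadtxt("grid_point_annual_maxima.txt")

    # Fit a Generalised Extreme Value distribution by maximum likelihood.
    shape, loc, scale = genextreme.fit(annual_maxima)

    # The T-year return level is the level exceeded with annual probability
    # 1/T, i.e. the (1 - 1/T) quantile of the fitted distribution.
    return_periods = np.array([10, 50, 100, 500, 1000])
    return_levels = genextreme.ppf(1 - 1 / return_periods,
                                   shape, loc=loc, scale=scale)

    for T, z in zip(return_periods, return_levels):
        print(f"{T:>5}-year return level: {z:.2f} m")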
Abstract:
The potential impacts of extreme water level events on our coasts are increasing as populations grow and sea levels rise. To better prepare for the future, coastal engineers and managers need accurate estimates of average exceedance probabilities for extreme water levels. In this paper, we estimate present day probabilities of extreme water levels around the entire coastline of Australia. Tides and storm surges generated by extra-tropical storms were included by creating a 61-year (1949-2009) hindcast of water levels using a high-resolution depth-averaged hydrodynamic model driven with meteorological data from a global reanalysis. Tropical cyclone-induced surges were included through numerical modelling of a database of synthetic tropical cyclones equivalent to 10,000 years of cyclone activity around Australia. The predicted water level data were analysed using extreme value theory to construct return period curves for both the water level hindcast and the synthetic tropical cyclone modelling. These return period curves were then combined by taking the highest water level at each return period.
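A minimal sketch of that final combination step, assuming both return period curves have already been computed (all names and values here are illustrative placeholders):

    import numpy as np

    def combine_return_curves(rp_a, rl_a, rp_b, rl_b, n=100):
        """Combine two return period curves by keeping the higher water
        level at each return period, evaluated on a common grid."""
        rp = np.logspace(0, 4, n)            # 1 to 10,000 years
        # Interpolate each curve in log(return period) space.
        la = np.interp(np.log(rp), np.log(rp_a), rl_a)
        lb = np.interp(np.log(rp), np.log(rp_b), rl_b)
        return rp, np.maximum(la, lb)

    # Hypothetical example: hindcast curve vs. synthetic cyclone curve.
    rp_h, rl_h = np.array([2.0, 10.0, 100.0]), np.array([1.1, 1.3, 1.5])
    rp_c, rl_c = np.array([50.0, 1000.0, 10000.0]), np.array([1.2, 2.0, 2.8])
    rp, combined = combine_return_curves(rp_h, rl_h, rp_c, rl_c)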
Abstract:
Evidence increasingly suggests that our behaviour on the road mirrors our behaviour across other aspects of our life. The idea that we drive as we live, described by Tillman and Hobbs (1949) more than 65 years ago when examining the off-road behaviours of taxi drivers, is the focus of the current paper. As part of a larger study examining the impact of penalty changes on a large cohort of Queensland speeding offenders, criminal history (lifetime) and crash history (10-year period) data for a sub-sample of 1000 offenders were obtained. Based on the ‘drive as we live’ maxim, it was hypothesised that crash-involved speeding offenders would be more likely to have a criminal history than non-crash-involved offenders. Overall, only 30% of speeding offenders had a criminal history. However, crash-involved offenders were significantly more likely to have a criminal history (49.4%) than non-crash-involved offenders (28.6%), supporting the hypothesis. Furthermore, those deemed ‘most at fault’ in a crash were the group most likely to have at least one criminal offence (52.2%). When compared to the non-crash-involved offenders, those deemed ‘not most at fault’ in a crash were also more likely to have had at least one criminal offence (46.5%). Therefore, when compared to non-crash-involved speeding offenders, offenders involved in a crash were more likely to have been convicted of at least one criminal offence, irrespective of whether they were deemed ‘most at fault’ in that crash. Implications for traffic offender management and policing are discussed.
Abstract:
This chapter challenges current approaches to defining the context and process of entrepreneurship education. In modeling our classrooms as a microcosm of the world our current and future students will enter, this chapter brings to life (and celebrates) the ever-present diversity found within. The chapter attempts to make an important (and unique) contribution to the field of enterprise education by illustrating how we can determine the success of (1) our efforts as educators, (2) our students, and (3) our various teaching methods. The chapter is based on two specific premises, the most fundamental being the assertion that the performance of student, educator and institution can only be accounted for by accepting the nature of the dialogic relationship between the student and educator and between the educator and institution. A second premise is that at any moment in time the educator can be assessed as being either efficient or inefficient, owing to the presence of observable heterogeneity in the learning environment that produces differential learning outcomes. This chapter claims that understanding and appreciating the nature of heterogeneity in our classrooms provides an avenue for improvement in all facets of learning and teaching. To explain this claim, Haskell's (1949) theory of coaction is resurrected to provide a lens through which all manner of interaction occurring within all forms of educational contexts can be explained. Haskell (1949) asserted that coaction theory had three salient features.
Abstract:
This paper examines the asymmetry of changes in CO2
Abstract:
In the first half of the twentieth century, the dematerializing of boundaries between enclosure and exposure problematized traditional acts of "occupation" and understandings of the domestic environment. As a space of escalating technological control, the modern domestic interior offered new potential to redefine the meaning and means of habitation. This shift is clearly expressed in the transformation of electric lighting technology and its applications for the modern interior in the mid-twentieth century. Addressing these issues, this paper examines the critical role of electric lighting in regulating and framing both the public and private occupation of Philip Johnson's New Canaan estate. Exploring the dialectically paired transparent Glass House and opaque Guest House (both 1949), this study illustrates how Johnson employed artificial light to control the visual environment of the estate as well as to aestheticize the performance of domestic space. Looking closely at the use of artificial light to create emotive effects and to intensify the experience of occupation, this revisiting of the iconic Glass House and lesser-known Guest House provides a more complex understanding of Johnson's work and of the means by which he inhabited his own architecture. Calling attention to the importance of Johnson serving as both architect and client, and to his particular interest in exploring the new potential of architectural lighting in this period, this paper investigates Johnson's use of electric light to support architectural narratives, maintain visual order and control, and suit the nuanced desires of domestic occupation.