194 results for the fundamental supermode


Relevance: 80.00%

Abstract:

The revolution in legal research provides exciting challenges for those exploring and writing about the legal landscape. Cumbersome paper sources have largely been replaced by electronic files, and a new range of skills and sources is required to conduct legal research successfully.

Researching and Writing in Law, 3rd Edition is an updated research guide, mapping the developments that have taken place and providing the keys to the fundamental electronic sources of legal research, especially those now available on the web, as well as exploring traditional doctrinal methodologies. Included in this edition are extensive checklists for locating and validating the law in Australia, England, Canada, the United States, New Zealand, India and the European Union.

This third edition includes expanded discussion of the process of formulating a research proposal, writing project abstracts and undertaking a literature review (Chapter 7). Research methodologies are also extensively examined, focusing on the process of doctrinal methodology as well as discussing other useful methodologies, such as Comparative Research and Content Analysis (Chapter 5). Further highlighted are issues surrounding research ethics, including plagiarism and originality, the importance of developing skills in critique, and the influence of current university research environments on postgraduate legal research.

Law students and members of the practising profession aiming to update their research knowledge and skills will find Researching and Writing in Law, 3rd Edition invaluable.

Relevance: 80.00%

Abstract:

This paper introduces a novel strategy for the specification of airworthiness certification categories for civil unmanned aircraft systems (UAS). The risk-based approach acknowledges the fundamental differences between the risk paradigms of manned and unmanned aviation. The proposed airworthiness certification matrix provides a systematic and objective structure for regulating the airworthiness of a diverse range of UAS types and operations. An approach for specifying UAS type categories is then discussed. An example of the approach, which includes the novel application of data-clustering algorithms, is presented to illustrate the discussion.
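The abstract does not detail the clustering step; as a loose illustration of how data-clustering algorithms might group UAS types into candidate certification categories, here is a minimal sketch using scikit-learn's KMeans. The risk-related attributes (mass, speed, altitude) and the sample values are assumptions for illustration, not the paper's data.

```python
# Hypothetical sketch: clustering UAS types into candidate certification
# categories by risk-related attributes. The features and values below are
# illustrative assumptions, not the paper's dataset.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Each row: [max take-off mass (kg), cruise speed (m/s), operating altitude (m)]
uas_features = np.array([
    [2.0,    15.0,  120.0],   # small multirotor
    [25.0,   30.0,  400.0],   # light fixed-wing
    [150.0,  50.0, 1500.0],   # tactical UAS
    [1200.0, 80.0, 6000.0],   # larger long-endurance UAS
    [4.0,    20.0,  120.0],
    [900.0,  70.0, 5000.0],
])

# Standardise so no single attribute dominates the distance metric.
X = StandardScaler().fit_transform(uas_features)

# Group into three candidate type categories.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
for features, label in zip(uas_features, kmeans.labels_):
    print(f"UAS {features} -> category {label}")
```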

Relevance: 80.00%

Abstract:

The literature on critical thinking in higher education is constructed around the fundamental assumption that critical thinking, while regarded as essential, is neither clearly nor commonly understood. There is evidence elsewhere that academics and students have differing perceptions of what happens in university classrooms, particularly in regard to higher-order thinking. This paper reports on a small-scale investigation in a Faculty of Education at an Australian university into academic and student definitions and understandings of critical thinking. Our particular interest lay in the consistencies and disconnections assumed to exist between academic staff and students. The presumption might therefore be that staff and students perceive critical thinking in different ways and that this may limit its achievement as a critical graduate attribute. The key finding from this study, contrary to extant findings, is that academics and students did share substantively similar definitions and understandings of critical thinking.

Relevance: 80.00%

Abstract:

We review and discuss the literature on small firm growth with the intention of providing a useful vantage point for new research on this important phenomenon. We first discuss conceptual and methodological issues that represent critical choices for those who research growth and that make it challenging to compare results across previous studies. The substantial review of past research is organized into four sections representing two smaller and two larger literatures. The first of the latter focuses on internal and external drivers of small firm growth. Here we find that much has been learnt and that many valuable generalizations can be made. However, we also conclude that more research of the same kind is unlikely to yield much. While interactive and non-linear effects may be worth pursuing, it is unlikely that any new and important growth drivers or strong, linear main effects would be found. The second large literature deals with organizational life-cycles or stages of development. While deservedly criticized for unwarranted determinism and weak empirics, this type of approach addresses problems of high practical and theoretical relevance, and should not be shunned by researchers. We argue that, with a change in the fundamental assumptions and improved empirical design, research on the organizational and managerial consequences of growth is an important line of inquiry. With this, we overlap with one of the smaller literatures, namely studies focusing on the effects of growth. We argue that studies too often assume that growth equals success. We advocate instead the use of growth as an intermediary variable that influences more fundamental goals in ways that should be carefully examined rather than assumed. The second small literature distinguishes between different modes or forms of growth, including, e.g., organic vs. acquisition-based growth and international expansion. We note that mode of growth is an important topic that has been under-studied in the growth literature, whereas other branches of research may have studied aspects of it intensely, but not primarily from a growth perspective. In the final section we elaborate on ways forward for research on small firm growth. We point to rich opportunities for researchers who look beyond drivers of growth, where growth is viewed as a homogeneous phenomenon assumed to unambiguously reflect success, and instead focus on growth as a process and a multi-dimensional phenomenon, as well as on how growth relates to more fundamental outcomes.

Relevance: 80.00%

Abstract:

This paper describes a novel experiment in which two very different methods of underwater robot localization are compared. The first method is based on a geometric approach, in which a mobile node moves within a field of static nodes and all nodes are capable of acoustically estimating the range to their neighbours. The second method uses visual odometry from stereo cameras, integrating scaled optical flow. The fundamental algorithmic principles of each localization technique are described. We also present experimental results comparing acoustic localization with GPS for surface operation, and a comparison of acoustic and visual methods for underwater operation.
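Neither method's implementation is given in the abstract; the sketch below illustrates the geometric idea behind the first method, estimating a mobile node's 2-D position from range measurements to static nodes via nonlinear least squares. The node layout, noise level and SciPy-based solver are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch: range-based localization of a mobile node within a
# field of static nodes, solved by nonlinear least squares.
import numpy as np
from scipy.optimize import least_squares

# Known positions of static nodes (m) and the true mobile-node position.
anchors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
true_pos = np.array([42.0, 67.0])

# Simulated acoustic ranges with measurement noise.
rng = np.random.default_rng(0)
ranges = np.linalg.norm(anchors - true_pos, axis=1) + rng.normal(0, 0.5, len(anchors))

# Residual: difference between predicted and measured ranges.
def residuals(p):
    return np.linalg.norm(anchors - p, axis=1) - ranges

estimate = least_squares(residuals, x0=np.array([50.0, 50.0])).x
print("estimated position:", estimate)
```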

Relevance: 80.00%

Abstract:

Despite the general evolution and broadening of the scope of the concept of infrastructure in many other sectors, the energy sector has maintained the same narrow boundaries for over 80 years. Energy infrastructure is still generally restricted in meaning to the transmission and distribution networks of electricity and, to some extent, gas. This is especially true in the urban development context. This early 20th-century system is struggling to meet community expectations that the industry itself created and fostered for many decades. The relentless growth in demand and changing political, economic and environmental challenges require a shift from the traditional ‘predict and provide’ approach to infrastructure, which is no longer economically or environmentally viable. Market deregulation and a raft of demand- and supply-side management strategies have failed to curb society’s addiction to the commodity of electricity. None of these responses has addressed the fundamental problem. This chapter presents an argument for a new paradigm. Going beyond peripheral energy efficiency measures and the substitution of fossil fuels with renewables, it outlines a new approach to the provision of energy services in the context of 21st-century urban environments.

Relevance: 80.00%

Abstract:

Purpose – In recent years, knowledge-based urban development (KBUD) has been introduced as a new strategic development approach for the regeneration of industrial cities. It aims to create a knowledge city consisting of planning strategies, IT networks and infrastructures, achieved through supporting the continuous creation, sharing, evaluation, renewal and updating of knowledge. Improving urban amenities and ecosystem services by creating a sustainable urban environment is one of the fundamental components of KBUD. In this context, environmental assessment plays an important role in steering the urban environment and economic development towards sustainability. The purpose of this paper is to present the role of assessment tools in the environmental decision-making process of knowledge cities.

Design/methodology/approach – The paper proposes a new assessment tool as the template for a decision support system that will enable evaluation of possible environmental impacts in existing and future urban contexts. The paper presents the methodology of the proposed model, named ‘ASSURE’, which consists of four main phases.

Originality/value – The proposed model provides useful guidance for evaluating urban development and its environmental impacts in order to achieve sustainable knowledge-based urban futures.

Practical implications – The proposed model offers an innovative approach to keeping the resilience and function of urban natural systems secure against environmental change while maintaining the economic development of cities.

Relevance: 80.00%

Abstract:

This paper presents the results of a structural equation model (SEM) for describing and quantifying the fundamental factors that affect contract disputes between owners and contractors in the construction industry. Through this example, the potential impact of SEM analysis in construction engineering and management research is illustrated. The purpose of the specific model developed in this research is to explain how and why contract-related construction problems occur. This study builds upon earlier work, which developed a dispute potential index and modeled the likelihood of construction disputes using logistic regression. In that earlier study, questionnaires completed on 159 construction projects measured both qualitative and quantitative aspects of contract disputes, management ability, financial planning, risk allocation, and project scope definition for both owners and contractors. The SEM approach offers several advantages over the previously employed logistic regression methodology. The final set of structural equations provides insight into the interaction of the variables that was not apparent in the original logistic regression modeling methodology.
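For context, here is a minimal sketch of the logistic-regression baseline the abstract refers to, run on synthetic data. The variable names mirror the factors listed above, but the data and the data-generating coefficients are invented for illustration; the point of the paper is that SEM can additionally capture the interrelations among such factors, which this single-equation model cannot.

```python
# Sketch of the earlier logistic-regression baseline, on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 159  # the study surveyed 159 projects
df = pd.DataFrame({
    "mgmt_ability": rng.normal(0, 1, n),
    "fin_planning": rng.normal(0, 1, n),
    "risk_alloc":   rng.normal(0, 1, n),
    "scope_def":    rng.normal(0, 1, n),
})
# Synthetic outcome: dispute occurs (1) or not (0); coefficients invented.
logit_p = -0.5 - 0.8 * df["mgmt_ability"] - 0.6 * df["scope_def"]
df["dispute"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

predictors = sm.add_constant(
    df[["mgmt_ability", "fin_planning", "risk_alloc", "scope_def"]])
model = sm.Logit(df["dispute"], predictors).fit(disp=False)
print(model.summary())
```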

Relevance: 80.00%

Abstract:

An Asset Management (AM) life-cycle constitutes a set of processes that align with the development, operation and maintenance of assets, in order to meet the desired requirements and objectives of the stakeholders of the business. The scope of AM is often broad within an organization due to the interactions between its internal elements, such as human resources, finance, technology, engineering operation, information technology and management, as well as external elements such as governance and environment. Due to the complexity of AM processes, it has been proposed that process modelling initiatives should be adopted in order to optimize asset management activities. Although organisations adopt AM principles and carry out AM initiatives, most do not document or model their AM processes, let alone enact their processes (semi-)automatically using a computer-supported system. There is currently a lack of knowledge describing how to model AM processes in a methodical and suitable manner so that the processes are streamlined, optimized and ready for deployment in a computerised way. This research aims to overcome this deficiency by developing an approach that will aid organisations in constructing AM process models quickly and systematically while using the most appropriate techniques, such as workflow technology.

Currently, there is a wealth of information within the individual domains of AM and workflow. Both fields are gaining significant popularity in many industries, fuelling the need for research exploring the possible benefits of their cross-disciplinary application. This research therefore investigates these two domains, exploiting the application of workflow to the modelling and execution of AM processes. Specifically, it investigates appropriate methodologies for applying workflow techniques to AM frameworks. One benefit of applying workflow models to AM processes is the ability to adapt to and enable both ad-hoc and evolutionary changes over time. In addition, workflow can automate an AM process as well as support the coordination and collaboration of the people involved in carrying out the process. A workflow management system (WFMS) can be used to support the design and enactment (i.e. execution) of processes and to cope with changes that occur to a process during enactment.

So far, little literature documents a systematic approach to modelling the characteristics of AM processes. In order to obtain a workflow model for AM processes, commonalities and differences between different AM processes need to be identified; this is the fundamental step in developing a sound workflow model for AM processes. Therefore, the first stage of this research focuses on identifying the characteristics of AM processes, especially AM decision-making processes. The second stage is to review a number of contemporary workflow techniques and choose a suitable technique for application to AM decision-making processes. The third stage is to develop an intermediate, ameliorated AM decision process definition that improves the current process description and is ready for modelling using the workflow language selected in the previous stage. These lead to the fourth stage, in which a workflow model for an AM decision-making process is developed. The process model is then deployed (semi-)automatically in a state-of-the-art WFMS, demonstrating the benefits of applying workflow technology to the domain of AM.

Given that the information in the AM decision-making process is captured at an abstract level within the scope of this work, the deployed process model can be used as an executable guideline for carrying out an AM decision process in practice. Moreover, it can be used as a vanilla system that, once enriched with information from a specific AM decision-making process (e.g. building construction or power plant maintenance), is able to support the automation of such a process in a more elaborate way.
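As a toy illustration of what modelling and enacting an AM process can mean in practice, the sketch below expresses a hypothetical AM decision process as a dependency-aware task graph and executes it in order. The task names are invented, and a real deployment would use a full WFMS rather than this minimal executor.

```python
# Hypothetical sketch: an AM decision-making process as a simple workflow of
# dependency-aware tasks, executed in topological order.
from collections import deque

# Each task maps to the set of tasks that must complete before it can run.
am_decision_process = {
    "collect_condition_data": set(),
    "assess_asset_condition": {"collect_condition_data"},
    "estimate_remaining_life": {"assess_asset_condition"},
    "evaluate_intervention_options": {"assess_asset_condition"},
    "select_action": {"estimate_remaining_life", "evaluate_intervention_options"},
    "schedule_work_order": {"select_action"},
}

def execute(workflow):
    """Run each task once all of its dependencies have completed."""
    done = set()
    queue = deque(t for t, deps in workflow.items() if not deps)
    while queue:
        task = queue.popleft()
        print(f"executing: {task}")  # placeholder for real task logic
        done.add(task)
        for t, deps in workflow.items():
            if t not in done and t not in queue and deps <= done:
                queue.append(t)

execute(am_decision_process)
```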

Relevance: 80.00%

Abstract:

Literally, the word compliance suggests conformity in fulfilling official requirements. The thesis presents the results of the analysis and design of a class of protocols called compliant cryptologic protocols (CCP), and presents a notion of compliance in cryptosystems that is conducive as a cryptologic goal. CCP are employed in security systems used by at least two mutually mistrusting sets of entities. The individuals in these sets trust only the design of the security system and any trusted third party the security system may include. Such a security system can be thought of as a broker between the mistrusting sets of entities. In order to provide confidence in operation for the mistrusting sets of entities, CCP must provide compliance verification mechanisms. These mechanisms are employed either by all the entities or by a set of authorised entities in the system to verify that the behaviour of the various participating entities complies with the rules of the system.

It is often stated that confidentiality, integrity and authentication are the primary interests of cryptology. It is evident from the literature that authentication mechanisms employ confidentiality and integrity services to achieve their goal. Therefore, the fundamental services that any cryptographic algorithm may provide are confidentiality and integrity only. Since controlling the behaviour of the entities is not a feasible cryptologic goal, the verification of the confidentiality of any data is a futile cryptologic exercise. For example, there exists no cryptologic mechanism that would prevent an entity from willingly or unwillingly exposing its private key corresponding to a certified public key. The confidentiality of the data can only be assumed. Therefore, any verification in cryptologic protocols must take the form of integrity verification mechanisms, and compliance verification must take the form of integrity verification. A definition of compliance that is conducive as a cryptologic goal is presented as a guarantee on the confidentiality and integrity services. The definitions are employed to provide a classification mechanism for the various message formats in a cryptologic protocol. The classification assists in the characterisation of protocols, which in turn provides a focus for the goals of the research. The resulting concrete goal of the research is the study of those protocols that employ message formats to provide restricted confidentiality and universal integrity services to selected data.

The thesis proposes an informal technique to understand, analyse and synthesise the integrity goals of a protocol system, and contains a study of key recovery, electronic cash, peer-review, electronic auction and electronic voting protocols. All these protocols contain message formats that provide restricted confidentiality and universal integrity services to selected data. The study of key recovery systems aims to achieve robust key recovery relying only on the certification procedure, without the need for tamper-resistant system modules. The result of this study is a new technique for the design of key recovery systems called hybrid key escrow.

The thesis identifies a class of compliant cryptologic protocols called secure selection protocols (SSP). The uniqueness of this class of protocols is the similarity in the goals of the member protocols, namely peer-review, electronic auction and electronic voting. The problem statement describing the goals of these protocols contains a tuple, (I, D), where I usually refers to the identity of a participant and D usually refers to the data selected by the participant. SSP are interested in providing a confidentiality service to the tuple, to hide the relationship between I and D, and an integrity service to the tuple after its formation, to prevent its modification. The thesis provides a schema to solve instances of SSP by employing electronic cash technology. The thesis makes a distinction between electronic cash technology and electronic payment technology: it treats electronic cash technology as a certification mechanism that allows participants to obtain a certificate on their public key without revealing the certificate or the public key to the certifier. The thesis abstracts the certificate and the public key as a data structure called an anonymous token. It proposes design schemes for the peer-review, e-auction and e-voting protocols by employing the schema with the anonymous token abstraction. The thesis concludes by providing a variety of problem statements for future research that would further enrich the literature.
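The thesis abstracts the construction away, but the classic mechanism behind such anonymous tokens is Chaum's RSA blind signature, in which the certifier signs a blinded value without ever seeing the token it certifies. The sketch below uses deliberately tiny, insecure parameters for illustration only and is not the thesis's scheme.

```python
# Toy sketch of Chaum-style RSA blind signing: the certifier signs a blinded
# value and never learns the token it certifies. Parameters are tiny and
# insecure; illustration only.
import hashlib

# Certifier's toy RSA key (never use such small parameters in practice).
p, q = 1000003, 1000033
n, e = p * q, 65537
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)  # private exponent

def h(data: bytes) -> int:
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

# Participant: blind the token (e.g., a hash of their public key).
token = h(b"participant public key")
r = 123457                              # blinding factor, coprime to n
blinded = (token * pow(r, e, n)) % n

# Certifier: signs the blinded value without learning `token`.
blind_sig = pow(blinded, d, n)

# Participant: unblind to obtain a valid signature on the original token.
sig = (blind_sig * pow(r, -1, n)) % n

# Anyone can verify the certification against the certifier's public key.
assert pow(sig, e, n) == token
print("anonymous token certified:", hex(sig))
```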

Relevance: 80.00%

Abstract:

The numerical modelling of electromagnetic waves has been the focus of much past research. Specific applications of electromagnetic wave scattering arise in the fields of microwave heating and radar communication systems. The equations that govern the fundamental behaviour of electromagnetic wave propagation in waveguides and cavities are Maxwell's equations. In the literature, a number of methods have been employed to solve these equations. Of these, the classical Finite-Difference Time-Domain scheme, which uses a staggered time and space discretisation, is the best known and most widely used. However, it is complicated to implement this method on an irregular computational domain using an unstructured mesh. In this work, a coupled method is introduced for the solution of Maxwell's equations. It is proposed that the free-space component of the solution be computed in the time domain, whilst the load is resolved using the frequency-dependent electric field Helmholtz equation. This methodology results in a time-frequency domain hybrid scheme. For the Helmholtz equation, boundary conditions are generated from the time-dependent free-space solutions; the boundary information is mapped into the frequency domain using the Discrete Fourier Transform. The solution for the electric field components is obtained by solving a sparse complex system of linear equations. The hybrid method has been tested for both waveguide and cavity configurations. Numerical tests performed on waveguides and cavities with inhomogeneous lossy materials highlight the accuracy and computational efficiency of the newly proposed hybrid computational electromagnetics strategy.
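As a one-dimensional illustration of the hybrid idea (the geometry, material values and source signal below are invented, not the paper's test cases), a time-domain boundary record can be mapped into the frequency domain with the DFT, and the load region then resolved by solving the Helmholtz equation as a sparse complex linear system.

```python
# Illustrative 1-D sketch of the time-frequency hybrid scheme.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Time-domain field recorded at the load boundary (e.g., from an FDTD run).
dt, nt = 1e-11, 512
t = np.arange(nt) * dt
boundary_record = np.sin(2 * np.pi * 3e9 * t) * np.exp(-((t - 2e-9) / 5e-10) ** 2)

# DFT: pick the spectral component at the working frequency.
spectrum = np.fft.rfft(boundary_record)
freqs = np.fft.rfftfreq(nt, dt)
idx = np.argmin(np.abs(freqs - 3e9))
u_left = spectrum[idx]                       # complex Dirichlet value at x = 0

# Lossy load: complex wavenumber k^2 = w^2 * mu * eps * (1 - j*tan_delta).
omega = 2 * np.pi * freqs[idx]
eps0, mu0 = 8.854e-12, 4e-7 * np.pi
k2 = omega**2 * mu0 * (2.5 * eps0) * (1 - 1j * 0.05)

# Finite differences for u'' + k^2 u = 0 on (0, L), u(0) = u_left, u(L) = 0.
L, nx = 0.05, 200
h = L / (nx + 1)
main = (-2.0 / h**2 + k2) * np.ones(nx, dtype=complex)
off = (1.0 / h**2) * np.ones(nx - 1)
A = sp.diags([off, main, off], [-1, 0, 1], format="csc")
b = np.zeros(nx, dtype=complex)
b[0] = -u_left / h**2                        # fold boundary value into RHS
u = spla.spsolve(A, b)
print("field magnitude at load centre:", abs(u[nx // 2]))
```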

Relevance: 80.00%

Abstract:

The News of the Week article that reports on Senator Kay Bailey Hutchison (R-TX) questioning the need to fund social science research at the National Science Foundation is alarming and shortsighted ("Senate panel chair asks why NSF funds social sciences," 12 May, p. 829). Social science research is at the fundamental core of basic research and has much to contribute to the economic viability of the United States. Twenty years of direct and jointly funded social and ecosystem science research at Colorado State University's Natural Resource Ecology Laboratory has produced deep insights into environmental and societal impacts of political upheaval, land use, and climate change in parts of Africa, Asia, and the Americas. Beyond greatly advancing our understanding of the coupled human-environmental system, the partnership of social and ecosystem science has brought scientists and decision-makers together to begin to develop solutions to difficult problems.

Relevance: 80.00%

Abstract:

There has been considerable research conducted over the last 20 years focused on predicting motor vehicle crashes on transportation facilities. The range of statistical models commonly applied includes binomial, Poisson, Poisson-gamma (or negative binomial), zero-inflated Poisson and negative binomial (ZIP and ZINB), and multinomial probability models. Given the range of possible modeling approaches and the host of assumptions that accompany each, making an intelligent choice for modeling motor vehicle crash data is difficult. There is little discussion in the literature comparing different statistical modeling approaches, identifying which statistical models are most appropriate for modeling crash data, and providing strong justification from basic crash principles. In the recent literature, it has been suggested that the motor vehicle crash process can successfully be modeled by assuming a dual-state data-generating process, which implies that entities (e.g., intersections, road segments, pedestrian crossings, etc.) exist in one of two states: perfectly safe and unsafe. As a result, the ZIP and ZINB are two models that have been applied to account for the preponderance of “excess” zeros frequently observed in crash count data. The objective of this study is to provide defensible guidance on how to appropriately model crash data. We first examine the motor vehicle crash process using theoretical principles and a basic understanding of the crash process. It is shown that the fundamental crash process follows Bernoulli trials with unequal probabilities of independent events, also known as Poisson trials. We examine the evolution of statistical models as they apply to the motor vehicle crash process, and indicate how well they statistically approximate it. We also present the theory behind dual-state process count models, and note why they have become popular for modeling crash data. A simulation experiment is then conducted to demonstrate how crash data give rise to the “excess” zeros frequently observed. It is shown that the Poisson and other mixed probabilistic structures are approximations assumed for modeling the motor vehicle crash process. Furthermore, it is demonstrated that under certain (fairly common) circumstances excess zeros are observed, and that these circumstances arise from low exposure and/or inappropriate selection of time/space scales, not from an underlying dual-state process. In conclusion, carefully selecting the time/space scales for analysis, including an improved set of explanatory variables and/or unobserved heterogeneity effects in count regression models, or applying small-area statistical methods (for observations with low exposure) represent the most defensible modeling approaches for datasets with a preponderance of zeros.
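The simulation logic can be sketched compactly (the parameters below are illustrative, not those of the study): generating counts as Bernoulli trials with heterogeneous, low per-trial risk yields noticeably more zeros than a single Poisson with the same mean predicts, with no dual-state process involved.

```python
# Sketch: "excess" zeros from low exposure and unobserved heterogeneity alone.
import numpy as np

rng = np.random.default_rng(42)
n_sites, n_trials = 1000, 200            # e.g., sites and passing vehicles
# Heterogeneous, low per-trial crash probability across sites.
p = np.minimum(rng.gamma(shape=0.5, scale=5e-3, size=n_sites), 1.0)

# Count per site: sum of independent Bernoulli trials at that site's risk.
counts = rng.binomial(n_trials, p)

obs_zeros = np.mean(counts == 0)
pred_zeros = np.exp(-counts.mean())      # zero fraction under one Poisson mean
print(f"observed zero fraction:   {obs_zeros:.3f}")
print(f"Poisson-predicted zeros:  {pred_zeros:.3f}")
```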

Relevance: 80.00%

Abstract:

The structure and thermal stability of typical Chinese kaolinite and halloysite were analysed and compared by X-ray diffraction (XRD), infrared spectroscopy, infrared emission spectroscopy (IES) and Raman spectroscopy. Infrared emission spectroscopy over the temperature range of 300 to 700 °C was used to characterise the thermal decomposition of both kaolinite and halloysite. Halloysite is characterised by two bands in the water-bending region, at 1629 and 1648 cm-1, attributed to structural water and coordinated water in the interlayer. Well-defined hydroxyl stretching bands at around 3695, 3679, 3652 and 3625 cm-1 are observed for both kaolinite and halloysite. The 550 °C infrared emission spectrum of halloysite is similar to that of kaolinite in the 650-1350 cm-1 region, but at lower temperatures the infrared emission spectra of halloysite were found to be considerably different from those of kaolinite. This difference is attributed to the fundamental difference in the structure of the two minerals.

Relevance: 80.00%

Abstract:

Current guidelines on clear zone selection and roadside hazard management adopt the US approach based on the likelihood of roadside encroachment by drivers. This approach rests on research conducted in the 1960s and 1970s. Over time, questions have been raised regarding the robustness and applicability of this research in Australasia in 2010 and in the Safe System context. This paper presents a review of the fundamental research relating to the selection of clear zones. Results of extensive statistical modelling of rural highway data suggest that a significant proportion of run-off-road-to-the-left casualty crashes occur in clear zones exceeding 13 m. They also show that the risk of run-off-road-to-the-left casualty crashes was 21% lower where clear zones exceeded 8 m than where clear zones were in the 4-8 m range. The paper discusses a possible approach to the selection of clear zones based on managing crash outcomes, rather than on the likelihood of roadside encroachment, which is the basis of current practice. It is expected that this approach would encourage the selection of clear zones wider than 8 m where the combination of other road features suggests a higher than average casualty crash risk.