48 results for THEORETICAL BASIS
Abstract:
Ambiguity validation, an important procedure in integer ambiguity resolution, tests the correctness of the fixed integer ambiguities of phase measurements before they are used in the positioning computation. Most existing investigations of ambiguity validation focus on the test statistic; how to determine the threshold more reasonably is less well understood, although it is one of the most important topics in ambiguity validation. Currently, there are two threshold determination methods in the ambiguity validation procedure: the empirical approach and the fixed failure rate (FF-) approach. The empirical approach is simple but lacks a theoretical basis. The fixed failure rate approach has a rigorous probability-theory basis, but it employs a more complicated procedure. This paper focuses on how to determine the threshold easily and reasonably. Both the FF-ratio test and the FF-difference test are investigated in this research, and extensive simulation results show that the FF-difference test can achieve comparable or even better performance than the well-known FF-ratio test. Another benefit of adopting the FF-difference test is that its threshold can be expressed as a function of the integer least-squares (ILS) success rate with a specified failure rate tolerance. Thus, a new threshold determination method for the FF-difference test, named the threshold function, is proposed. The threshold function method preserves the fixed failure rate characteristic and is also easy to apply. The performance of the threshold function is validated with simulated data. The validation results show that with the threshold function method, the impact of the modelling error on the failure rate is less than 0.08%. Overall, the threshold function for the FF-difference test is a very promising threshold determination method, and it makes the FF-approach applicable to real-time GNSS positioning applications.
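As a hedged illustration of the acceptance logic described above, the sketch below applies an FF-difference test: the fixed solution is accepted when the gap between the second-best and best integer candidates' quadratic forms exceeds a threshold, and that threshold is looked up from a function of the ILS success rate. The functional form and coefficients in `threshold_from_success_rate` are placeholders for illustration, not the paper's fitted threshold function.

```python
import numpy as np

def ff_difference_test(q_best, q_second, threshold):
    """Accept the fixed ambiguities when the second-best candidate's quadratic
    form exceeds the best candidate's by at least the threshold."""
    return (q_second - q_best) >= threshold

def threshold_from_success_rate(ils_success_rate, failure_rate_tolerance=0.001):
    """Placeholder threshold function: monotonically decreasing in the ILS
    success rate (a highly reliable model needs a smaller gap to accept).
    The real coefficients would be fitted for the chosen failure-rate tolerance."""
    # illustrative shape only: larger thresholds for weaker models
    return -2.0 * np.log(failure_rate_tolerance) * (1.0 - ils_success_rate)

# Example: candidate quadratic forms from an ILS search (hypothetical numbers)
q_best, q_second = 1.8, 6.4
mu = threshold_from_success_rate(ils_success_rate=0.95)
accept = ff_difference_test(q_best, q_second, mu)
print(f"threshold = {mu:.2f}, accept fixed solution: {accept}")
```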
Abstract:
Purpose: This paper aims to set out a new hierarchical and differentiated model of social marketing principles, concepts and techniques that builds on, but supersedes, the existing lists of non-equivalent and undifferentiated benchmark criteria. Design/methodology/approach: This is a conceptual paper that proposes a hierarchical model of social marketing principles, concepts and techniques. Findings: This new delineation of the social marketing principle, its four core concepts and five techniques represents a new way to conceptualize and recognize the different elements that constitute social marketing. This new model will help further the development of the theoretical basis of social marketing, building on the definitional work led by the International Social Marketing Association (iSMA), Australian Association of Social Marketing (AASM) and European Social Marketing Association (ESMA). Research limitations/implications: This proposed model offers a foundation for future research to expand upon. Further research is recommended to empirically test the proposed model. Originality/value: This paper seeks to advance the theoretical base of social marketing by making a reasoned case for the need to differentiate between principles, concepts and techniques when seeking to describe social marketing.
Abstract:
Young people in detention are at greater risk of death and disability from injury sustained while not in custody. Injury prevention and mental health programs have been designed for this group but their theoretical basis is rarely discussed. The present study investigates whether the conceptual basis of the Theory of Planned Behavior (TPB) is relevant to youth in a detention center. Focus group and observational data were collected. A thematic analysis supported central theoretical constructs and emphasized “Subjective Norms.” The challenge of normative influences must be actively addressed in the design of health interventions for youth in detention.
Abstract:
This project analyses and evaluates the integrity assurance mechanisms used in four Authenticated Encryption schemes based on symmetric block ciphers. These schemes are all cross-chaining block cipher modes that claim to provide both confidentiality and integrity assurance simultaneously, in one pass over the data. The investigations include assessing the validity of an existing forgery attack on certain schemes, applying the attack approach to other schemes, and implementing the attacks to verify the claimed probabilities of successful forgeries. For these schemes, the theoretical basis of the attack was developed, the attack algorithm was implemented, and computer simulations were performed for experimental verification.
Abstract:
A theoretical basis is required for comparing key features and critical elements in wild fisheries and aquaculture supply chains under a changing climate. Here we develop a new quantitative metric that is analogous to indices used to analyse food-webs and identify key species. The Supply Chain Index (SCI) identifies critical elements as those elements with large throughput rates, as well as greater connectivity. The sum of the scores for a supply chain provides a single metric that roughly captures both the resilience and connectedness of a supply chain. Standardised scores can facilitate cross-comparisons both under current conditions as well as under a changing climate. Identification of key elements along the supply chain may assist in informing adaptation strategies to reduce anticipated future risks posed by climate change. The SCI also provides information on the relative stability of different supply chains based on whether there is a fairly even spread in the individual scores of the top few key elements, compared with a more critical dependence on a few key individual supply chain elements. We use as a case study the Australian southern rock lobster Jasus edwardsii fishery, which is challenged by a number of climate change drivers such as impacts on recruitment and growth due to changes in large-scale and local oceanographic features. The SCI identifies airports, processors and Chinese consumers as the key elements in the lobster supply chain that merit attention to enhance stability and potentially enable growth. We also apply the index to an additional four real-world Australian commercial fishery and two aquaculture industry supply chains to highlight the utility of a systematic method for describing supply chains. Overall, our simple methodological approach to empirically-based supply chain research provides an objective method for comparing the resilience of supply chains and highlighting components that may be critical.
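For illustration, the sketch below computes an SCI-style score in the spirit described above: each supply chain element is scored from its throughput and its connectivity, and the chain's index is the sum of element scores. The exact scoring formula and the example throughput/link figures are assumptions for demonstration, not the published metric or data.

```python
def supply_chain_index(elements):
    """elements: {name: {"throughput": units moved per period, "links": number of connections}}.
    Hypothetical scoring: an element's score is its throughput weighted by its
    connectivity; the chain's index is the sum of all element scores."""
    scores = {name: e["throughput"] * e["links"] for name, e in elements.items()}
    return scores, sum(scores.values())

# Illustrative (invented) elements loosely based on the lobster-chain example
chain = {
    "fishers":           {"throughput": 100, "links": 2},
    "processors":        {"throughput": 90,  "links": 4},
    "airports":          {"throughput": 85,  "links": 5},
    "chinese_consumers": {"throughput": 80,  "links": 3},
}
scores, sci = supply_chain_index(chain)
print(sorted(scores.items(), key=lambda kv: kv[1], reverse=True), sci)
```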
Abstract:
The NK model, proposed by Kauffman (1993), is a powerful simulation framework for studying competitive dynamics. It has been applied in several social science fields, for instance organization science. However, like many other simulation methods, the NK model has not received much attention from the Management Information Systems (MIS) discipline. This tutorial therefore introduces the NK model in a simple way and encourages related studies. To demonstrate how the NK model works, the tutorial reproduces several of Levinthal's (1997) experiments. It also clarifies the relationship between the NK model and agent-based modeling (ABM); this relationship can serve as a theoretical basis for extending the NK model framework to other research scenarios. For example, the tutorial provides an NK model solution for studying the IT value cocreation process by extending the network structure and agent interactions.
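A minimal NK landscape sketch is given below: N binary decision loci, each of whose fitness contribution depends on its own state plus K randomly chosen others, with Levinthal-style one-bit local search. It is a generic illustration of the NK model, not the tutorial's own code or parameter settings.

```python
import random

class NKLandscape:
    """Kauffman's NK fitness landscape over binary strings of length N."""
    def __init__(self, N, K, seed=0):
        self.N, self.K = N, K
        self.rng = random.Random(seed)
        # each locus i depends on itself plus K other randomly chosen loci
        self.deps = [[i] + self.rng.sample([j for j in range(N) if j != i], K)
                     for i in range(N)]
        self.tables = [{} for _ in range(N)]  # lazily filled contribution tables

    def fitness(self, genotype):
        total = 0.0
        for i in range(self.N):
            key = tuple(genotype[j] for j in self.deps[i])
            if key not in self.tables[i]:
                self.tables[i][key] = self.rng.random()
            total += self.tables[i][key]
        return total / self.N

def local_search(landscape, genotype):
    """Levinthal-style adaptation: flip one bit at a time while it improves fitness."""
    current = list(genotype)
    improved = True
    while improved:
        improved = False
        for i in range(landscape.N):
            trial = current.copy()
            trial[i] = 1 - trial[i]
            if landscape.fitness(trial) > landscape.fitness(current):
                current, improved = trial, True
    return current, landscape.fitness(current)

land = NKLandscape(N=10, K=3)
start = [random.randint(0, 1) for _ in range(10)]
peak, fit = local_search(land, start)
print(peak, round(fit, 3))
```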
Abstract:
As an emerging research method that has shown promising potential in several research disciplines, simulation has received relatively little attention in information systems research. This paper illustrates a framework for employing simulation to study IT value cocreation. Although previous studies have identified factors driving IT value cocreation, its underlying process remains unclear. Simulation can address this limitation by exploring that underlying process through computational experiments. The simulation framework in this paper is based on an extended NK model, and agent-based modeling is employed as the theoretical basis for the NK model extensions.
Abstract:
Background: In order to design appropriate environments for performance and learning of movement skills, physical educators need a sound theoretical model of the learner and of processes of learning. In physical education, this type of modelling informs the organization of learning environments and effective and efficient use of practice time. An emerging theoretical framework in motor learning, relevant to physical education, advocates a constraints-led perspective for acquisition of movement skills and game play knowledge. This framework shows how physical educators could use task, performer and environmental constraints to channel acquisition of movement skills and decision making behaviours in learners. From this viewpoint, learners generate specific movement solutions to satisfy the unique combination of constraints imposed on them, a process which can be harnessed during physical education lessons. Purpose: In this paper the aim is to provide an overview of the motor learning approach emanating from the constraints-led perspective, and to examine how it can substantiate a platform for a new pedagogical framework in physical education: nonlinear pedagogy. We aim to demonstrate that it is only through theoretically valid and objective empirical work of an applied nature that a conceptually sound nonlinear pedagogy model can continue to evolve and support research in physical education. We present some important implications for designing practices in games lessons, showing how a constraints-led perspective on motor learning could assist physical educators in understanding how to structure learning experiences for learners at different stages, with specific focus on understanding the design of games teaching programmes in physical education, using exemplars from Rugby Union and Cricket. Findings: Research evidence from recent studies examining movement models demonstrates that physical education teachers need a strong understanding of sport performance so that task constraints can be manipulated in ways that maintain information-movement couplings in a learning environment that is representative of real performance situations. Physical educators should also understand that movement variability may not necessarily be detrimental to learning and could be an important phenomenon prior to the acquisition of a stable and functional movement pattern. We highlight how the nonlinear pedagogical approach is student-centred and empowers individuals to become active learners via a more hands-off approach to learning. Summary: A constraints-based perspective has the potential to provide physical educators with a framework for understanding how performer, task and environmental constraints shape each individual's physical education. Understanding the underlying neurobiological processes present in a constraints-led perspective on skill acquisition and game play can raise physical educators' awareness that teaching is a dynamic 'art' interwoven with the 'science' of motor learning theories.
Abstract:
Daylighting in tropical and sub-tropical climates presents a unique challenge that is generally not well understood by designers. In a sub-tropical region such as Brisbane, Australia, the majority of the year comprises sunny, clear skies with few overcast days, and as a consequence windows can easily become sources of overheating and glare. The main strategy for dealing with this issue is extensive shading on windows. However, this in turn prevents daylight penetration into buildings, often causing an interior to appear gloomy and dark even though there is more than sufficient daylight available. As a result, electric lighting is the main source of light, even during the day. Innovative daylighting devices which redirect light from windows offer a potential solution to this issue. These devices can potentially improve daylighting in buildings by increasing the illumination within the environment, decreasing the high contrast between the window and work regions, and deflecting potentially glare-causing sunlight away from the observer. However, the performance of such innovative daylighting devices is generally quantified under overcast skies (i.e. daylight factors) or skies without sun, which are typical of European climates and are misleading when considering these devices for tropical or sub-tropical climates. This study sought to compare four innovative window daylighting devices in RADIANCE: light shelves, laser cut panels, micro-light guides and light redirecting blinds. These devices were simulated in RADIANCE under sub-tropical skies (for Brisbane) within the test case of a typical CBD office space. For each device, the quantity of light redirected and its distribution within the space was used as the basis for comparison. In addition, glare analysis of each device was conducted using Wienold and Christoffersen's evalglare. The analysis was conducted for selected hours of a day in each season. The majority of buildings that humans will occupy in their lifetime are already constructed, and extensive remodelling of most of these buildings is unlikely. Therefore the most effective way to improve daylighting in the near future will be through the alteration of existing window spaces. Thus it will be important to understand the performance of daylighting systems with respect to the climate in which they are to be used. This type of analysis is important to determine the applicability of a daylighting strategy so that designers can achieve energy efficiency as well as the health benefits of natural daylight.
Abstract:
A key issue in the economic development and performance of organizations is the existence of standards. Their definition and control are sources of power, and it is important to understand their concept, as it gives standards their direction and their legitimacy, and to explore how they are represented and applied. The difficulties posed by classical micro-economics in establishing a theory of standardization that is compatible with its fundamental axiomatic framework are acknowledged. We propose to reconsider the problem by taking the opposite perspective, questioning its theoretical base and reformulating assumptions about the independent and autonomous decisions taken by actors. The Theory of Conventions offers us a theoretical framework and tools enabling us to understand the systemic dimension and dynamic structure of standards, which will be seen as a special case of conventions. This work aims to provide a sound basis for, and promote a better awareness in, the development of global project management standards. It also aims to emphasize that social construction is not a matter of copyright but a matter of open minds, collective cognitive processes and freedom for the common wealth.
Abstract:
A key issue for the economic development and performance of organizations is the existence of standards. As their definition and control are sources of power, it seems important to understand the concept and to question the representations it authorizes, which give standards their direction and their legitimacy. The difficulties classical microeconomics faces in establishing a theory of standardisation compatible with its fundamental axiomatic framework are underlined. We propose to reconsider the problem by taking the opposite path: questioning the theoretical base and reformulating assumptions about the autonomy of actors' choices. The theory of conventions offers us both a theoretical framework and tools, enabling us to understand the systemic dimension and dynamic structure of standards seen as a special case of conventions. This work thus aims to provide a sound basis for, and promote a better awareness in, the development of global project management standards, and also to underline that social construction is not a matter of copyright but a matter of open minds, collective cognitive processes and freedom for the common wealth.
Abstract:
Electrostatic discharges have been identified as the most likely cause in a number of incidents of fire and explosion with unexplained ignitions. The lack of data and suitable models for this ignition mechanism creates a void in the analysis needed to quantify the importance of static electricity as a credible ignition mechanism. Quantifiable hazard analysis of the risk of ignition by static discharge cannot, therefore, be entirely carried out with our current understanding of this phenomenon. The study of electrostatics has been ongoing for a long time; however, it was not until the widespread use of electronics that research was developed for the protection of electronics from electrostatic discharges. Current experimental models for electrostatic discharge, developed for intrinsic safety with electronics, are inadequate for ignition analysis and typically are not supported by theoretical analysis. A preliminary simulation and experiment at low voltage was designed to investigate the characteristics of energy dissipation and provided a basis for a high voltage investigation. It was seen that, for a low voltage, the discharge energy represents about 10% of the initial capacitive energy available and that the energy dissipation occurred within 10 ns of the initial discharge. The potential difference is greatest at the initial breakdown, when the largest amount of the energy is dissipated. The discharge pathway is then established and minimal energy is dissipated, as energy dissipation becomes greatly influenced by other components and stray resistance in the discharge circuit. From the initial low voltage simulation work, the importance of the energy dissipation and the characteristic of the discharge were determined. After the preliminary low voltage work was completed, a high voltage discharge experiment was designed and fabricated. Voltage and current measurements were recorded on the discharge circuit, allowing the discharge characteristic to be recorded and the energy dissipation in the discharge circuit to be calculated. Discharge energy calculations show consistency with the low voltage work, with about 30-40% of the total initial capacitive energy being discharged in the resulting high voltage arc. After the system was characterised and its operation validated, high voltage ignition energy measurements were conducted on a solution of n-Pentane evaporating in a 250 cm³ chamber. A series of ignition experiments was conducted to determine the minimum ignition energy of n-Pentane. The data from the ignition work were analysed with standard statistical regression methods for tests that return binary (yes/no) data and found to be in agreement with recent publications. The research demonstrates that energy dissipation is heavily dependent on the circuit configuration, most especially the discharge circuit's capacitance and resistance. The analysis established a discharge profile for the discharges studied and validates the application of this methodology for further research into different materials and atmospheres, by systematically examining the discharge profiles of test materials with various parameters (e.g., capacitance, inductance and resistance). Systematic experiments examining the discharge characteristics of the spark will also help explain how energy is dissipated in an electrostatic discharge, enabling a better understanding of the ignition characteristics of materials in terms of the energy and how it is dissipated in an electrostatic discharge.
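Two of the quantities discussed above lend themselves to a short worked sketch: the initial capacitive energy, E = ½CV₀², and the energy actually dissipated in the arc, obtained by integrating the product of the recorded voltage and current traces. The code below is a generic illustration of those calculations with invented sample values, not the project's acquisition or analysis code.

```python
import numpy as np

def capacitive_energy(capacitance_f, voltage_v):
    """Energy initially stored in the charged capacitor: E = 0.5 * C * V^2 (joules)."""
    return 0.5 * capacitance_f * voltage_v ** 2

def discharge_energy(time_s, voltage_v, current_a):
    """Energy dissipated in the discharge: E = integral of v(t) * i(t) dt,
    evaluated here with the trapezoidal rule over sampled traces."""
    return np.trapz(voltage_v * current_a, time_s)

# Illustrative (invented) traces: a fast exponential decay over ~50 ns
t = np.linspace(0, 50e-9, 500)
v = 10e3 * np.exp(-t / 10e-9)   # volts
i = 2.0 * np.exp(-t / 10e-9)    # amperes
e_initial = capacitive_energy(100e-12, 10e3)   # 100 pF charged to 10 kV
e_arc = discharge_energy(t, v, i)
print(f"stored {e_initial*1e3:.2f} mJ, dissipated {e_arc*1e3:.2f} mJ "
      f"({100 * e_arc / e_initial:.0f}% of stored energy)")
```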
Abstract:
This project was a step forward in developing intrusion detection systems for distributed environments such as web services. It investigated a new detection approach based on so-called "taint-marking" techniques and introduced a theoretical framework along with its implementation in the Linux kernel.
Abstract:
Purpose: This work introduces the concept of very small field size. Output factor (OPF) measurements at these field sizes require extremely careful experimental methodology, including the measurement of dosimetric field size at the same time as each OPF measurement. Two quantifiable scientific definitions of the threshold of very small field size are presented. Methods: A practical definition was established by quantifying the effect that a 1 mm error in field size or detector position has on OPFs, and setting the acceptable uncertainty in OPF at 1%. Alternatively, for a theoretical definition of very small field size, the OPFs were separated into additional factors to investigate the specific effects of lateral electronic disequilibrium, photon scatter in the phantom and source occlusion. The dominant effect was established and formed the basis of a theoretical definition of very small fields. Each factor was obtained using Monte Carlo simulations of a Varian iX linear accelerator for various square field sizes of side length from 4 mm to 100 mm, using a nominal photon energy of 6 MV. Results: According to the practical definition established in this project, field sizes < 15 mm were considered to be very small for 6 MV beams for maximal field size uncertainties of 1 mm. If the acceptable uncertainty in the OPF is increased from 1.0% to 2.0%, or field size uncertainties are 0.5 mm, field sizes < 12 mm are considered to be very small. Lateral electronic disequilibrium in the phantom was the dominant cause of change in OPF at very small field sizes. Thus, the theoretical definition of very small field size coincided with the field size at which lateral electronic disequilibrium clearly caused a greater change in OPF than any other effect. This was found to occur at field sizes < 12 mm. Source occlusion also caused a large change in OPF for field sizes < 8 mm. Based on the results of this study, field sizes < 12 mm were considered to be theoretically very small for 6 MV beams. Conclusions: Extremely careful experimental methodology, including measurement of the dosimetric field size at the same time as the output factor measurement for each field size setting and very precise detector alignment, is required at field sizes at least < 12 mm, and more conservatively < 15 mm, for 6 MV beams. These recommendations should be applied in addition to all the usual considerations for small field dosimetry, including careful detector selection.
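The practical definition above amounts to a simple sensitivity check: flag a field size as "very small" when a ±1 mm uncertainty in field size shifts the output factor by more than the accepted tolerance. The sketch below implements that check against an OPF-versus-field-size curve; the data arrays are invented placeholders, not the paper's measurements.

```python
import numpy as np

def very_small_fields(field_sizes_mm, opfs, size_error_mm=1.0, tolerance=0.01):
    """Return the field sizes at which a +/- size_error_mm uncertainty in field
    size changes the (interpolated) output factor by more than `tolerance`
    as a fraction of the nominal OPF."""
    flagged = []
    for size, opf in zip(field_sizes_mm, opfs):
        lo = np.interp(size - size_error_mm, field_sizes_mm, opfs)
        hi = np.interp(size + size_error_mm, field_sizes_mm, opfs)
        if max(abs(hi - opf), abs(opf - lo)) / opf > tolerance:
            flagged.append(size)
    return flagged

# Invented OPF curve for illustration only (steep fall-off at small fields)
sizes = np.array([4, 6, 8, 10, 12, 15, 20, 30, 50, 100], dtype=float)
opfs  = np.array([0.55, 0.68, 0.78, 0.85, 0.89, 0.92, 0.95, 0.97, 0.99, 1.00])
print(very_small_fields(sizes, opfs))
```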
Abstract:
Charge reversal (CR) and neutralization reionization (NR) experiments carried out on a 4-sector mass spectrometer demonstrate that the isotopically labeled, linear C4 anion rearranges upon collisional oxidation. The cations and neutrals formed in these experiments exhibit differing degrees of isotopic scrambling in their fragmentation patterns, indicative of (at least) partial isomerization of both states. Theoretical studies, employing the CCSD(T)/aug-cc-pVDZ//B3LYP/6-31G(d) level of theory, favor conversion to the rhombic C4 isomer on both the cationic and neutral potential-energy surfaces, with the rhombic structures predicted to be slightly more stable than the linear forms in each case. The combination of experiment with theory indicates that the elusive rhombic C4 is formed as a cation and as a neutral following charge stripping of the linear C4 anion.