625 results for Standard models
Abstract:
Business models have to date remained the creation of management; however, the authors believe that designers should critically approach, challenge and create new business models as part of their practice. This belief signals a new era in which business model constructs become the design brief of the future and bring design and innovation together at the strategic level of an organisation. Innovation can no longer rely on technology and R&D alone but must incorporate business models. Business model innovation has become a strong source of competitive advantage, as firms choose not to compete only on price but through the delivery of a unique value proposition that engages customers and differentiates a company within a competitive market. The purpose of this paper is to explore business model design across various product and/or service deliveries and to identify common drivers that act as catalysts for business model innovation. Fifty companies spanning a diverse range of criteria were chosen to evaluate and compare commonalities and differences in the design of their business models. The analysis of these business cases uncovered common key strategic drivers behind these innovative business models, and five Meta Models were derived from this content analysis: Customer Led, Cost Driven, Resource Led, Partnership Led and Price Led. These five foci give a designer a starting point from which quick prototypes of new business models can be created. The implications of this research suggest that there is no ‘one right’ model; rather, through experimentation, the generation of many unique and diverse concepts can open greater possibilities for future innovation and sustained competitive advantage.
Abstract:
Passenger flow studies in airport terminals have shown consistent statistical relationships between airport spatial layout and pedestrian movement, facilitating prediction of movement from terminal designs. However, these studies are done at an aggregate level and do not incorporate how individual passengers make decisions at a microscopic level; therefore, they do not explain the formation of complex movement flows. In addition, existing models mostly focus on standard airport processing procedures such as immigration and security, but seldom consider the discretionary activities of passengers, and thus cannot truly describe the full range of passenger flows within airport terminals. Because the route-choice decision-making of passengers involves many uncertain factors within airport terminals, the mechanisms governing route choice have proven difficult to acquire and quantify. Could the study of cognitive factors of passengers (i.e. the mental preferences that determine which on-airport facility to use) be useful in tackling these issues? Assuming that movement in simulated virtual environments is analogous to movement in real environments, passenger behaviour dynamics generated in virtual experiments can resemble those in real terminals. Three levels of dynamics were devised for motion control: the localised field, the tactical level and the strategic level. The localised field covers basic motion capabilities, such as walking speed, direction and obstacle avoidance; the other two levels represent cognitive route-choice decision-making. This research views passenger flow problems through a "bottom-up approach", regarding individual passengers as independent intelligent agents who behave autonomously and interact with others and the ambient environment. In this view, passenger flow formation becomes an emergent phenomenon arising from large numbers of interacting passengers. In the thesis, the passenger flow in airport terminals was first investigated, and the discretionary activities of passengers were integrated with standard processing procedures. The localised field for passenger motion dynamics was constructed with a devised force-based model. Next, advanced traits of passengers (such as their desire to shop, their comfort with technology and their willingness to ask for assistance) were formulated to facilitate tactical route-choice decision-making. The traits consist of quantified measures of the mental preferences of passengers as they travel through airport terminals, and each category of trait indicates a decision that passengers may take. They were inferred through a Bayesian network model by analysing probabilities based on currently available data, and route-choice decisions were finalised by calculating the corresponding utilities from the observed probabilities. Three kinds of simulation outcome were generated: queuing length before checkpoints, average dwell time of passengers at service facilities, and instantaneous space utilisation. Queuing length reflects the number of passengers in a queue; long queues cause significant delays in processing procedures. The dwell time of each passenger agent at the service facilities was recorded, and the overall dwell time of passenger agents at typical facility areas was analysed to demonstrate utilisation in the temporal aspect.
For the spatial aspect, the number of passenger agents dwelling within specific terminal areas can be used to estimate service rates. All outcomes demonstrated specific results for typical simulated passenger flows and directly reflect terminal capacity. The simulation results strongly suggest that integrating the discretionary activities of passengers makes the simulated passenger flows more intuitive, and that observing the probabilities of mental preferences by inferring advanced traits provides an approach capable of carrying out tactical route-choice decision-making. On the whole, the research studied passenger flows in airport terminals with an agent-based model that investigated the individual characteristics of passengers and their impact on route-choice decisions. Finally, intuitive passenger flows in airport terminals could be realised in simulation.
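To make the layered control concrete, the sketch below pairs a social-force-style motion update (the localised field) with a utility-based facility choice weighted by inferred trait probabilities (the tactical level). It is a minimal illustration under assumed parameters, not the thesis's force-based or Bayesian network models; all names, weights and facility labels are invented.

```python
# Minimal sketch (not the thesis model): a social-force-style motion update and a
# utility-weighted route choice, showing how a "localised field" for walking and a
# tactical decision layer could be combined. All parameters are illustrative.
import numpy as np

def motion_step(pos, vel, goal, others, dt=0.1, desired_speed=1.3, tau=0.5,
                repel_strength=2.0, repel_range=0.5):
    """One localised-field update: steer toward the goal, repel from nearby agents."""
    desired_dir = (goal - pos) / (np.linalg.norm(goal - pos) + 1e-9)
    drive = (desired_speed * desired_dir - vel) / tau          # relax toward desired velocity
    repulsion = np.zeros(2)
    for other in others:                                       # pairwise repulsive "social" forces
        diff = pos - other
        dist = np.linalg.norm(diff) + 1e-9
        repulsion += repel_strength * np.exp(-dist / repel_range) * diff / dist
    vel = vel + (drive + repulsion) * dt
    return pos + vel * dt, vel

def choose_facility(trait_probs, utilities):
    """Tactical route choice: expected utility weighted by inferred trait probabilities."""
    expected = {f: trait_probs.get(f, 0.0) * u for f, u in utilities.items()}
    return max(expected, key=expected.get)

# Example: one agent avoiding another while a shopping-inclined passenger picks a facility.
pos, vel = np.array([0.0, 0.0]), np.array([0.0, 0.0])
pos, vel = motion_step(pos, vel, goal=np.array([10.0, 0.0]), others=[np.array([1.0, 0.2])])
print(choose_facility({"shop": 0.7, "cafe": 0.3}, {"shop": 1.0, "cafe": 0.8}))
```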
Abstract:
Electrostatic discharges have been identified as the most likely cause in a number of incidents of fire and explosion with unexplained ignitions. The lack of data and suitable models for this ignition mechanism creates a void in the analysis needed to quantify the importance of static electricity as a credible ignition mechanism. Quantifiable hazard analysis of the risk of ignition by static discharge cannot, therefore, be fully carried out with our current understanding of this phenomenon. The study of electrostatics has been ongoing for a long time; however, it was not until the widespread use of electronics that research was developed for the protection of electronics from electrostatic discharges. Current experimental models for electrostatic discharge, developed for the intrinsic safety of electronics, are inadequate for ignition analysis and typically are not supported by theoretical analysis. A preliminary low-voltage simulation and experiment was designed to investigate the characteristics of energy dissipation and provided a basis for a high-voltage investigation. It was seen that at low voltage the discharge energy represents about 10% of the initial capacitive energy available and that the energy dissipation occurred within 10 ns of the initial discharge. The potential difference is greatest at the initial breakdown, when the largest amount of energy is dissipated. The discharge pathway is then established and minimal further energy is dissipated, as energy dissipation becomes strongly influenced by other components and stray resistance in the discharge circuit. From the initial low-voltage simulation work, the importance of the energy dissipation and the characteristic of the discharge were determined. After the preliminary low-voltage work was completed, a high-voltage discharge experiment was designed and fabricated. Voltage and current measurements were recorded on the discharge circuit, allowing the discharge characteristic to be recorded and the energy dissipation in the discharge circuit to be calculated. Discharge energy calculations were consistent with the low-voltage work, with about 30-40% of the total initial capacitive energy being dissipated in the resulting high-voltage arc. After the system was characterised and its operation validated, high-voltage ignition energy measurements were conducted on a solution of n-Pentane evaporating in a 250 cm3 chamber. A series of ignition experiments was conducted to determine the minimum ignition energy of n-Pentane. The data from the ignition work were analysed with standard statistical regression methods for tests that return binary (yes/no) data and found to be in agreement with recent publications. The research demonstrates that energy dissipation is heavily dependent on the circuit configuration, most especially on the discharge circuit's capacitance and resistance. The analysis established a discharge profile for the discharges studied and validates the application of this methodology to further research into different materials and atmospheres by systematically examining the discharge profiles of test materials with various parameters (e.g., capacitance, inductance and resistance). Systematic experiments examining the discharge characteristics of the spark will also help to clarify how energy is dissipated in an electrostatic discharge, enabling a better understanding of the ignition characteristics of materials in terms of energy and the dissipation of that energy in an electrostatic discharge.
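As a rough companion to the energy figures above, the sketch below computes the stored capacitive energy E = ½CV², scales it by the reported 30-40% dissipation fraction, and fits a simple logistic curve to binary ignite/no-ignite outcomes of the kind used to estimate a minimum ignition energy. The numbers and the regression routine are illustrative assumptions, not the experiment's data or analysis method.

```python
# Illustrative sketch only: stored capacitive energy and a logistic fit to binary
# (ignite / no-ignite) outcomes, a common way to estimate a 50% ignition energy.
# All values below are made up for demonstration.
import numpy as np

def stored_energy(capacitance_f, voltage_v):
    """Energy initially stored in the capacitor, E = 0.5 * C * V^2 (joules)."""
    return 0.5 * capacitance_f * voltage_v ** 2

# e.g. 100 pF charged to 10 kV stores 5 mJ; if roughly 30-40% appears in the arc,
# the discharge energy is on the order of 1.5-2 mJ.
E = stored_energy(100e-12, 10e3)
print(E, 0.3 * E, 0.4 * E)

def fit_logistic(energies_mj, ignited, lr=0.1, steps=5000):
    """Simple gradient-descent logistic regression: P(ignition) vs log10(energy)."""
    x = np.log10(np.asarray(energies_mj))
    y = np.asarray(ignited, dtype=float)
    w, b = 0.0, 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(w * x + b)))
        w -= lr * np.mean((p - y) * x)
        b -= lr * np.mean(p - y)
    return w, b  # 50% ignition probability at energy 10**(-b/w) mJ

w, b = fit_logistic([0.1, 0.2, 0.5, 1.0, 2.0, 5.0], [0, 0, 0, 1, 1, 1])
print("E50 ~", 10 ** (-b / w), "mJ")
```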
Abstract:
This dissertation seeks to define and classify potential forms of Nonlinear structure and explore the possibilities they afford for the creation of new musical works. It provides the first comprehensive framework for the discussion of Nonlinear structure in musical works and a detailed overview of the rise of nonlinearity in music during the 20th century. Nonlinear events are shown to emerge through significant parametrical discontinuity at the boundaries between regions of relatively strong internal cohesion. The dissertation situates Nonlinear structures in relation to linear structures and unstructured sonic phenomena and provides a means of evaluating Nonlinearity in a musical structure through consideration of the degree to which the structure is integrated, contingent, compressible and determinate as a whole. It is proposed that Nonlinearity can be classified within a three-dimensional space described by three continuums: the temporal continuum, encompassing sequential and multilinear forms of organization; the narrative continuum, encompassing processual, game-structure and developmental narrative forms; and the referential continuum, encompassing stylistic allusion, adaptation and quotation. The use of spectrograms of recorded musical works is proposed as a means of evaluating Nonlinearity in a musical work through the visual representation of parametrical divergence in pitch, duration, timbre and dynamic over time. Spectral and structural analysis of repertoire works is undertaken as part of an exploration of musical nonlinearity and the compositional and performative features that characterize it. The contribution of cultural, ideological, scientific and technological shifts to the emergence of Nonlinearity in music is discussed, and a range of compositional factors that contributed to the emergence of musical Nonlinearity is examined. The evolution of notational innovations from the mobile score to the screen score is traced, and a novel framework for the discussion of these forms of musical transmission is proposed. A computer-coordinated performative model is discussed, in which a computer synchronises the screening of notational information, provides temporal coordination of the performers through click-tracks or similar methods, and synchronises the audio processing and synthesized elements of the work. It is proposed that such a model constitutes a highly effective means of realizing complex Nonlinear structures. A creative folio comprising 29 original works that explore nonlinearity is presented, discussed and categorised utilising the proposed classifications. Spectrograms of these works are employed where appropriate to illustrate the instantiation of parametrically divergent substructures and examples of structural openness through multiple versioning.
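A minimal sketch of the spectrogram approach described above: compute and plot a time-frequency representation of a recording so that parametrical discontinuities in pitch, timbre and dynamics appear as visible boundaries. The audio path, FFT window and display settings are placeholder assumptions.

```python
# Sketch: render a spectrogram of a recorded work to visualise parametrical divergence.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram
import matplotlib.pyplot as plt

rate, audio = wavfile.read("work.wav")          # placeholder path
if audio.ndim > 1:
    audio = audio.mean(axis=1)                  # mix to mono
f, t, Sxx = spectrogram(audio, fs=rate, nperseg=2048, noverlap=1024)
plt.pcolormesh(t, f, 10 * np.log10(Sxx + 1e-12), shading="auto")  # dB scale
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.show()
```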
Abstract:
Parallel interleaved converters are finding more applications every day; for example, they are frequently used for VRMs on PC motherboards, mainly to obtain better transient response. Parallel interleaved converters can have their inductances uncoupled, directly coupled or inversely coupled, all of which suit different applications with associated advantages and disadvantages. Coupled systems offer more control over converter features such as ripple currents, inductance volume and transient response. To gain an intuitive understanding of which type of parallel interleaved converter, what amount of coupling, what number of levels and how much inductance should be used for different applications, a simple equivalent model is needed. As all phases of an interleaved converter are supposed to be identical, the equivalent model reduces to a separate inductance that is common to all phases. Without this simplification the design of a coupled system is quite daunting. Designing a coupled system involves solving and understanding the RMS currents of the input, the individual phase (or cell) and the output. A procedure using this equivalent model and a small amount of modulo arithmetic is detailed.
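As a hedged illustration of the equivalent-model idea and the modulo arithmetic it relies on, the sketch below treats N uncoupled interleaved buck phases: the summed output current behaves like a single converter switching at N times the frequency with an effective duty cycle D' = (N·D) mod 1. The formula and component values are textbook-style assumptions, not the paper's derivation, which also covers coupled inductors.

```python
# Illustration (uncoupled phases assumed): interleaving makes the combined output ripple
# look like a single converter at N*fs with effective duty D' = (N*D) mod 1.

def output_ripple_pp(vin, duty, n_phases, inductance, f_sw):
    """Peak-to-peak ripple of the summed phase currents (uncoupled inductors assumed)."""
    d_eff = (n_phases * duty) % 1.0          # effective duty of the combined waveform
    return vin * d_eff * (1.0 - d_eff) / (n_phases * inductance * f_sw)

def per_phase_ripple_pp(vin, duty, inductance, f_sw):
    """Peak-to-peak ripple of a single phase current."""
    return vin * duty * (1.0 - duty) / (inductance * f_sw)

vin, L, fsw = 12.0, 1e-6, 500e3              # arbitrary example values
for duty in (0.25, 0.5):
    for n in (1, 2, 4):
        print(f"D={duty} N={n}: phase {per_phase_ripple_pp(vin, duty, L, fsw):.2f} A, "
              f"output {output_ripple_pp(vin, duty, n, L, fsw):.2f} A")
```

Note how the output ripple vanishes whenever N·D is an integer (e.g. two phases at 50% duty), which is the ripple-cancellation property interleaving is used for.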
Abstract:
Over the last decade, the majority of existing search techniques have been either keyword-based or category-based, resulting in unsatisfactory effectiveness. Meanwhile, studies have illustrated that more than 80% of users prefer personalized search results. As a result, many studies have devoted a great deal of effort (referred to as collaborative filtering) to investigating personalized notions for enhancing retrieval performance. One of the fundamental yet most challenging steps is to capture precise user information needs. Most Web users are inexperienced or lack the capability to express their needs properly, whereas existing retrieval systems are highly sensitive to vocabulary. Researchers have increasingly proposed the utilization of ontology-based techniques to improve current mining approaches. These techniques are not only able to refine search intentions within specific generic domains, but also to access new knowledge by tracking semantic relations. In recent years, some researchers have attempted to build ontological user profiles according to discovered user background knowledge. The knowledge is drawn from both global and local analyses, which aim to produce tailored ontologies from a group of concepts. However, a key problem that has not been addressed is how to accurately match diverse local information to universal global knowledge. This research conducts a theoretical study on the use of personalized ontologies to enhance text mining performance. The objective is to understand user information needs by a "bag-of-concepts" rather than a "bag-of-words". The concepts are gathered from a general world knowledge base, the Library of Congress Subject Headings. To return desirable search results, a novel ontology-based mining approach is introduced to discover accurate search intentions and learn personalized ontologies as user profiles. The approach can not only pinpoint users' individual intentions in a rough hierarchical structure, but can also interpret their needs through a set of acknowledged concepts. Alongside the global and local analyses, a further concept-matching approach is carried out to address the mismatch between local information and world knowledge. Relevance features produced by the Relevance Feature Discovery model are used as representatives of local information. These features have been shown to be the best alternative to user queries for avoiding ambiguity, and they consistently outperform the features extracted by other filtering models. The two proposed approaches are both evaluated scientifically with the standard Reuters Corpus Volume 1 testing set. A comprehensive comparison is made with a number of state-of-the-art baseline models, including TF-IDF, Rocchio, Okapi BM25, the deploying Pattern Taxonomy Model, and an ontology-based model. The results indicate that top precision can be improved remarkably with the proposed ontology mining approach, and that the matching approach is successful, achieving significant improvements in most information filtering measurements. This research contributes to the fields of ontological filtering, user profiling and knowledge representation. The related outputs are critical when systems are expected to return proper mining results and provide personalized services. The scientific findings have the potential to facilitate the design of advanced preference mining models, which impact people's daily lives.
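A toy sketch of the bag-of-concepts idea discussed above: locally discovered relevance features (weighted terms) are matched onto a small global subject vocabulary and the covered concepts are ranked to form a profile. The vocabulary, weights and matching rule are illustrative stand-ins, not the thesis's ontology learning method or the Library of Congress Subject Headings.

```python
# Sketch: map local relevance features (term -> weight) onto global concepts by label
# overlap, then rank concepts for a personalized profile. Toy data throughout.
from collections import defaultdict

subject_headings = {                       # concept -> descriptive label terms (toy vocabulary)
    "Machine learning": {"machine", "learning", "algorithms"},
    "Information retrieval": {"information", "retrieval", "search"},
    "User profiles": {"user", "profile", "personalization"},
}

def match_concepts(relevance_features):
    """Score each global concept by the summed weight of the local features it covers."""
    scores = defaultdict(float)
    for term, weight in relevance_features.items():
        for concept, label_terms in subject_headings.items():
            if term in label_terms:
                scores[concept] += weight
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Local relevance features discovered from a user's documents (invented weights).
features = {"search": 0.9, "personalization": 0.7, "learning": 0.4, "retrieval": 0.8}
print(match_concepts(features))            # ranked concepts forming a personalized profile
```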
Abstract:
This thesis concerns the mathematical model of moving fluid interfaces in a Hele-Shaw cell: an experimental device in which fluid flow is studied by sandwiching the fluid between two closely separated plates. Analytic and numerical methods are developed to gain new insights into interfacial stability and bubble evolution, and the influence of different boundary effects is examined. In particular, the properties of the velocity-dependent kinetic undercooling boundary condition are analysed, with regard to the selection of only discrete possible shapes of travelling fingers of fluid, the formation of corners on the interface, and the interaction of kinetic undercooling with the better known effect of surface tension. Explicit solutions to the problem of an expanding or contracting ring of fluid are also developed.
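For orientation, a schematic scaled statement of the one-phase Hele-Shaw model with a combined surface-tension and kinetic-undercooling boundary condition is given below; the exact signs and scalings depend on convention and on whether the region is expanding or contracting, so treat this as a sketch rather than the thesis's precise formulation.

```latex
% Schematic, scaled one-phase Hele-Shaw model (coefficients and signs illustrative).
\begin{align*}
  \nabla^2 p &= 0 && \text{in the fluid region } \Omega(t) \quad (\text{Darcy flow, } \mathbf{v} = -\nabla p),\\
  v_n &= -\frac{\partial p}{\partial n} && \text{on the interface } \partial\Omega(t) \quad (\text{kinematic condition}),\\
  p &= \sigma\,\kappa + c\,v_n && \text{on } \partial\Omega(t) \quad (\text{surface tension } \sigma,\ \text{kinetic undercooling } c).
\end{align*}
```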
Abstract:
This paper evaluates the efficiency of a number of popular corpus-based distributional models in performing discovery on very large document sets, including online collections. Literature-based discovery is the process of identifying previously unknown connections from text, often published literature, that could lead to the development of new techniques or technologies. Literature-based discovery has attracted growing research interest ever since Swanson's serendipitous discovery of the therapeutic effects of fish oil on Raynaud's disease in 1986. The successful application of distributional models in automating the identification of the indirect associations underpinning literature-based discovery has been demonstrated extensively in the medical domain. However, we wish to investigate the computational complexity of distributional models for literature-based discovery on much larger document collections, as they may provide computationally tractable solutions to tasks such as predicting future disruptive innovations. In this paper we perform a computational complexity analysis on four successful corpus-based distributional models to evaluate their fit for such tasks. Our results indicate that corpus-based distributional models that store their representations in fixed dimensions provide superior efficiency on literature-based discovery tasks.
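To illustrate the fixed-dimension property highlighted in the results, the toy sketch below implements a random-indexing-style model: each term gets a sparse random index vector of constant width, context vectors accumulate in those same fixed dimensions, and an indirect (bridging-term) association is scored by cosine similarity. The corpus, dimensionality and scoring are invented for demonstration and are not the four models analysed in the paper.

```python
# Toy random-indexing sketch: memory stays O(vocabulary * DIM) regardless of corpus size,
# and terms that never co-occur can still score an indirect association via shared context.
import numpy as np

rng = np.random.default_rng(0)
DIM, NNZ = 256, 8                                    # fixed vector width, sparse seed entries

def index_vector():
    """Sparse random +/-1 'index vector' assigned to each term."""
    v = np.zeros(DIM)
    v[rng.choice(DIM, NNZ, replace=False)] = rng.choice([-1.0, 1.0], NNZ)
    return v

corpus = [["fish", "oil", "blood", "viscosity"],
          ["blood", "viscosity", "raynaud", "disease"]]

index, context = {}, {}
for doc in corpus:
    for term in doc:
        index.setdefault(term, index_vector())
        context.setdefault(term, np.zeros(DIM))
for doc in corpus:                                   # accumulate co-occurrence in fixed dims
    for term in doc:
        for other in doc:
            if other != term:
                context[term] += index[other]

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Indirect association: "oil" and "raynaud" never co-occur, but share bridging terms.
print(cosine(context["oil"], context["raynaud"]))
```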
Abstract:
The early warning based on real-time prediction of rain-induced instability of natural residual slopes helps to minimise human casualties due to such slope failures. Slope instability prediction is complicated, as it is influenced by many factors, including soil properties, soil behaviour, slope geometry, and the location and size of deep cracks in the slope. These deep cracks can facilitate rainwater infiltration into the deep soil layers and reduce the unsaturated shear strength of the residual soil. Subsequently, a slip surface can form, triggering a landslide even in partially saturated soil slopes. Although past research has shown the effects of surface cracks on soil stability, research examining the influence of deep cracks on soil stability is very limited. This study aimed to develop methodologies for predicting the real-time rain-induced instability of natural residual soil slopes with deep cracks. The results can be used to warn against potential rain-induced slope failures. The literature review conducted on rain-induced slope instability of unsaturated residual soil associated with soil cracks reveals that only limited studies have been done in the following areas related to this topic:
- Methods for detecting deep cracks in residual soil slopes.
- Practical application of unsaturated soil theory in slope stability analysis.
- Mechanistic methods for real-time prediction of rain-induced residual soil slope instability in critical slopes with deep cracks.
Two natural residual soil slopes at Jombok Village, Ngantang City, Indonesia, which are located near a residential area, were investigated to obtain the parameters required for the stability analysis of the slope. A survey first identified all related field geometrical information, including slope, roads, rivers, buildings, and boundaries of the slope. Second, the electrical resistivity tomography (ERT) method was used on the slope to identify the location and geometrical characteristics of deep cracks. The two ERT array models employed in this research were dipole-dipole and azimuthal. Next, bore-hole tests were conducted at different locations in the slope to identify soil layers and to collect undisturbed soil samples for laboratory measurement of the soil parameters required for the stability analysis. At the same bore-hole locations, the Standard Penetration Test (SPT) was undertaken. Undisturbed soil samples taken from the bore-holes were tested in a laboratory to determine the variation of the following soil properties with depth:
- Classification and physical properties such as grain size distribution, Atterberg limits, water content, dry density and specific gravity.
- Saturated and unsaturated shear strength properties, using direct shear apparatus.
- Soil water characteristic curves (SWCC), using the filter paper method.
- Saturated hydraulic conductivity.
The following three methods were used to detect and simulate the location and orientation of cracks in the investigated slope: (1) the electrical resistivity distribution of sub-soil obtained from ERT; (2) the profile of classification and physical properties of the soil, based on laboratory testing of soil samples collected from bore-holes and visual observations of the cracks on the slope surface; and (3) the results of stress distribution obtained from 2D dynamic analysis of the slope using QUAKE/W software, together with the laboratory-measured soil parameters and earthquake records of the area.
It was assumed that the deep crack in the slope under investigation was generated by earthquakes. Good agreement was obtained when comparing the location and orientation of the cracks detected by Method-1 and Method-2. However, the cracks simulated by Method-3 were not in good agreement with the output of Method-1 and Method-2; this may have been due to the material properties used and the assumptions made for the analysis. From Method-1 and Method-2, it can be concluded that the ERT method can be used to detect the location and orientation of a crack in a soil slope when the ERT is conducted in very dry or very wet soil conditions. In this study, the cracks detected by the ERT were used for the stability analysis of the slope. The stability of the slope was determined using the factor of safety (FOS) of a critical slip surface obtained by SLOPE/W using the limit equilibrium method. Pore-water pressure values for the stability analysis were obtained by coupling with a transient seepage analysis of the slope using the finite element based software SEEP/W. A parametric study conducted on the stability of the investigated slope revealed that the existence of deep cracks and their location in the soil slope are critical for its stability. The following two steps are proposed to predict the rain-induced instability of a residual soil slope with cracks. (a) Step-1: The transient stability analysis of the slope is conducted from the date of the investigation (initial conditions are based on the investigation) to the current date, using measured rainfall data. The stability analyses are then continued for the next 12 months using predicted annual rainfall based on the previous five years' rainfall data for the area. (b) Step-2: The stability of the slope is calculated in real time using real-time measured rainfall. In this calculation, rainfall is predicted for the next hour or 24 hours and the stability of the slope is calculated one hour or 24 hours in advance using real-time rainfall data. If the Step-1 analysis shows critical stability for the forthcoming year, it is recommended that Step-2 be used for more accurate warning against future failure of the slope. In this research, the application of Step-1 to an investigated slope (Slope-1) showed that its stability was not approaching a critical value for the year 2012 (until 31st December 2012); therefore, the application of Step-2 was not necessary for that year. A case study (Slope-2) was used to verify the applicability of the complete proposed predictive method. A landslide event at Slope-2 occurred on 31st October 2010. The transient seepage and stability analyses of the slope, using data obtained from field tests such as bore-hole, SPT, ERT and laboratory tests, were conducted on 12th June 2010 following Step-1, and found that the slope was in a critical condition on that date. It was then shown that the application of Step-2 could have predicted this failure with sufficient warning time.
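As a simplified aside, the sketch below uses an infinite-slope, Mohr-Coulomb factor-of-safety check to show the mechanism the prediction relies on: rising pore-water pressure from rainfall infiltration (or loss of suction) reduces the factor of safety. The soil parameters are placeholders and the calculation is far simpler than the coupled SEEP/W and SLOPE/W analyses used in the study.

```python
# Illustrative infinite-slope factor of safety (not the SLOPE/W model): higher pore-water
# pressure lowers the resisting shear strength and hence the FOS. Placeholder parameters.
import math

def factor_of_safety(c_eff, phi_eff_deg, unit_weight, depth, slope_deg, pore_pressure):
    """FOS = resisting shear strength / driving shear stress on a planar slip surface."""
    beta = math.radians(slope_deg)
    phi = math.radians(phi_eff_deg)
    normal_stress = unit_weight * depth * math.cos(beta) ** 2          # kPa
    shear_stress = unit_weight * depth * math.sin(beta) * math.cos(beta)
    strength = c_eff + (normal_stress - pore_pressure) * math.tan(phi)  # Mohr-Coulomb
    return strength / shear_stress

# Suction (negative u) keeps the slope stable; positive pore pressure after infiltration
# drives the FOS below 1.
for u in (-10.0, 0.0, 20.0):   # kPa
    print(f"u = {u:5.1f} kPa -> FOS = {factor_of_safety(5.0, 30.0, 18.0, 3.0, 35.0, u):.2f}")
```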
Abstract:
Electricity is the cornerstone of modern life. It is essential to economic stability and growth, jobs and improved living standards. Electricity is also the fundamental ingredient for a dignified life; it is the source of such basic human requirements as cooked food, a comfortable living temperature and essential health care. For these reasons, it is unimaginable that today's economies could function without electricity and the modern energy services that it delivers. Somewhat ironically, however, the current approach to electricity generation also contributes to two of the gravest and most persistent problems threatening the livelihood of humans. These problems are anthropogenic climate change and sustained human poverty. To address these challenges, the global electricity sector must reduce its reliance on fossil fuel sources. In this context, the object of this research is twofold. Initially it is to consider the design of the Renewable Energy (Electricity) Act 2000 (Cth) (Renewable Electricity Act), which represents Australia's primary regulatory approach to increase the production of renewable sourced electricity. This analysis is conducted by reference to the regulatory models that exist in Germany and Great Britain. Within this context, this thesis then evaluates whether the Renewable Electricity Act is designed effectively to contribute to a more sustainable and dignified electricity generation sector in Australia. On the basis of the appraisal of the Renewable Electricity Act, this thesis contends that while certain aspects of the regulatory regime have merit, ultimately its design does not represent an effective and coherent regulatory approach to increase the production of renewable sourced electricity. In this regard, this thesis proposes a number of recommendations to reform the existing regime. These recommendations are not intended to provide instantaneous or simple solutions to the current regulatory regime. Instead, the purpose of these recommendations is to establish the legal foundations for an effective regulatory regime that is designed to increase the production of renewable sourced electricity in Australia in order to contribute to a more sustainable and dignified approach to electricity production.
Abstract:
Introduction. The purpose of this chapter is to address the question raised in the chapter title. Specifically, how can models of motor control help us understand low back pain (LBP)? There are several classes of models that have been used in the past for studying spinal loading, stability, and risk of injury (see Reeves and Cholewicki (2003) for a review of past modeling approaches), but for the purpose of this chapter we will focus primarily on models used to assess motor control and its effect on spine behavior. This chapter consists of 4 sections. The first section discusses why a shift in modeling approaches is needed to study motor control issues. We will argue that the current approach for studying the spine system is limited and not well-suited for assessing motor control issues related to spine function and dysfunction. The second section will explore how models can be used to gain insight into how the central nervous system (CNS) controls the spine. This segues nicely into the next section, which will address how models of motor control can be used in the diagnosis and treatment of LBP. Finally, the last section will deal with the issue of model verification and validity. This issue is important since modelling accuracy is critical for obtaining useful insight into the behavior of the system being studied. This chapter is not intended to be a critical review of the literature, but instead intended to capture some of the discussion raised during the 2009 Spinal Control Symposium, with some elaboration on certain issues. Readers interested in more details are referred to the cited publications.
Abstract:
This project’s aim was to create new experimental models in small animals for the investigation of infections related to bone fracture fixation implants. Animal models are essential in orthopaedic trauma research and this study evaluated new implants and surgical techniques designed to improve standardisation in these experiments, and ultimately to minimise the number of animals needed in future work. This study developed and assessed procedures using plates and inter-locked nails to stabilise fractures in rabbit thigh bones. Fracture healing was examined with mechanical testing and histology. The results of this work contribute to improvements in future small animal infection experiments.
Abstract:
Pavlovian fear conditioning is a robust technique for examining behavioral and cellular components of fear learning and memory. In fear conditioning, the subject learns to associate a previously neutral stimulus with an inherently noxious co-stimulus. The learned association is reflected in the subjects' behavior upon subsequent re-exposure to the previously neutral stimulus or the training environment. Using fear conditioning, investigators can obtain a large amount of data that describe multiple aspects of learning and memory. In a single test, researchers can evaluate functional integrity in fear circuitry, which is both well characterized and highly conserved across species. Additionally, the availability of sensitive and reliable automated scoring software makes fear conditioning amenable to high-throughput experimentation in the rodent model; thus, this model of learning and memory is particularly useful for pharmacological and toxicological screening. Due to the conserved nature of fear circuitry across species, data from Pavlovian fear conditioning are highly translatable to human models. We describe equipment and techniques needed to perform and analyze conditioned fear data. We provide two examples of fear conditioning experiments, one in rats and one in mice, and the types of data that can be collected in a single experiment.
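As an illustration of the kind of automated scoring mentioned above, the sketch below classifies frames as freezing when a motion index stays below a threshold for a minimum bout length and reports percent freezing. The threshold, frame rate and simulated data are assumptions for demonstration and do not correspond to any particular scoring package.

```python
# Sketch: percent freezing from a per-frame motion index, using a threshold and a minimum
# bout duration. All values are illustrative.
import numpy as np

def percent_freezing(motion_index, threshold=0.05, fps=30, min_bout_s=1.0):
    """Fraction of frames spent in freezing bouts of at least min_bout_s seconds."""
    below = motion_index < threshold
    min_frames = int(min_bout_s * fps)
    freezing = np.zeros_like(below)
    start = None
    for i, b in enumerate(np.append(below, False)):     # sentinel closes the final bout
        if b and start is None:
            start = i
        elif not b and start is not None:
            if i - start >= min_frames:
                freezing[start:i] = True
            start = None
    return 100.0 * freezing.mean()

# Simulated 60 s recording: mostly still after a conditioned stimulus at 30 s.
rng = np.random.default_rng(1)
motion = np.concatenate([rng.uniform(0.1, 1.0, 30 * 30), rng.uniform(0.0, 0.04, 30 * 30)])
print(f"{percent_freezing(motion):.1f}% freezing")
```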
Abstract:
Pavlovian fear conditioning, also known as classical fear conditioning, is an important model in the study of the neurobiology of normal and pathological fear. Progress in the neurobiology of Pavlovian fear also enhances our understanding of disorders such as posttraumatic stress disorder (PTSD) and assists with developing effective treatment strategies. Here we describe how Pavlovian fear conditioning is a key tool for understanding both the neurobiology of fear and the mechanisms underlying variations in fear memory strength observed across different phenotypes. First, we discuss how Pavlovian fear models aspects of PTSD. Second, we describe the neural circuits of Pavlovian fear and the molecular mechanisms within these circuits that regulate fear memory. Finally, we show how fear memory strength is heritable and describe genes that are specifically linked to both changes in Pavlovian fear behavior and its underlying neural circuitry. These emerging data begin to define the essential genes, cells and circuits that contribute to normal and pathological fear.
Communication models of institutional online communities: the role of the ABC cultural intermediary
Abstract:
The co-creation of cultural artefacts has been democratised by the recent technological affordances of information and communication technologies. Web 2.0 technologies have enabled greater possibilities for citizen inclusion within the media conversations of their nations. For example, the Australian audience has more opportunities to collaboratively produce and tell their story to a broader audience via the public service media (PSM) facilitated platforms of the Australian Broadcasting Corporation (ABC). However, providing open collaborative production for the audience raises the problem of how the PSM might manage the interests of all the stakeholders and align those interests with its legislated Charter. This paper considers this problem through the ABC’s user-created content participatory platform, ABC Pool, and highlights the cultural intermediary as the role responsible for managing these tensions. The paper also suggests that cultural intermediation is a useful framework for other media organisations engaging in co-creative activities with their audiences.