900 results for Which-way experiments
Abstract:
The feasibility of using an in-hardware implementation of a genetic algorithm (GA) to solve the computationally expensive travelling salesman problem (TSP) is explored, especially in regard to hardware resource requirements for problem and population sizes. We investigate via numerical experiments whether a small population size might prove sufficient to obtain reasonable-quality solutions for the TSP, thereby permitting a relatively resource-efficient hardware implementation on field-programmable gate arrays (FPGAs). Software experiments on two TSP benchmarks involving 48 and 532 cities were used to explore the extent to which population size can be reduced without compromising solution quality, and results show that a GA allowed to run for a large number of generations with a smaller population size can yield solutions of comparable quality to those obtained using a larger population. This finding is then used to investigate feasible problem sizes on a targeted Virtex-7 vx485T-2 FPGA platform via exploration of hardware resource requirements for memory and data flow operations.
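To make the small-population idea concrete, the following is a minimal, illustrative software sketch of a GA for the TSP that uses a deliberately small population run for many generations. The operators (binary tournament selection, order crossover, swap mutation) and all parameter values are assumptions chosen for illustration; the thesis's actual GA configuration and its FPGA implementation are not reproduced here.

```python
import math
import random

def tour_length(tour, coords):
    """Total length of a closed tour over city coordinates."""
    return sum(math.dist(coords[tour[i]], coords[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def order_crossover(p1, p2):
    """Order crossover (OX): copy a slice from p1, fill the rest in p2's order."""
    n = len(p1)
    a, b = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[a:b] = p1[a:b]
    fill = [c for c in p2 if c not in child]
    for i in range(n):
        if child[i] is None:
            child[i] = fill.pop(0)
    return child

def small_population_ga(coords, pop_size=8, generations=20000, mut_rate=0.2):
    """Steady-state GA with a deliberately small population run for many generations."""
    n = len(coords)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    best = min(pop, key=lambda t: tour_length(t, coords))
    for _ in range(generations):
        # Binary tournament selection of two parents.
        parents = [min(random.sample(pop, 2), key=lambda t: tour_length(t, coords))
                   for _ in range(2)]
        child = order_crossover(*parents)
        if random.random() < mut_rate:  # swap mutation
            i, j = random.sample(range(n), 2)
            child[i], child[j] = child[j], child[i]
        # Replace the current worst individual with the new child.
        worst = max(range(pop_size), key=lambda k: tour_length(pop[k], coords))
        pop[worst] = child
        if tour_length(child, coords) < tour_length(best, coords):
            best = child
    return best, tour_length(best, coords)

if __name__ == "__main__":
    random.seed(0)
    cities = [(random.random(), random.random()) for _ in range(48)]  # 48 random cities
    tour, length = small_population_ga(cities)
    print(f"best tour length found: {length:.3f}")
```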
Abstract:
Advances in algorithms for approximate sampling from a multivariable target function have led to solutions to challenging statistical inference problems that would otherwise not be considered by the applied scientist. Such sampling algorithms are particularly relevant to Bayesian statistics, since the target function is the posterior distribution of the unobservables given the observables. In this thesis we develop, adapt and apply Bayesian algorithms, whilst addressing substantive applied problems in biology and medicine as well as other applications.

For an increasing number of high-impact research problems, the primary models of interest are often sufficiently complex that the likelihood function is computationally intractable. Rather than discard these models in favour of inferior alternatives, a class of Bayesian "likelihood-free" techniques (often termed approximate Bayesian computation (ABC)) has emerged in the last few years, which avoids direct likelihood computation by repeatedly simulating data from the model and comparing observed and simulated summary statistics. In Part I of this thesis we utilise sequential Monte Carlo (SMC) methodology to develop new algorithms for ABC that are more efficient in terms of the number of model simulations required and are almost black-box, since very little algorithmic tuning is required. In addition, we address the issue of deriving appropriate summary statistics to use within ABC via a goodness-of-fit statistic and indirect inference.

Another important problem in statistics is the design of experiments: that is, how one should select the values of the controllable variables in order to achieve some design goal. The presence of parameter and/or model uncertainty is a computational obstacle when designing experiments, but can lead to inefficient designs if not accounted for correctly. The Bayesian framework accommodates such uncertainties in a coherent way. If the amount of uncertainty is substantial, it can be of interest to perform adaptive designs in order to accrue information to make better decisions about future design points. This is of particular interest if the data can be collected sequentially. In a sense, the current posterior distribution becomes the new prior distribution for the next design decision. Part II of this thesis creates new algorithms for Bayesian sequential design that accommodate parameter and model uncertainty using SMC. The algorithms are substantially faster than previous approaches, allowing the simulation properties of various design utilities to be investigated in a more timely manner. Furthermore, the approach offers convenient estimation of Bayesian utilities and other quantities that are particularly relevant in the presence of model uncertainty.

Finally, Part III of this thesis tackles a substantive medical problem. A neurological disorder known as motor neuron disease (MND) progressively causes motor neurons to lose the ability to innervate the muscle fibres, causing the muscles to eventually waste away. When this occurs the motor unit effectively ‘dies’. There is no cure for MND, and fatality often results from a lack of muscle strength to breathe. The prognosis for many forms of MND (particularly amyotrophic lateral sclerosis (ALS)) is particularly poor, with patients usually only surviving a small number of years after the initial onset of disease. Measuring the progress of diseases of the motor units, such as ALS, is a challenge for clinical neurologists.
Motor unit number estimation (MUNE) attempts to assess underlying motor unit loss directly, rather than relying on indirect techniques such as muscle strength assessment, which is generally unable to detect progression because of the body’s natural attempts at compensation. Part III of this thesis builds upon a previous Bayesian technique that develops a sophisticated statistical model taking into account physiological information about motor unit activation and various sources of uncertainty. More specifically, we develop a more reliable MUNE method by marginalising over latent variables in order to improve the performance of a previously developed reversible jump Markov chain Monte Carlo sampler. We also make other subtle changes to the model and algorithm to improve the robustness of the approach.
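As background to the likelihood-free methods discussed in Part I, the following is a minimal rejection-ABC sketch: parameters are drawn from the prior, data are simulated from the model, and the draws whose simulated summary statistics lie closest to the observed ones are retained. The toy model, summary statistics and acceptance rule are assumptions for illustration; the SMC-based ABC algorithms and the MUNE model developed in the thesis are considerably more sophisticated.

```python
import numpy as np

def simulate(theta, n=100, rng=None):
    """Toy stand-in for a model with an intractable likelihood."""
    rng = rng if rng is not None else np.random.default_rng()
    return rng.normal(theta, 1.0, size=n)

def summary(data):
    """Summary statistics; choosing good summaries is itself a topic of the thesis."""
    return np.array([data.mean(), data.std()])

def abc_rejection(observed, prior_sample, n_accept=500, quantile=0.01, rng=None):
    """Keep the prior draws whose simulated summaries are closest to the observed ones."""
    rng = rng if rng is not None else np.random.default_rng(0)
    s_obs = summary(observed)
    n_sim = int(n_accept / quantile)             # total number of model simulations
    draws, dists = [], []
    for _ in range(n_sim):
        theta = prior_sample(rng)
        s_sim = summary(simulate(theta, rng=rng))
        draws.append(theta)
        dists.append(np.linalg.norm(s_sim - s_obs))
    keep = np.argsort(dists)[:n_accept]          # accept the closest fraction
    return np.asarray(draws)[keep]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    y_obs = simulate(theta=2.5, rng=rng)                       # "observed" data
    post = abc_rejection(y_obs, prior_sample=lambda r: r.uniform(-10, 10), rng=rng)
    print(f"ABC posterior mean: {post.mean():.2f} (true value 2.5)")
```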
Abstract:
A simple experimental apparatus is described in which a wide variety of vapor phase nucleation studies of refractory materials could be performed aboard NASA's KC-135 Research Aircraft. The chief advantage of a microgravity environment for these studies is the expected absence of thermally driven convective motions in the gas. The absence of convection leads to much more accurate knowledge of both the temperature distribution in the system and the time evolution of the refractory vapor concentration as a function of distance from the crucible. The evolution of the apparatus will be described as more experience is gained with the microgravity environment. Such experiments will be used to prepare for similar ones carried out aboard either the shuttle or Space Station where considerably longer duration experiments are possible.
Abstract:
Many continuum mechanical models, such as liquid drop models and solid models, have been developed for studying the biomechanics of single living cells. However, these models do not fully capture important aspects of the behaviour of single living cells, such as swelling and drag effects. The porohyperelastic (PHE) model, which can capture those aspects, is therefore a good candidate for studying cell behaviour (here, chondrocytes). In this research, an FEM model of a single chondrocyte is developed using the PHE model to simulate Atomic Force Microscopy (AFM) experimental results under varying strain rates. This material model is compared with a viscoelastic model to demonstrate the advantages of the PHE model. The results show that the maximum applied force in the PHE model is lower at lower strain rates. This is because at very high strain rates the mobile fluid does not have enough time to exude, and because the permeability of the membrane is lower than that of the chondrocyte's protoplasm. This behaviour is barely observed in the viscoelastic model. Thus, the PHE model is the better model for cell biomechanics studies.
Abstract:
Nowadays people rely heavily on the Internet for information and knowledge. Wikipedia is an online multilingual encyclopaedia that contains a very large number of detailed articles covering most written languages, and it is often considered a treasury of human knowledge. It includes extensive hypertext links between documents of the same language for easy navigation. However, pages in different languages are rarely cross-linked except for direct equivalent pages on the same subject. This can pose serious difficulties to users seeking information or knowledge from sources in different languages, or where there is no equivalent page in one language or another. In this thesis, a new information retrieval task, cross-lingual link discovery (CLLD), is proposed to tackle the lack of cross-lingual anchored links in a knowledge base such as Wikipedia. In contrast to traditional information retrieval tasks, cross-lingual link discovery algorithms actively recommend a set of meaningful anchors in a source document and establish links to documents in another language. In other words, cross-lingual link discovery is a way of automatically finding hypertext links between documents in different languages, which is particularly helpful for knowledge discovery across language domains.

This study focuses specifically on Chinese / English link discovery (C/ELD), a special case of the cross-lingual link discovery task that involves natural language processing (NLP), cross-lingual information retrieval (CLIR) and cross-lingual link discovery. To assess the effectiveness of CLLD, a standard evaluation framework is also proposed. The evaluation framework includes topics, document collections, a gold standard dataset, evaluation metrics, and toolkits for run pooling, link assessment and system evaluation. With this framework, the performance of CLLD approaches and systems can be quantified.

This thesis contributes to research on natural language processing and cross-lingual information retrieval in CLLD as follows: 1) a new, simple but effective Chinese segmentation method, n-gram mutual information, is presented for determining the boundaries of Chinese text; 2) a voting mechanism for named entity translation is demonstrated that achieves high precision in English / Chinese machine translation; 3) a link mining approach that mines the existing link structure for anchor probabilities achieves encouraging results in suggesting cross-lingual Chinese / English links in Wikipedia. This approach was examined in experiments on the automatic generation of cross-lingual links carried out as part of the study. The overall major contribution of this thesis is the provision of a standard evaluation framework for cross-lingual link discovery research. This framework is important in CLLD evaluation because it helps benchmark the performance of various CLLD systems and identify good CLLD realisation approaches. The evaluation methods and the evaluation framework described in this thesis have been utilised to quantify system performance in the NTCIR-9 Crosslink task, the first information retrieval track of its kind.
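To illustrate the third contribution, the sketch below shows one simple way of mining an existing link structure for anchor probabilities: for each anchor phrase, estimate how often it is linked when it appears and which target it usually points to. The data layout (`pages` as a list of dicts) and the two probability scores are illustrative assumptions, not the thesis's actual pipeline over the Wikipedia dump.

```python
from collections import defaultdict

def anchor_probabilities(pages):
    """
    Estimate how 'linkable' each anchor phrase is and which target it usually points to.
    `pages` is a hypothetical, simplified stand-in for a parsed Wikipedia dump:
    a list of dicts {"text": "...", "links": [("anchor text", "TargetArticle"), ...]}.
    """
    link_count = defaultdict(int)                          # phrase used as an anchor
    target_count = defaultdict(lambda: defaultdict(int))   # anchor -> target -> count

    # Pass 1: mine the existing link structure.
    for page in pages:
        for anchor, target in page["links"]:
            link_count[anchor] += 1
            target_count[anchor][target] += 1

    # Pass 2: count how often each anchor phrase occurs at all (linked or not).
    mention_count = defaultdict(int)
    for page in pages:
        for anchor in link_count:
            mention_count[anchor] += page["text"].count(anchor)

    scores = {}
    for anchor, n_link in link_count.items():
        n_mention = max(mention_count[anchor], n_link)     # guard against zero
        best_target, n_best = max(target_count[anchor].items(), key=lambda kv: kv[1])
        scores[anchor] = {
            "link_probability": n_link / n_mention,        # P(phrase is used as an anchor)
            "best_target": best_target,
            "target_probability": n_best / n_link,         # P(most common target | anchor)
        }
    return scores

if __name__ == "__main__":
    toy_pages = [
        {"text": "Quantum mechanics underpins the double-slit experiment.",
         "links": [("double-slit experiment", "Double-slit_experiment")]},
        {"text": "The double-slit experiment illustrates wave-particle duality.",
         "links": [("wave-particle duality", "Wave-particle_duality")]},
    ]
    for anchor, info in anchor_probabilities(toy_pages).items():
        print(anchor, info)
```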
Abstract:
The trial in Covecorp Constructions Pty Ltd v Indigo Projects Pty Ltd (File no BS 10157 of 2001; BS 2763 of 2002) commenced on 8 October 2007 before Fryberg J, but the matter settled on 6 November 2007 before the conclusion of the trial. This case was conducted as an “electronic trial” with the use of technology developed within the court. This was the first case in Queensland to employ this technology at trial level. The Court’s aim was to find a means to capture the key benefits which are offered by the more sophisticated trial presentation software of commercial service providers, in a way that was inexpensive for the parties and would facilitate the adoption of technology at trial much more broadly than has been the case to date.
Abstract:
Australian journalism schools are full of students who have never met an Aboriginal or Torres Strait Islander person and who do not know their history. Journalism educators are ill-equipped to redress this imbalance, as a large majority are themselves non-Indigenous and many have had little or no experience with the coverage of Indigenous issues or knowledge of Indigenous affairs. Such a situation calls for educational approaches that can overcome these disadvantages and empower journalism graduates to move beyond the stereotypes that characterize the representation of Indigenous people in the mainstream media. This article explores three courses in three Australian tertiary journalism education institutions that use work-integrated learning approaches to instil the cultural competencies necessary to encourage more informed reporting of Indigenous issues. The findings from the three projects illustrate the importance of a collaborative approach between industry, the Indigenous community and educators in encouraging students' commitment to quality journalism practices when covering Indigenous issues.
Abstract:
Computer experiments, consisting of a number of runs of a computer model with different inputs, are now commonplace in scientific research. Using a simple fire model for illustration, some guidelines are given for the size of a computer experiment. A graph relating the error of prediction to the sample size is provided, which should be of use when designing computer experiments. Methods for augmenting computer experiments with extra runs are also described and illustrated. The simplest method involves adding one point at a time, choosing the point with the maximum prediction variance. Another method that appears to work well is to choose points from a candidate set with maximum determinant of the variance-covariance matrix of predictions.
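A minimal sketch of the simplest augmentation rule described above, adding one point at a time at the location of maximum prediction variance, is given below. It assumes a Gaussian process emulator (here scikit-learn's `GaussianProcessRegressor` with an RBF kernel) as the surrogate and a cheap test function standing in for the computer model; both are illustrative choices rather than the paper's actual setup.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def augment_design(X, y, candidates, n_new, simulator):
    """
    Sequentially add the candidate input with the largest prediction variance,
    run the simulator there, and refit the emulator.
    """
    X, y = np.asarray(X, float), np.asarray(y, float)
    candidates = list(map(tuple, np.asarray(candidates, float)))
    for _ in range(n_new):
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
        gp.fit(X, y)
        _, std = gp.predict(np.asarray(candidates), return_std=True)
        best = int(np.argmax(std))                  # the point we know least about
        x_new = candidates.pop(best)
        X = np.vstack([X, x_new])
        y = np.append(y, simulator(np.asarray(x_new)))
    return X, y

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    f = lambda x: np.sin(3 * x[0]) + x[0] ** 2      # cheap stand-in for the computer code
    X0 = rng.uniform(0, 2, size=(5, 1))             # initial design
    y0 = np.array([f(x) for x in X0])
    cand = rng.uniform(0, 2, size=(50, 1))          # candidate set
    X_aug, y_aug = augment_design(X0, y0, cand, n_new=5, simulator=f)
    print(f"design grew from {len(X0)} to {len(X_aug)} runs")
```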
Abstract:
Deterministic computer simulations of physical experiments are now common techniques in science and engineering. Often, physical experiments are too time-consuming, expensive or impossible to conduct, so complex computer models, or codes, are used instead; this leads to the study of computer experiments, which are used to investigate many scientific phenomena of this nature. A computer experiment consists of a number of runs of the computer code with different input choices. The design and analysis of computer experiments is a rapidly growing area of statistical experimental design. This thesis investigates some practical issues in the design and analysis of computer experiments and attempts to answer some of the questions faced by experimenters using computer experiments. In particular, the question of how many computer experiments are needed and how they should be augmented is studied, and attention is given to the case when the response is a function over time.
Abstract:
The ability of piezoelectric transducers to convert energy is being exploited in a rapidly expanding range of applications. Industrial applications of high power ultrasound transducers include surface cleaning, water treatment, plastic welding and food sterilization. High power ultrasound transducers also play an important role in biomedical applications, both diagnostic and therapeutic. An ultrasound transducer is usually applied to convert electrical energy to mechanical energy and vice versa. In some high power ultrasound systems, ultrasound transducers are used as a transmitter, as a receiver, or both: as a transmitter, the device converts electrical energy to mechanical energy, while as a receiver it converts mechanical energy to electrical energy, acting as a sensor for the control system. Once a piezoelectric transducer is excited by an electrical signal, the piezoelectric material starts to vibrate and generates ultrasound waves. The portion of the ultrasound waves that passes through the medium is sensed by the receiver and converted to electrical energy.

To drive an ultrasound transducer, the excitation signal must be properly designed; otherwise an undesired (low-quality) signal can degrade the energy conversion performance of the transducer and increase power consumption in the system. For instance, some portion of the generated power may be delivered at unwanted frequencies, which is not acceptable for some applications, especially biomedical ones. To achieve better performance, the characteristics of the high power ultrasound transducer should be taken into consideration along with the quality of the excitation signal. In this regard, several simulation and experimental tests are carried out in this research to model high power ultrasound transducers and systems. During these experiments, high power ultrasound transducers are excited by several excitation signals with different amplitudes and frequencies, using a network analyser, a signal generator, a high power amplifier and a multilevel converter. To analyse the behaviour of the ultrasound system, the voltage ratio of the system is also measured in different tests: the voltage across the transmitter is measured as the input voltage and then divided by the output voltage measured across the receiver. The results on the transducer characteristics and the ultrasound system behaviour are discussed in chapters 4 and 5 of this thesis.

Each piezoelectric transducer has several resonance frequencies at which its impedance magnitude is lower than at non-resonance frequencies. At just one of these resonance frequencies the impedance magnitude is minimal; this frequency is known as the main resonance frequency of the transducer. To attain higher efficiency and deliver more power to the ultrasound system, the transducer is usually excited at the main resonance frequency, so it is important to identify this frequency and the other resonance frequencies. To this end, a frequency detection method is proposed in this research and discussed in chapter 2. An extended electrical model of an ultrasound transducer with multiple resonance frequencies consists of several RLC legs in parallel with a capacitor; each RLC leg represents one of the resonance frequencies of the transducer. At a resonance frequency the inductor reactance and capacitor reactance cancel each other out, and the resistor of that leg represents the power conversion of the system at that frequency. This concept is shown in the simulation and test results presented in chapter 4.

To excite a high power ultrasound transducer, a high power signal is required. Multilevel converters are usually applied to generate such a signal, but its quality is low in comparison with a sinusoidal signal. In some applications, such as ultrasound, it is particularly important to generate a high-quality signal. Several control and modulation techniques have been introduced in the literature to control the output voltage of multilevel converters; one of these is the harmonic elimination technique, in which the switching angles are chosen so as to reduce the harmonic content of the output. Increasing the number of switching angles results in more harmonic reduction, but more switching angles require more output voltage levels, which increases the number of components and the cost of the converter. To improve the quality of the output voltage signal without additional components, a new harmonic elimination technique is proposed in this research. In this technique, more variables (DC voltage levels as well as switching angles) are chosen to eliminate more low-order harmonics than conventional harmonic elimination techniques. In the conventional harmonic elimination method, the DC voltage levels are equal and only the switching angles are calculated to eliminate harmonics; the number of eliminated harmonics is therefore limited by the number of switching cycles. In the proposed modulation technique, the switching angles and the DC voltage levels are calculated off-line to eliminate more harmonics. Consequently, the DC voltage levels are not equal and must be regulated; to achieve this, a DC/DC converter is applied to adjust the DC link voltages across several capacitors. The effect of the new harmonic elimination technique on the output quality of several single-phase multilevel converters is explained in chapters 3 and 6 of this thesis.

According to the electrical model of the high power ultrasound transducer, the device can be modelled as parallel combinations of RLC legs with a main capacitor. The impedance diagram of the transducer in the frequency domain shows that it has capacitive characteristics at almost all frequencies. Therefore, using a voltage source converter to drive a high power ultrasound transducer can create significant leakage current through the transducer, owing to the significant voltage stress (dv/dt) across it. To remedy this problem, LC filters are applied in some applications; however, for applications such as ultrasound, an LC filter can deteriorate the performance of the transducer by changing its characteristics and displacing its resonance frequency. For such a case a current source converter is a suitable choice. In this regard, a current source converter is implemented and applied to excite the high power ultrasound transducer. To control the output current and voltage, hysteresis control and unipolar modulation are used, respectively. The results of this test are explained in chapter 7.
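As a numerical illustration of the extended electrical model described above (a shunt capacitor in parallel with several series RLC legs, one leg per resonance), the sketch below computes the impedance over a frequency sweep and locates the main resonance as the global minimum of |Z|. All component values are made up for illustration; they do not correspond to any transducer measured in the thesis.

```python
import numpy as np

def transducer_impedance(freq_hz, c0, legs):
    """
    Impedance of the extended electrical model: a shunt capacitance C0 in
    parallel with several series RLC legs (one leg per resonance frequency).
    `legs` is a list of (R, L, C) tuples with illustrative, made-up values.
    """
    w = 2 * np.pi * freq_hz
    y = 1j * w * c0                                   # admittance of the shunt capacitor
    for R, L, C in legs:
        z_leg = R + 1j * w * L + 1 / (1j * w * C)     # series RLC branch
        y = y + 1 / z_leg
    return 1 / y

if __name__ == "__main__":
    # Hypothetical component values for a transducer with two resonances.
    c0 = 4e-9
    legs = [(50.0, 80e-3, 8e-12), (120.0, 30e-3, 5e-12)]
    f = np.linspace(100e3, 500e3, 200_000)
    z = np.abs(transducer_impedance(f, c0, legs))
    f_main = f[np.argmin(z)]                          # main resonance: global |Z| minimum
    print(f"main resonance ≈ {f_main / 1e3:.1f} kHz, |Z| ≈ {z.min():.1f} ohm")
```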
Abstract:
Our task is to consider the evolving perspectives around curriculum documented in the Theory Into Practice (TIP) corpus to date. The 50 years in question, 1962–2012, account for approximately half the history of mass institutionalized schooling. Over this time, the upper age of compulsory schooling has crept up, stretching the school curriculum's reach, purpose, and clientele. These years also span remarkable changes in the social fabric, challenging deep senses of the nature and shelf-life of knowledge, whose knowledge counts, what science can and cannot deliver, and the very purpose of education. The school curriculum is a key social site where these challenges have to be addressed in a very practical sense, through a design on the future implemented within the resources and politics of the present. The task's metaphor of ‘evolution’ may invoke a sense of gradual cumulative improvement, but equally connotes mutation, hybridization, extinction, survival of the fittest, and environmental pressures. Viewed in this way, curriculum theory and practice cannot be isolated and studied in laboratory conditions—there is nothing natural, neutral, or self-evident about what knowledge gets selected into the curriculum. Rather, the process of selection unfolds as a series of messy, politically contaminated, lived experiments; thus curriculum studies require field work in dynamic open systems. We subscribe to Raymond Williams' approach to social change, which he argues is not absolute and abrupt, one set of ideas neatly replacing the other. For Williams, newly emergent ideas have to compete against the dominant mindset and residual ideas “still active in the cultural process” (Williams, 1977, p. 122). This means ongoing debates. For these reasons, we join Schubert (1992) in advocating “continuous reconceptualising of the flow of experience” (p. 238) by both researchers and practitioners.
Abstract:
Modernized GPS and GLONASS, together with the new GNSS systems BeiDou and Galileo, offer code and phase ranging signals on three or more carriers. Traditionally, dual-frequency code and/or phase GPS measurements are linearly combined to eliminate the effects of ionospheric delays in various positioning and analysis tasks. This typical treatment has limitations when processing signals at three or more frequencies from more than one system and can hardly be adapted to cope with the rapid proliferation of receivers tracking a broad variety of signals. In this contribution, a generalized positioning model that is independent of the navigation system and of the number of carriers is proposed, suitable for both single- and multi-site data processing. For the synchronization of different signals, uncalibrated signal delays (USD) are defined in a general way to compensate for the signal-specific offsets in code and phase observations, respectively. In addition, ionospheric delays are carefully included in the parameterization. Based on an analysis of the algebraic structure, the generalized positioning model is further refined with a set of proper constraints to regularize the datum deficiency of the observation equation system. With this new model, uncalibrated signal delays (USD) and ionospheric delays are derived for both GPS and BeiDou from a large data set. Numerical results demonstrate that, with a limited number of stations, the uncalibrated code delays (UCD) are determined to a precision of about 0.1 ns for GPS and 0.4 ns for BeiDou signals, while the uncalibrated phase delays (UPD) for L1 and L2 are generated from 37 stations evenly distributed in China for GPS with a consistency of about 0.3 cycles. Additional experiments analyse the performance of this novel model in point positioning with mixed frequencies and mixed constellations, in which the USD parameters are fixed to our generated values. The results are evaluated in terms of both positioning accuracy and convergence time.
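For reference, the sketch below shows the classical dual-frequency ionosphere-free combination that the traditional treatment relies on and that the generalized model moves beyond. The GPS L1/L2 frequencies are the standard values; the pseudorange numbers in the example are placeholders, not real observations.

```python
def ionosphere_free(p1, p2, f1=1575.42e6, f2=1227.60e6):
    """
    Classical dual-frequency ionosphere-free combination: the first-order
    ionospheric delay scales with 1/f**2, so this weighted difference of the
    two pseudoranges (or carrier phases, in metres) removes it.
    """
    a1 = f1**2 / (f1**2 - f2**2)
    a2 = -f2**2 / (f1**2 - f2**2)
    return a1 * p1 + a2 * p2

if __name__ == "__main__":
    # Placeholder pseudoranges in metres; real values come from a receiver.
    p_if = ionosphere_free(p1=21_345_678.12, p2=21_345_681.47)
    print(f"ionosphere-free pseudorange: {p_if:.2f} m")
```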
Abstract:
Dedicated Short Range Communication (DSRC) is the emerging key technology supporting cooperative road safety systems within Intelligent Transportation Systems (ITS). The DSRC protocol stack includes a variety of standards such as IEEE 802.11p and SAE J2735. The effectiveness of DSRC depends not only on the interoperable cooperation of these standards, but also on the interoperability of DSRC devices from different manufacturers. To address the second constraint, the SAE defines a message set dictionary under the J2735 standard for the construction of device-independent messages. This paper focuses on the deficiencies of the SAE J2735 standard being developed for deployment in Vehicular Ad-hoc Networks (VANET). In this regard, the paper discusses how a Basic Safety Message (BSM), the fundamental message type defined in SAE J2735, is constructed, sent and received by safety communication platforms to provide a comprehensive device-independent solution for Cooperative ITS (C-ITS). This provides some insight into the technical knowledge behind the construction and exchange of BSMs within a VANET. A series of real-world DSRC data collection experiments was conducted. The results demonstrate that the reliability and throughput of DSRC depend strongly on the applications utilizing the medium. Therefore, an active application-dependent medium control measure, using a novel message-dissemination frequency controller, is introduced. This application-level message handler improves the reliability of both BSM transmission/reception and the application-layer error handling that is vital to decentralized congestion control (DCC) mechanisms.
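The abstract does not specify the internals of the message-dissemination frequency controller, so the sketch below is a purely hypothetical, deliberately simple stand-in showing how an application-level handler could back off the BSM rate as the observed packet error rate rises. The nominal 10 Hz rate and the linear back-off rule are assumptions for illustration only, not the paper's controller.

```python
def bsm_rate(packet_error_rate, base_hz=10.0, min_hz=1.0):
    """
    Illustrative (not the paper's) adaptive rule: start from a nominal BSM
    dissemination rate and back off linearly as the observed packet error
    rate rises, never dropping below a safety floor.
    """
    per = min(max(packet_error_rate, 0.0), 1.0)   # clamp to [0, 1]
    return max(base_hz * (1.0 - per), min_hz)

if __name__ == "__main__":
    # e.g. 20% observed packet loss -> send BSMs at 8 Hz instead of 10 Hz
    print(f"adapted BSM rate: {bsm_rate(0.2):.1f} Hz")
```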
Abstract:
Proud suggested that the biggest and most obvious impact of the digital world felt by academics was in the area of teaching. He demonstrated a number of the initiatives that have been developed by outside organizations and within various universities, including larger classrooms, online teaching and Blackboard. All of these were believed to improve student learning but, most commonly, also expanded the faculty workload. He then discussed a number of the newer technologies which are becoming available, such as the virtual classroom, Google Glass, Adobe online, Skype and others. All of these tools, he argued, were a response to increasing economic pressures on the university, the result of which is that entire courses have migrated online. The reasons for university interest in these new technologies were listed as a reduced need for classrooms and classroom space, less need for on-campus facilities and even a decline in the need for weekly in-class lectures. Thus, it has been argued that these new tools and technologies liberate the faculty from the tyranny of geography through the introduction of blogs, online videos, discussion forums and communication tools such as wikis, Facebook sites and Yammer, all of which seem to have specific advantages. The question raised, however, is: how successful have these new digital innovations been? As an example, he cited his own experience in teaching distance learning programs in Thailand and elsewhere. Those results are still being reviewed, with no definitive view yet formed.
Abstract:
A fear of imminent information overload predates the World Wide Web by decades. Yet, that fear has never abated. Worse, as the World Wide Web today takes the lion’s share of the information we deal with, both in amount and in time spent gathering it, the situation has only become more precarious. This chapter analyses new issues in information overload that have emerged with the advent of the Web, which emphasizes written communication, defined in this context as the exchange of ideas expressed informally, often casually, as in verbal language. The chapter focuses on three ways to mitigate these issues. First, it helps us, the users, to be more specific in what we ask for. Second, it helps us amend our request when we don't get what we think we asked for. And third, since only we, the human users, can judge whether the information received is what we want, it makes retrieval techniques more effective by basing them on how humans structure information. This chapter reports on extensive experiments we conducted in all three areas. First, to let users be more specific in describing an information need, they were allowed to express themselves in an unrestricted conversational style. This way, they could convey their information need as if they were talking to a fellow human instead of using the two or three words typically supplied to a search engine. Second, users were provided with effective ways to zoom in on the desired information once potentially relevant information became available. Third, a variety of experiments focused on the search engine itself as the mediator between request and delivery of information. All examples that are explained in detail have actually been implemented. The results of our experiments demonstrate how a human-centered approach can reduce information overload in an area that grows in importance with each day that passes. By actually having built these applications, I present an operational, not just aspirational approach.