987 results for Interaction Techniques
Abstract:
The planning and management of water resources in the Pioneer Valley, north-eastern Australia requires a tool for assessing the impact of groundwater and stream abstractions on water supply reliabilities and environmental flows in Sandy Creek (the main surface water system studied). Consequently, a fully coupled stream-aquifer model has been constructed using the code MODHMS, calibrated to near-stream observations of watertable behaviour and multiple components of gauged stream flow. This model has been tested using other methods of estimation, including stream depletion analysis and radon isotope tracer sampling. The coarseness of spatial discretisation, which is required for practical reasons of computational efficiency, limits the model's capacity to simulate small-scale processes (e.g., near-stream groundwater pumping, bank storage effects), and alternative approaches are required to complement the model's range of applicability. Model predictions of groundwater influx to Sandy Creek are compared with baseflow estimates from three different hydrograph separation techniques, which were found to be unable to reflect the dynamics of Sandy Creek stream-aquifer interactions. The model was also used to infer changes in the water balance of the system caused by historical land use change. This led to constraints on the recharge distribution which can be implemented to improve model calibration performance. (c) 2006 Elsevier B.V. All rights reserved.
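The baseflow comparison above relies on hydrograph separation. One common technique in this family is the Lyne-Hollick recursive digital filter, sketched minimally below; the abstract does not name which three techniques were used, so this is an illustrative example, not the study's method.

```python
def lyne_hollick_baseflow(q, alpha=0.925):
    """Single forward pass of the Lyne-Hollick recursive digital filter.

    q     : list of total streamflow values (e.g. daily means, m^3/s)
    alpha : filter parameter, commonly in the range 0.9-0.95

    The filter removes the high-frequency quickflow component; what
    remains is taken as baseflow, constrained to stay non-negative
    and never exceed total flow.
    """
    qf = 0.0                  # filtered quickflow component
    baseflow = []
    for i, qi in enumerate(q):
        if i > 0:
            qf = alpha * qf + 0.5 * (1.0 + alpha) * (qi - q[i - 1])
        qf = max(qf, 0.0)     # quickflow cannot be negative
        baseflow.append(qi - qf)
    return baseflow
```

Practical applications typically run the filter in multiple forward and backward passes; a single pass keeps the sketch short.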
Abstract:
The design and synthesis of biomaterials covers a growing number of biomedical applications. The use of biomaterials in a biological environment is associated with a number of problems, the most important of which is biocompatibility. If the implanted biomaterial is not compatible with the environment, it will be rejected by the biological site. This may be manifested in many ways depending on the environment in which it is used. Adsorption of proteins takes place almost instantaneously when a biomaterial comes into contact with most biological fluids. The eye is a unique body site for the study of protein interactions with biomaterials, because of its ease of access and the deceptive complexity of the tears. The use of contact lenses, for either vision correction or cosmetic reasons, or as a route for controlled drug delivery, has significantly increased in recent years. It is relatively easy to introduce a contact lens into the tear fluid and remove it after a few minutes without surgery or trauma to the patient. A range of analytical techniques were used and developed to measure the proteins absorbed to some existing commercial contact lens materials and also to novel hydrogels synthesised within the research group. Analysis of the identity and quantity of proteins absorbed to biomaterials revealed the importance of many factors in the absorption process. The effects of biomaterial structure, protein nature in terms of size, shape and charge, and the pH of the environment on the absorption process were examined in order to determine the relative uptake of tear proteins. This study showed that both lysozyme and lactoferrin penetrate the lens matrix of ionic materials. Measurement of the mobility and activity of the protein deposited on the surface and within the matrix of ionic lens materials demonstrated that the mobility is pH dependent and that, within experimental error, the biological activity of lysozyme remained unchanged after adsorption and desorption.
The study of the effect of different monomers copolymerised with hydroxyethyl methacrylate (HEMA) on protein uptake showed that monomers producing a positive charge on the copolymer can reduce spoilation with lysozyme. The studies were extended to real cases in order to compare patient-dependent factors. The in-vivo studies showed that spoilation depends on the patient as well as on other factors. Studies of extrinsic factors, such as the dyes used in coloured lenses, showed that the addition of colourant affects protein absorption and, in one case, its effect is beneficial to the wearer as it reduces the quantity of protein absorbed.
Abstract:
Humic substances are the major organic constituents of soils and sediments. They are heterogeneous, polyfunctional, polydisperse and macromolecular, and have no accurately known chemical structure. Their interactions with radionuclides are particularly important since they provide leaching mechanisms from disposal sites. The central theme of this research is the interaction of heavy metal actinide analogues with humic materials. The studies described focus on selected aspects of the characteristics and properties of humic substances. Some novel approaches to experiments and data analysis are pursued. Several humic substances are studied; all but one are humic acids, and those used most extensively were obtained commercially. Some routine characterisation techniques are applied to samples in the first instance. Humic substances are coloured, but their ultra-violet and visible absorption spectra are featureless. Yet, they fluoresce over a wide range of wavelengths. Enhanced fluorescence in the presence of luminescent europium(III) ions is explained by energy transfer from irradiated humic acid to the metal ion in a photophysical model. Nuclear magnetic resonance spectroscopy is applied to the study of humic acids and their complexes with heavy metals. Proton and carbon-13 NMR provide some structural and functional-group information; paramagnetic lanthanide ions affect these spectra. Some heavy metals are studied as NMR nuclei, but measurements are restricted by their low sensitivity. A humic acid is fractionated, yielding a broad molecular weight distribution. Electrophoretic mobilities and particle radii determined by Laser Doppler Electrophoretic Light Scattering are sensitive to the conditions of the supporting media, and to the concentration and particle size distribution of humic substances. In potentiometric titrations of humate dispersions, the organic matter responds slowly and the mineral acid addition is buffered.
Proton concentration data is modelled and a mechanism is proposed involving two key stages, both resulting in proton release after some conformational changes.
Abstract:
The investigations described in this thesis concern the molecular interactions between polar solute molecules and various aromatic compounds in solution. Three different physical methods were employed. Nuclear magnetic resonance (n.m.r.) spectroscopy was used to determine the nature and strength of the interactions and the geometry of the transient complexes formed. Cryoscopic studies were used to provide information on the stoichiometry of the complexes. Dielectric constant studies were conducted in an attempt to confirm and supplement the spectroscopic investigations. The systems studied were those between nitromethane, chloroform, acetonitrile (solutes) and various methyl-substituted benzenes. In the n.m.r. work the dependence of the solute chemical shift upon the compositions of the solutions was determined. From this the equilibrium quotients (K) for the formation of each complex and the shift induced in the solute proton by the aromatic in the complex were evaluated. The thermodynamic parameters for the interactions were obtained from the determination of K at several temperatures. The stoichiometries of the complexes obtained from cryoscopic studies were found to agree with those deduced from spectroscopic investigations. For most systems it is suggested that only one type of complex, of 1:1 stoichiometry, predominates, except that for the acetonitrile-benzene system a 1:2 complex is formed. Two sets of dielectric studies were conducted, the first to show that the nature of the interaction is dipole-induced dipole and the second to calculate K. The equilibrium quotients obtained from spectroscopic and dielectric studies are compared. Time-averaged geometries of the complexes are proposed. The orientation of the solute with respect to the aromatic for the 1:1 complexes appears to be one in which the solute lies symmetrically about the aromatic six-fold axis, whereas for the 1:2 complex a sandwich structure is proposed.
It is suggested that the complexes are formed through a dipole-induced dipole interaction and steric factors play some part in the complex formation.
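The equilibrium quotients above are extracted from the dependence of the solute chemical shift on solution composition. For a 1:1 complex with the aromatic in large excess, the observed induced shift follows delta_obs = delta_c * K*c / (1 + K*c), and K can be recovered by fitting. The thesis's actual fitting procedure is not specified, so the sketch below uses a simple grid search over K with a closed-form least-squares solution for delta_c at each trial K:

```python
def fit_equilibrium_quotient(conc, dshift, k_grid=None):
    """Fit delta_obs = delta_c * K*c / (1 + K*c) for a 1:1 complex.

    conc   : aromatic concentrations (mol/L), aromatic in large excess
    dshift : induced solute chemical shifts (ppm) at each concentration
    Returns (K, delta_c) minimising the sum of squared residuals.

    For a fixed K the model is linear in delta_c, so delta_c has a
    closed-form least-squares solution; K itself is found by grid search.
    """
    if k_grid is None:
        k_grid = [0.01 * i for i in range(1, 1001)]   # K from 0.01 to 10 L/mol
    best = None
    for K in k_grid:
        x = [K * c / (1.0 + K * c) for c in conc]     # fraction complexed
        delta_c = (sum(xi * di for xi, di in zip(x, dshift))
                   / sum(xi * xi for xi in x))
        sse = sum((di - delta_c * xi) ** 2 for xi, di in zip(x, dshift))
        if best is None or sse < best[0]:
            best = (sse, K, delta_c)
    return best[1], best[2]
```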
Abstract:
In order to bridge the “semantic gap”, a number of relevance feedback (RF) mechanisms have been applied to content-based image retrieval (CBIR). However, current RF techniques in most existing CBIR systems still lack satisfactory user interaction, although some work has been done to improve the interaction as well as the search accuracy. In this paper, we propose a four-factor user interaction model and investigate its effects on CBIR through an empirical evaluation. Whilst the model was developed for our research purposes, we believe it could be adapted to any content-based search system.
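The paper's four-factor model is not detailed here, but the classic Rocchio update gives a concrete sense of how an RF mechanism moves a query toward images the user marks relevant and away from those marked non-relevant. The weights and vectors below are illustrative assumptions, not values from the paper:

```python
def rocchio_update(query, relevant, nonrelevant,
                   alpha=1.0, beta=0.75, gamma=0.15):
    """Classic Rocchio relevance-feedback update on feature vectors.

    query       : current query feature vector (list of floats)
    relevant    : feature vectors the user marked relevant
    nonrelevant : feature vectors the user marked non-relevant

    Returns the updated query: moved toward the centroid of the
    relevant examples and away from the non-relevant centroid.
    """
    def centroid(vectors, dim):
        if not vectors:
            return [0.0] * dim
        return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]

    dim = len(query)
    r = centroid(relevant, dim)
    n = centroid(nonrelevant, dim)
    return [alpha * q + beta * ri - gamma * ni
            for q, ri, ni in zip(query, r, n)]
```

In a CBIR setting the vectors would typically be colour, texture or deep-feature descriptors of images.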
Abstract:
Atomisation of an aqueous solution for tablet film coating is a complex process with multiple factors determining droplet formation and properties. The importance of droplet size for an efficient process and a high-quality final product has been noted in the literature, with smaller droplets reported to produce smoother, more homogeneous coatings whilst simultaneously avoiding the risk of damage through over-wetting of the tablet core. In this work the effect of droplet size on tablet film coat characteristics was investigated using X-ray microcomputed tomography (XμCT) and confocal laser scanning microscopy (CLSM). A quality-by-design approach utilising design of experiments (DOE) was used to optimise the conditions necessary for production of droplets at a small (20 μm) and large (70 μm) droplet size. Droplet size distribution was measured using real-time laser diffraction and the volume median diameter taken as a response. DOE yielded information on the relationship that three critical process parameters (pump rate, atomisation pressure and coating-polymer concentration) had with droplet size. The model generated was robust, scoring highly for model fit (R2 = 0.977), predictability (Q2 = 0.837), validity and reproducibility. Modelling confirmed that all parameters had either a linear or quadratic effect on droplet size and revealed an interaction between pump rate and atomisation pressure. Fluidised bed coating of tablet cores was performed with either small or large droplets, followed by CLSM and XμCT imaging. Addition of commonly used contrast materials to the coating solution improved visualisation of the coating by XμCT, showing the coat as a discrete section of the overall tablet. Imaging provided qualitative and quantitative evidence revealing that smaller droplets formed thinner, more uniform and less porous film coats.
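The kind of linear, quadratic and interaction effects the DOE identified can be illustrated with an ordinary least-squares response-surface fit. The model terms below (one quadratic term and one interaction term for two coded factors) are illustrative only; the paper's actual model terms and coefficients are not reproduced here:

```python
def fit_quadratic_doe(x1, x2, y):
    """Least-squares fit of an illustrative two-factor response surface:
        y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2
    (a real DOE model would keep only statistically significant terms).

    Solves the normal equations (X^T X) beta = X^T y with plain
    Gaussian elimination, so no external libraries are needed.
    Returns the coefficient list [b0, b1, b2, b12, b11].
    """
    rows = [[1.0, a, b, a * b, a * a] for a, b in zip(x1, x2)]
    k = len(rows[0])
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    aty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    # Gaussian elimination with partial pivoting
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        aty[col], aty[piv] = aty[piv], aty[col]
        for r in range(col + 1, k):
            f = ata[r][col] / ata[col][col]
            for c in range(col, k):
                ata[r][c] -= f * ata[col][c]
            aty[r] -= f * aty[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):
        beta[r] = (aty[r] - sum(ata[r][c] * beta[c]
                                for c in range(r + 1, k))) / ata[r][r]
    return beta
```

With the factors coded to levels such as -1, 0, +1, the fitted coefficients directly indicate the relative strength of each effect, including the pump-rate/atomisation-pressure interaction noted above.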
A New Method for Modeling Free Surface Flows and Fluid-structure Interaction with Ocean Applications
Abstract:
The computational modeling of ocean waves and ocean-faring devices poses numerous challenges. Among these are the need to stably and accurately represent both the fluid-fluid interface between water and air as well as the fluid-structure interfaces arising between solid devices and one or more fluids. As techniques are developed to stably and accurately balance the interactions between fluid and structural solvers at these boundaries, a similarly pressing challenge is the development of algorithms that are massively scalable and capable of performing large-scale three-dimensional simulations on reasonable time scales. This dissertation introduces two separate methods for approaching this problem, with the first focusing on the development of sophisticated fluid-fluid interface representations and the second focusing primarily on scalability and extensibility to higher-order methods.
We begin by introducing the narrow-band gradient-augmented level set method (GALSM) for incompressible multiphase Navier-Stokes flow. This is the first use of the high-order GALSM for a fluid flow application, and its reliability and accuracy in modeling ocean environments are tested extensively. The method demonstrates numerous advantages over the traditional level set method, among them improved conservation of fluid volume and the representation of subgrid structures.
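For contrast with the gradient-augmented method, the traditional level set approach simply advects a signed-distance-like function with a low-order scheme, tracking the interface as the zero crossing. A minimal 1-D sketch of that baseline (not the dissertation's GALSM, which additionally transports the gradient field and uses Hermite interpolation):

```python
def advect_level_set(phi, vel, dx, dt, steps):
    """First-order upwind advection of a 1-D level set function:
        phi_t + v * phi_x = 0
    The zero crossing of phi marks the fluid-fluid interface; this
    is the low-order 'traditional' scheme whose diffusion and volume
    loss higher-order methods aim to reduce.
    """
    phi = list(phi)
    n = len(phi)
    for _ in range(steps):
        new = phi[:]
        for i in range(1, n - 1):        # boundary values held fixed
            if vel > 0:
                dphi = (phi[i] - phi[i - 1]) / dx   # upwind difference
            else:
                dphi = (phi[i + 1] - phi[i]) / dx
            new[i] = phi[i] - dt * vel * dphi
        phi = new
    return phi
```

With a linear initial profile the interface is transported exactly; on curved profiles the first-order scheme smears the field, which is where subgrid-accurate methods such as the GALSM pay off.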
Next, we present a finite-volume algorithm for solving the incompressible Euler equations in two and three dimensions in the presence of a flow-driven free surface and a dynamic rigid body. In this development, the chief concerns are efficiency, scalability, and extensibility (to higher-order and truly conservative methods). These priorities informed a number of important choices: The air phase is substituted by a pressure boundary condition in order to greatly reduce the size of the computational domain, a cut-cell finite-volume approach is chosen in order to minimize fluid volume loss and open the door to higher-order methods, and adaptive mesh refinement (AMR) is employed to focus computational effort and make large-scale 3D simulations possible. This algorithm is shown to produce robust and accurate results that are well-suited for the study of ocean waves and the development of wave energy conversion (WEC) devices.
Abstract:
The HIV-1 genome contains several genes coding for auxiliary proteins, including the small Vpr protein. Vpr affects the integrity of the nuclear envelope and participates in the nuclear translocation of the preintegration complex containing the viral DNA. Here, we show by photobleaching experiments performed on living cells expressing a Vpr-green fluorescent protein fusion that the protein shuttles between the nucleus and the cytoplasm, but a significant fraction is concentrated at the nuclear envelope, supporting the hypothesis that Vpr interacts with components of the nuclear pore complex. An interaction between HIV-1 Vpr and the human nucleoporin CG1 (hCG1) was revealed in the yeast two-hybrid system, and then confirmed both in vitro and in transfected cells. This interaction does not involve the FG repeat domain of hCG1 but rather the N-terminal region of the protein. Using a nuclear import assay based on digitonin-permeabilized cells, we demonstrate that hCG1 participates in the docking of Vpr at the nuclear envelope. This association of Vpr with a component of the nuclear pore complex may contribute to the disruption of the nuclear envelope and to the nuclear import of the viral DNA.
Abstract:
This thesis addresses Batch Reinforcement Learning methods in Robotics. This sub-class of Reinforcement Learning has shown promising results and has been the focus of recent research. Three contributions are proposed that aim to extend the state-of-the-art methods, allowing for the faster and more stable learning process required for learning in Robotics. The Q-learning update-rule is widely applied, since it allows learning without a model of the environment. However, this update-rule is transition-based and does not take advantage of the underlying episodic structure of the collected batch of interactions. The Q-Batch update-rule is proposed in this thesis to process experiences along the trajectories collected in the interaction phase. This allows a faster propagation of obtained rewards and penalties, resulting in faster and more robust learning. Non-parametric function approximators are explored, such as Gaussian Processes. This type of approximator allows encoding prior knowledge about the latent function, in the form of kernels, providing a higher level of flexibility and accuracy. The application of Gaussian Processes in Batch Reinforcement Learning showed higher performance in learning tasks than other function approximators used in the literature. Lastly, in order to extract more information from the experiences collected by the agent, model-learning techniques are incorporated to learn the system dynamics. In this way, it is possible to augment the set of collected experiences with experiences generated through planning using the learned models. Experiments were carried out mainly in simulation, with some tests carried out on a physical robotic platform. The obtained results show that the proposed approaches are able to outperform classical Fitted Q Iteration.
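The episodic idea behind a trajectory-based update can be sketched in a tabular setting: sweeping each collected episode backwards lets a terminal reward propagate through the whole trajectory in a single pass, where a transition-based sweep would need many passes. This illustrates the principle only; the thesis's exact Q-Batch rule may differ:

```python
def q_batch_update(episodes, q, alpha=0.5, gamma=0.9):
    """Trajectory-based (Q-Batch-style) update, tabular sketch.

    episodes : list of trajectories, each a list of
               (state, action, reward, next_state) tuples
    q        : dict mapping (state, action) -> value, updated in place
    """
    def max_q(s):
        vals = [v for (st, _), v in q.items() if st == s]
        return max(vals) if vals else 0.0

    for episode in episodes:
        # Backward sweep: later transitions are updated first, so
        # their fresh values feed directly into earlier targets.
        for (s, a, r, s_next) in reversed(episode):
            old = q.get((s, a), 0.0)
            target = r + gamma * max_q(s_next)
            q[(s, a)] = old + alpha * (target - old)
    return q
```

After one backward sweep over a two-step episode ending in a reward, the first state already has a non-zero value; a single forward, transition-based sweep would leave it at zero.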
Abstract:
Numerous studies of the dual-mode scramjet isolator, a critical component in preventing inlet unstart and/or vehicle loss by containing a collection of flow disturbances called a shock train, have been performed since the dual-mode propulsion cycle was introduced in the 1960s. Low-momentum corner flow and other three-dimensional effects inherent to rectangular isolators have, however, been largely ignored in experimental studies of the boundary-layer-separation-driven isolator shock train dynamics. Furthermore, the two-dimensional diagnostic techniques used in past work, be they single-perspective line-of-sight schlieren/shadowgraphy or single-axis wall pressure measurements, have been unable to resolve the three-dimensional flow features inside the rectangular isolator. These flow characteristics need to be thoroughly understood if robust dual-mode scramjet designs are to be fielded. The work presented in this thesis is focused on experimentally analyzing shock train/boundary layer interactions from multiple perspectives in aspect ratio 1.0, 3.0, and 6.0 rectangular isolators with inflow Mach numbers ranging from 2.4 to 2.7. Secondary steady-state Computational Fluid Dynamics studies are performed for comparison with the experimental results and to provide additional perspectives on the flow field. Specific issues addressed in this work that remain unresolved after decades of isolator shock train studies include the three-dimensional formation of the isolator shock train front, the spatial and temporal low-momentum corner flow separation scales, the transient behavior of shock train/boundary layer interaction at specific coordinates along the isolator's lateral axis, and the effects of the rectangular geometry on semi-empirical relations for shock train length prediction. A novel multiplane shadowgraph technique is developed to resolve the structure of the shock train along both the minor and major duct axes simultaneously.
It is shown that the shock train front is of a hybrid oblique/normal nature. Initial low momentum corner flow separation spawns the formation of oblique shock planes which interact and proceed toward the center flow region, becoming more normal in the process. The hybrid structure becomes more two-dimensional as aspect ratio is increased, but corner flow separation precedes center flow separation on the order of 1 duct height for all aspect ratios considered. Additional instantaneous oil flow surface visualization shows the symmetry of the three-dimensional shock train front around the lower wall centerline. Quantitative synthetic schlieren visualization shows that the density gradient magnitude approximately doubles between the corner oblique and center flow normal structures. Fast response pressure measurements acquired near the corner region of the duct show preliminary separation in the outer regions preceding centerline separation on the order of 2 seconds. Non-intrusive Focusing Schlieren Deflectometry Velocimeter measurements reveal that both shock train oscillation frequency and velocity component decrease as measurements are taken away from centerline and towards the side-wall region, along with confirming the more two-dimensional shock train front approximation for higher aspect ratios. An updated modification to Waltrup & Billig's original semi-empirical shock train length relation for circular ducts based on centerline pressure measurements is introduced to account for rectangular isolator aspect ratio, upstream corner separation length scale, and major- and minor-axis boundary layer momentum thickness asymmetry. The latter is derived both experimentally and computationally, and it is shown that the major-axis (side-wall) boundary layer has a lower momentum thickness than the minor-axis (nozzle-bounded) boundary layer, making it more prone to separation.
Furthermore, it is shown that the updated correlation drastically improves shock train length prediction capabilities in higher aspect ratio isolators. This thesis suggests that performance analysis of rectangular confined supersonic flow fields can no longer be based on observations and measurements obtained along a single axis alone. Knowledge gained by the work performed in this study will allow for the development of more robust shock train leading edge detection techniques and isolator designs which can greatly mitigate the risk of inlet unstart and/or vehicle loss in flight.
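The baseline that the updated correlation modifies is Waltrup & Billig's relation for circular ducts. A sketch in its commonly cited form is given below; the constants are quoted from memory of the standard literature, and the thesis's rectangular-duct modification terms are not reproduced here:

```python
import math

def shock_train_length(M1, p_ratio, D, theta, Re_theta):
    """Waltrup & Billig semi-empirical shock train length, circular-duct
    form as commonly cited:

        s (M1^2 - 1) Re_theta^0.25 / sqrt(D * theta)
            = 50 (p2/p1 - 1) + 170 (p2/p1 - 1)^2

    M1       : isolator-entrance Mach number (supersonic, M1 > 1)
    p_ratio  : pressure rise p2/p1 across the shock train
    D        : duct diameter (m)
    theta    : entrance boundary-layer momentum thickness (m)
    Re_theta : Reynolds number based on momentum thickness

    Returns the shock train length s in metres.
    """
    rhs = 50.0 * (p_ratio - 1.0) + 170.0 * (p_ratio - 1.0) ** 2
    return rhs * math.sqrt(D * theta) / ((M1 ** 2 - 1.0) * Re_theta ** 0.25)
```

The correlation captures the key trends the thesis builds on: length grows quadratically with the imposed pressure rise and shrinks with higher entrance Mach number.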
Abstract:
The Graphical User Interface (GUI) is an integral component of contemporary computer software. A stable and reliable GUI is necessary for the correct functioning of software applications. Comprehensive verification of the GUI is a routine part of most software development life-cycles. The input space of a GUI is typically large, making exhaustive verification difficult. GUI defects are often revealed by exercising parts of the GUI that interact with each other. It is challenging for a verification method to drive the GUI into states that might contain defects. In recent years, model-based methods that target specific GUI interactions have been developed. These methods create a formal model of the GUI's input space from the specification of the GUI, visible GUI behaviors and static analysis of the GUI's program-code. GUIs are typically dynamic in nature; their user-visible state is guided by the underlying program-code and dynamic program-state. This research extends existing model-based GUI testing techniques by modelling interactions between the visible GUI of a GUI-based software application and its underlying program-code. The new model is able to test the GUI, efficiently and effectively, in ways that were not possible using existing methods. The thesis is this: long, useful GUI testcases can be created by examining the interactions between the GUI of a GUI-based application and its program-code. To explore this thesis, a model-based GUI testing approach is formulated and evaluated. In this approach, program-code-level interactions between GUI event handlers are examined, modelled and deployed for constructing long GUI testcases. These testcases are able to drive the GUI into states that were not possible using existing models. Implementation and evaluation have been conducted using GUITAR, a fully-automated, open-source GUI testing framework.
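The testcase-construction idea can be sketched as a traversal of an event-interaction graph: event sequences whose handlers interact are emitted as candidate testcases. This is a simplified stand-in for the models described above; GUITAR's actual models and generation algorithms are considerably richer:

```python
from collections import deque

def generate_testcases(interactions, start_events, max_length):
    """Generate GUI testcases as event sequences from an
    event-interaction graph.

    interactions : dict mapping an event to the events whose handlers
                   it interacts with (directed edges)
    start_events : events available in the GUI's initial state
    max_length   : longest event sequence to produce

    Returns every event sequence up to max_length that follows
    interaction edges, i.e. candidate testcases that exercise
    interacting handlers in order.
    """
    cases = []
    queue = deque([e] for e in start_events)
    while queue:
        seq = queue.popleft()
        cases.append(list(seq))
        if len(seq) < max_length:
            for nxt in interactions.get(seq[-1], []):
                queue.append(seq + [nxt])
    return cases
```

Longer testcases fall out naturally by raising `max_length`, which mirrors the thesis's emphasis on long sequences that reach states short interactions cannot.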
Abstract:
Physical places are given contextual meaning by the objects and people that make up the space. Presence in physical places can be utilised to support mobile interaction by making access to media and notifications on a smartphone easier and more visible to other people. Smartphone interfaces can be extended into the physical world in a meaningful way by anchoring digital content to artefacts, and interactions situated around physical artefacts can provide contextual meaning to private manipulations with a mobile device. Additionally, places themselves are designed to support a set of tasks, and the logical structure of places can be used to organise content on the smartphone. Menus that adapt the functionality of a smartphone can support the user by presenting the tools most likely to be needed just-in-time, so that information needs can be satisfied quickly and with little cognitive effort. Furthermore, places are often shared with people whom the user knows, and the smartphone can facilitate social situations by providing access to content that stimulates conversation. However, the smartphone can disrupt a collaborative environment by alerting the user with unimportant notifications, or sucking the user into the digital world with attractive content that is only shown on a private screen. Sharing smartphone content on a situated display creates an inclusive and unobtrusive user experience, and can increase focus on a primary task by allowing content to be read at a glance. Mobile interaction situated around artefacts of personal places is investigated as a way to support users to access content from their smartphone while managing their physical presence. A menu that adapts to personal places is evaluated to reduce the time and effort of app navigation, and coordinating smartphone content on a situated display is found to support social engagement and the negotiation of notifications.
Improving the sensing of smartphone users in places is a challenge that is outside the scope of this thesis. Instead, interaction designers and developers should be provided with low-cost positioning tools that utilise presence in places, and enable quantitative and qualitative data to be collected in user evaluations. Two lightweight positioning tools are developed with the low-cost sensors that are currently available: the Microsoft Kinect depth sensor allows movements of a smartphone user to be tracked in a limited area of a place, and Bluetooth beacons enable the larger context of a place to be detected. Positioning experiments with each sensor are performed to highlight the capabilities and limitations of current sensing techniques for designing interactions with a smartphone. Both tools enable prototypes to be built with a rapid prototyping approach, and mobile interactions can be tested with more advanced sensing techniques as they become available. Sensing technologies are becoming pervasive, and it will soon be possible to perform reliable place detection in-the-wild. Novel interactions that utilise presence in places can support smartphone users by making access to useful functionality easy and more visible to the people who matter most in everyday life.
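A minimal version of the Bluetooth-beacon place detection described above might map each beacon to a place and report the place of the strongest sufficiently close beacon. The beacon IDs, place names and RSSI threshold below are illustrative assumptions, not values from the thesis:

```python
def detect_place(rssi_readings, beacon_places, threshold=-75):
    """Coarse place detection from Bluetooth beacon RSSI readings.

    rssi_readings : dict mapping beacon id -> RSSI in dBm
    beacon_places : dict mapping beacon id -> place name
    threshold     : weakest RSSI (dBm) still considered 'present'

    Returns the place of the strongest sufficiently strong known
    beacon, or None when no known beacon is close enough.
    """
    candidates = [(rssi, bid) for bid, rssi in rssi_readings.items()
                  if rssi >= threshold and bid in beacon_places]
    if not candidates:
        return None
    _, best = max(candidates)     # highest (least negative) RSSI wins
    return beacon_places[best]
```

In practice RSSI is noisy, so a deployed tool would smooth readings over a time window before thresholding; the sketch keeps only the core mapping from beacon presence to place.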
Abstract:
The business philosophy of Mass Customisation (MC) implies rapid response to customer requests, high efficiency and limited cost overheads of customisation. Furthermore, it also implies that the quality benefits of the mass production paradigm are guaranteed. However, traditional quality science in manufacturing is premised on volume production of uniform products rather than of the differentiated products associated with MC. This creates quality challenges and raises questions over the suitability of standard quality engineering techniques. From an analysis of relevant MC and quality literature it is argued that the aims of MC are aligned with contemporary thinking on quality and that quality concepts provide insights into MC. Quality issues are considered along three dimensions: product development, order fulfilment and customer interaction. The applicability and effectiveness of conventional quality engineering techniques are discussed and a framework is presented which identifies key issues with respect to quality for a spectrum of MC strategies.
Abstract:
Since Long's Interaction Hypothesis (Long, 1983), multiple studies have suggested the need for oral interaction in successful second language learning. Within this perspective, a great deal of research has been carried out to investigate the role of corrective feedback in the process of acquiring a second language, but there are still various open debates about this issue. This comparative study seeks to contribute to the existing literature on corrective feedback in oral interaction by exploring teachers' corrective techniques and students' responses to these corrections. Two learning contexts were observed and compared: a traditional English as a foreign language (EFL) classroom and a Content and Language Integrated Learning (CLIL) classroom. The main aim was to see whether our data conform to the Counterbalance Hypothesis proposed by Lyster and Mori (2006). Although results did not show significant differences between the two contexts, a qualitative analysis of the data shed some light on the differences between these two language teaching settings. The findings point to the need for further research on error correction in EFL and CLIL contexts in order to overcome the limitations of the present study.
Abstract:
Secure Multi-party Computation (MPC) enables a set of parties to collaboratively compute, using cryptographic protocols, a function over their private data in a way that the participants do not see each other's data; they see only the final output. Typical MPC examples include statistical computations over joint private data, private set intersection, and auctions. While these applications are examples of monolithic MPC, richer MPC applications move between "normal" (i.e., per-party local) and "secure" (i.e., joint, multi-party secure) modes repeatedly, resulting overall in mixed-mode computations. For example, we might use MPC to implement the role of the dealer in a game of mental poker -- the game will be divided into rounds of local decision-making (e.g. bidding) and joint interaction (e.g. dealing). Mixed-mode computations are also used to improve performance over monolithic secure computations. Starting with the Fairplay project, several MPC frameworks have been proposed in the last decade to help programmers write MPC applications in a high-level language, while the toolchain manages the low-level details. However, these frameworks are either not expressive enough to allow writing mixed-mode applications or lack formal specification and reasoning capabilities, thereby diminishing the parties' trust in such tools and in the programs written using them. Furthermore, none of the frameworks provides a verified toolchain to run the MPC programs, leaving the potential for security holes that can compromise the privacy of parties' data. This dissertation presents language-based techniques to make MPC more practical and trustworthy. First, it presents the design and implementation of a new MPC Domain Specific Language, called Wysteria, for writing rich mixed-mode MPC applications.
Wysteria provides several benefits over previous languages, including a conceptual single thread of control, generic support for more than two parties, high-level abstractions for secret shares, and a fully formalized type system and operational semantics. Using Wysteria, we have implemented several MPC applications, including, for the first time, a card dealing application. The dissertation next presents Wys*, an embedding of Wysteria in F*, a full-featured verification oriented programming language. Wys* improves on Wysteria along three lines: (a) It enables programmers to formally verify the correctness and security properties of their programs. As far as we know, Wys* is the first language to provide verification capabilities for MPC programs. (b) It provides a partially verified toolchain to run MPC programs, and finally (c) It enables the MPC programs to use, with no extra effort, standard language constructs from the host language F*, thereby making it more usable and scalable. Finally, the dissertation develops static analyses that help optimize monolithic MPC programs into mixed-mode MPC programs, while providing similar privacy guarantees as the monolithic versions.
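The "secret shares" that Wysteria abstracts over can be illustrated with additive secret sharing, the textbook primitive in which addition of shared values requires no communication between parties. This is a sketch of the primitive only, not Wysteria's implementation (which the dissertation formalizes), and it uses Python's `random` module where a real deployment would need a cryptographically secure source such as the `secrets` module:

```python
import random

MODULUS = 2**31 - 1   # illustrative prime modulus

def share(secret, n_parties, modulus=MODULUS):
    """Split `secret` into n additive shares: the shares are uniform
    random values whose sum mod `modulus` equals the secret, so any
    n-1 of them reveal nothing about it."""
    shares = [random.randrange(modulus) for _ in range(n_parties - 1)]
    last = (secret - sum(shares)) % modulus
    return shares + [last]

def reconstruct(shares, modulus=MODULUS):
    """Recombine all shares to recover the secret."""
    return sum(shares) % modulus

def add_shared(shares_a, shares_b, modulus=MODULUS):
    """Add two secret-shared values: each party adds its own shares
    locally, with no interaction needed for addition."""
    return [(a + b) % modulus for a, b in zip(shares_a, shares_b)]
```

Multiplication of shared values, by contrast, does require interaction between the parties, which is one reason mixed-mode structure matters for performance.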