891 results for urban interaction design
Abstract:
Wilbur Zelinsky formulated a Hypothesis of Mobility Transition in 1971, in which he tried to relate all aspects of mobility to the Demographic Transition and modernisation. This dissertation applies the theoretical framework, proposed by Zelinsky and extended to encompass a family of transitions, to understand migration patterns of city regions. The two city regions, Brisbane and Stockholm, are selected as case studies, representing important city regions of similar size, but drawn from contrasting historical settings. A comparison of the case studies with the theoretical framework aims to determine how the relative contributions of net migration, the source areas of migrants, and the migration intensity change with modernisation. In addition, the research also aims to identify aspects of modernisation affecting migration. These aspects of migration are analysed with a "historical approach" and a "multivariate approach". An extensive investigation into the city regions' historical background provides the source from which evidence for a relationship between migration and modernisation is extracted. With this historical approach, similarities and differences in migration patterns are identified. The other research approach analyses multivariate data, from the last two decades, on migration flows and modernisation. Correlations between migration and key aspects of modernisation are tested with multivariate regression, based on an alternative version of a spatial interaction model. The project demonstrates that the changing functions of cities and structural modernisation influence migration. Similar patterns are found regarding the relative contributions of net migration and natural increase to population growth. The research finds links between these changes in the relative contribution of net migration and demographic modernisation.
The findings on variations in urban and rural source areas of migrants to city regions do not contradict the expected pattern, but data limitations prevent definite conclusions from being drawn. The assessment of variations in migration intensity did not support the expected pattern. Based on Swedish data, the hypothesised increase in migration intensity is rejected. International migration data also show patterns different from those derived from the theoretical framework. The findings from both research approaches suggested that structural modernisation affected migration flows more than demographic modernisation. The findings lead to the formulation of hypothesised patterns for migration to city regions. The study provides an important research contribution by applying the two research approaches to city regions. It also combines the study of internal and international migration to address the research objectives within a framework of transitional change.
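The multivariate approach described above rests on a spatial interaction (gravity) model fitted by regression. As a rough sketch of the idea (the specification and all figures here are illustrative assumptions, not taken from the dissertation), migration flows can be modelled log-linearly against origin size, destination size and distance:

```python
# A minimal gravity-model sketch: flows M_ij as a log-linear function of
# origin population P_i, destination population P_j and distance d_ij.
# All data below are hypothetical, not from the dissertation.
import numpy as np

# Rows: (P_origin, P_dest, distance_km, migrants) for six illustrative flows.
data = np.array([
    [2.0e6, 1.0e6, 900.0, 5200.0],
    [1.0e6, 2.0e6, 900.0, 6100.0],
    [0.5e6, 2.0e6, 300.0, 4800.0],
    [2.0e6, 0.5e6, 300.0, 3500.0],
    [1.0e6, 0.5e6, 600.0, 1200.0],
    [0.5e6, 1.0e6, 600.0, 1500.0],
])

# log M_ij = log k + a*log P_i + b*log P_j - c*log d_ij, fit by least squares.
X = np.column_stack([
    np.ones(len(data)),       # intercept log k
    np.log(data[:, 0]),       # origin size elasticity a
    np.log(data[:, 1]),       # destination size elasticity b
    -np.log(data[:, 2]),      # distance decay c (note the sign)
])
y = np.log(data[:, 3])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
log_k, a, b, c = coef
print(f"origin elasticity a={a:.2f}, destination elasticity b={b:.2f}, "
      f"distance decay c={c:.2f}")
```

In log form the model is linear in its parameters, so ordinary least squares recovers the size elasticities and the distance-decay exponent; the dissertation's "alternative version" of the model would differ in its exact specification.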
Abstract:
Recent studies of new institutional spaces typically underplay the uneven and contested process of institutional change by undervaluing the role of inherited institutions and discourses. This is a critical issue as neoliberal networked forms of governance interact with inherited institutional arrangements, characterised by important path dependencies that guide actors. Contradictions and tensions can emerge, culminating in crisis tendencies, and producing both discursive and material contestation between actors. It is with an understanding of path dependencies, ideas (structured into discourses), and (perceived and actual) crisis tendencies that this paper examines contested institutional change through a case-study analysis of one city, and a critical engagement with neoinstitutionalism. The purpose is to examine, firstly, the significance of inherited path-dependent arrangements in fostering conflict and crisis tendencies during interaction with emergent state action; secondly, the extent to which crisis is evident in processes of institutional change and the form that this takes; and, thirdly, the importance of ideas in producing institutional transformation. It is found that institutional conflict is evident between inherited institutions and emergent state action, and stems both from the way agents are organised by the state and from certain path dependencies, but that this does not lead to an actual material crisis. Rather, the nation-state, in partnership with senior city government actors, uses ideational/discursive ‘crisis talk’ as a means by which to induce institutional change. The role of ideas has been critical in this process, as the nation-state frames problems and solutions in line with its existing policy paradigm and institutional arrangements, and with discourses further reinforcing existing material power relations.
Abstract:
A new surface analysis technique has been developed which has a number of benefits compared to conventional Low Energy Ion Scattering Spectrometry (LEISS). A major potential advantage arising from the absence of charge exchange complications is the possibility of quantification. The instrumentation that has been developed also offers the possibility of unique studies concerning the interaction between low energy ions and atoms and solid surfaces. From these studies it may also be possible, in principle, to generate sensitivity factors to quantify LEISS data. The instrumentation, which is referred to as a Time-of-Flight Fast Atom Scattering Spectrometer, has been developed to investigate these conjectures in practice. The development involved a number of modifications to an existing instrument; it allowed samples to be bombarded with a monoenergetic pulsed beam of either atoms or ions, and provided the capability to analyse the spectra of scattered atoms and ions separately. Further to this, a system was designed and constructed to allow the incident, exit and azimuthal angles of the particle beam to be varied independently. The key development was that of a pulsed, mass-filtered atom source, which was developed by a cyclic process of design, modelling and experimentation. Although it was possible to demonstrate the unique capabilities of the instrument, problems relating to surface contamination prevented the measurement of the neutralisation probabilities. However, these problems appear to be technical rather than scientific in nature, and could be readily resolved given the appropriate resources. Experimental spectra obtained from a number of samples demonstrate some fundamental differences between the scattered ion and neutral spectra. For practical non-ordered surfaces the ToF spectra are more complex than their LEISS counterparts.
This is particularly true for helium scattering where it appears, in the absence of detailed computer simulation, that quantitative analysis is limited to ordered surfaces. Despite this limitation the ToFFASS instrument opens the way for quantitative analysis of the 'true' surface region to a wider range of surface materials.
Abstract:
Using current software engineering technology, the robustness required for safety critical software is not assurable. However, different approaches are possible which can help to assure software robustness to some extent. For achieving high reliability software, methods should be adopted which avoid introducing faults (fault avoidance); then testing should be carried out to identify any faults which persist (error removal). Finally, techniques should be used which allow any undetected faults to be tolerated (fault tolerance). The verification of correctness in system design specification and performance analysis of the model are the basic issues in concurrent systems. In this context, modelling distributed concurrent software is one of the most important activities in the software life cycle, and communication analysis is a primary consideration in achieving reliability and safety. By and large, fault avoidance requires human analysis, which is error prone; by reducing human involvement in the tedious aspects of modelling and analysis of the software, it is hoped that fewer faults will persist into its implementation in the real-time environment. The Occam language supports concurrent programming and is a language where interprocess interaction takes place by communication. This may lead to deadlock due to communication failure. Proper systematic methods must be adopted in the design of concurrent software for distributed computing systems if the communication structure is to be free of pathologies, such as deadlock. The objective of this thesis is to provide a design environment which ensures that processes are free from deadlock. A software tool was designed and used to facilitate the production of fault-tolerant software for distributed concurrent systems.
Where Occam is used as a design language then state space methods, such as Petri-nets, can be used in analysis and simulation to determine the dynamic behaviour of the software, and to identify structures which may be prone to deadlock so that they may be eliminated from the design before the program is ever run. This design software tool consists of two parts. One takes an input program and translates it into a mathematical model (Petri-net), which is used for modeling and analysis of the concurrent software. The second part is the Petri-net simulator that takes the translated program as its input and starts simulation to generate the reachability tree. The tree identifies `deadlock potential' which the user can explore further. Finally, the software tool has been applied to a number of Occam programs. Two examples were taken to show how the tool works in the early design phase for fault prevention before the program is ever run.
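The reachability-tree analysis described above can be sketched in a few lines. The following toy example (a hypothetical net, not the thesis tool, which translated Occam programs) explores the markings of a small Petri net in which two processes acquire two shared resources in opposite orders, and flags dead markings - the classic circular-wait deadlock:

```python
# Breadth-first reachability analysis of a tiny Petri net, flagging markings
# with no enabled transition as deadlock potential. Net is illustrative only.
from collections import deque

# Transitions as (consume, produce) token dicts over places. Each process
# takes one shared resource, then the other; both resources are released
# on completion.
transitions = {
    "p1_take_r1": ({"r1": 1, "p1_idle": 1}, {"p1_has_r1": 1}),
    "p1_take_r2": ({"r2": 1, "p1_has_r1": 1}, {"p1_done": 1, "r1": 1, "r2": 1}),
    "p2_take_r2": ({"r2": 1, "p2_idle": 1}, {"p2_has_r2": 1}),
    "p2_take_r1": ({"r1": 1, "p2_has_r2": 1}, {"p2_done": 1, "r1": 1, "r2": 1}),
}
initial = {"r1": 1, "r2": 1, "p1_idle": 1, "p2_idle": 1}

def enabled(marking, consume):
    return all(marking.get(p, 0) >= n for p, n in consume.items())

def fire(marking, consume, produce):
    m = dict(marking)
    for p, n in consume.items():
        m[p] -= n
    for p, n in produce.items():
        m[p] = m.get(p, 0) + n
    return {p: n for p, n in m.items() if n > 0}

def dead_markings(initial, transitions):
    """Explore the reachability tree; return markings with no enabled transition."""
    key = lambda m: tuple(sorted(m.items()))
    seen, frontier, dead = {key(initial)}, deque([initial]), []
    while frontier:
        m = frontier.popleft()
        successors = [fire(m, c, pr) for c, pr in transitions.values()
                      if enabled(m, c)]
        if not successors:
            dead.append(m)
        for nxt in successors:
            if key(nxt) not in seen:
                seen.add(key(nxt))
                frontier.append(nxt)
    return dead

dead = dead_markings(initial, transitions)
# The intended final marking (both processes done) is also dead; only dead
# markings short of completion indicate genuine deadlock potential.
deadlocks = [m for m in dead if not ("p1_done" in m and "p2_done" in m)]
print("deadlock potential:", deadlocks)
```

Markings with no enabled transition are "dead"; the intended final marking is one of them, so the check separates it from genuine deadlock potential - the same distinction a user of the thesis tool would explore in the reachability tree.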
Abstract:
The design and synthesis of biomaterials covers a growing number of biomedical applications. The use of biomaterials in a biological environment is associated with a number of problems, the most important of which is biocompatibility. If the implanted biomaterial is not compatible with the environment, it will be rejected by the biological site. This may be manifested in many ways depending on the environment in which it is used. Adsorption of proteins takes place almost instantaneously when a biomaterial comes into contact with most biological fluids. The eye is a unique body site for the study of protein interactions with biomaterials, because of its ease of access and the deceptive complexity of the tears. The use of contact lenses, for either vision correction or cosmetic reasons or as a route for controlled drug delivery, has significantly increased in recent years. It is relatively easy to introduce a contact lens into the tear fluid and remove it after a few minutes without surgery or trauma to the patient. A range of analytical techniques were used and developed to measure the proteins absorbed to some existing commercial contact lens materials and also to novel hydrogels synthesised within the research group. Analysis of the identity and quantity of proteins absorbed to biomaterials revealed the importance of many factors in the absorption process. The effects of biomaterial structure, protein nature in terms of size, shape and charge, and the pH of the environment on the absorption process were examined in order to determine the relative uptake of tear proteins. This study showed that both lysozyme and lactoferrin penetrate the lens matrix of ionic materials. Measurement of the mobility and activity of the protein deposited onto the surface and within the matrix of ionic lens materials demonstrated that the mobility is pH dependent and, within the experimental errors, the biological activity of lysozyme remained unchanged after adsorption and desorption.
The study of the effect of different monomers copolymerised with hydroxyethyl methacrylate (HEMA) on protein uptake showed that monomers producing a positive charge on the copolymer can reduce spoilation with lysozyme. The studies were extended to real cases in order to compare the patient-dependent factors. The in-vivo studies showed that spoilation is patient dependent, as well as dependent on other factors. Studies on extrinsic factors, such as the dyes used in coloured lenses, showed that the addition of colourant affects protein absorption and that, in one case, its effect is beneficial to the wearer as it reduces the quantity of protein absorbed.
Abstract:
The development of increasingly powerful computers, which has enabled the use of windowing software, has also opened the way for the computer study, via simulation, of very complex physical systems. In this study, the main issues related to the implementation of interactive simulations of complex systems are identified and discussed. Most existing simulators are closed in the sense that there is no access to the source code and, even if it were available, adaptation to interaction with other systems would require extensive code re-writing. This work aims to increase the flexibility of such software by developing a set of object-oriented simulation classes, which can be extended, by subclassing, at any level, i.e., at the problem domain, presentation or interaction levels. A strategy, which involves the use of an object-oriented framework, concurrent execution of several simulation modules, use of a networked windowing system and the re-use of existing software written in procedural languages, is proposed. A prototype tool which combines these techniques has been implemented and is presented. It allows the on-line definition of the configuration of the physical system and generates the appropriate graphical user interface. Simulation routines have been developed for the chemical recovery cycle of a paper pulp mill. The application, by creation of new classes, of the prototype to the interactive simulation of this physical system is described. Besides providing visual feedback, the resulting graphical user interface greatly simplifies the interaction with this set of simulation modules. This study shows that considerable benefits can be obtained by application of computer science concepts to the engineering domain, by helping domain experts to tailor interactive tools to suit their needs.
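The subclassing strategy the abstract describes - a domain-level simulation class whose presentation layer is replaced without touching the numerical core - can be illustrated with a toy example (the class names and the draining-tank model are hypothetical, not from the thesis):

```python
# Toy illustration of extending a simulation by subclassing at the
# presentation level while reusing the domain-level numerics unchanged.

class TankSimulation:
    """Domain level: first-order draining tank, explicit Euler step."""

    def __init__(self, level=1.0, outflow_coeff=0.5):
        self.level = level
        self.k = outflow_coeff

    def step(self, dt):
        # dL/dt = -k * L, integrated with one explicit Euler step.
        self.level = max(0.0, self.level - self.k * self.level * dt)
        self.present()

    def present(self):
        """Presentation level: no-op here; subclasses override this hook."""
        pass


class LoggingTankSimulation(TankSimulation):
    """Presentation subclass: records a history instead of drawing a GUI."""

    def __init__(self, **kw):
        super().__init__(**kw)
        self.history = []

    def present(self):
        self.history.append(self.level)


sim = LoggingTankSimulation(level=1.0, outflow_coeff=0.5)
for _ in range(10):
    sim.step(0.1)
print(f"final level: {sim.level:.3f}")
```

A GUI subclass would override the same `present` hook to draw the tank instead of appending to a list; the numerical `step` method is untouched in either case, which is the flexibility the thesis attributes to the object-oriented framework.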
Abstract:
This research investigates the general user interface problems in using networked services. Some of the problems are: users have to recall machine names and procedures to invoke networked services; interactions with some of the services are by means of menu-based interfaces which are quite cumbersome to use; inconsistencies exist between the interfaces for different services because they were developed independently. These problems have to be removed so that users can use the services effectively. A prototype system has been developed to help users interact with networked services. This consists of software which gives the user an easy and consistent interface with the various services. The prototype is based on a graphical user interface and it includes the following applications: Bath Information & Data Services; electronic mail; a file editor. The prototype incorporates an online help facility to assist users using the system. The prototype can be divided into two parts: the user interface part, which manages interaction with the user, and the communication part, which enables communication with networked services to take place. The implementation is carried out using an object-oriented approach where both the user interface part and the communication part are objects. The essential characteristics of object-orientation - abstraction, encapsulation, inheritance and polymorphism - can all contribute to the better design and implementation of the prototype. The Smalltalk Model-View-Controller (MVC) methodology has been the framework for the construction of the prototype user interface. The purpose of the development was to study the effectiveness of users' interaction with networked services. Having completed the prototype, test users were requested to use the system to evaluate its effectiveness. The evaluation of the prototype is based on observation, i.e. observing the way users use the system, and the opinion ratings given by the users.
Recommendations to improve the prototype further are given based on the results of the evaluation.
Abstract:
The widespread implementation of Manufacturing Resource Planning (MRPII) systems in this country and abroad, and the reported dissatisfaction with their use, formed the initial basis of this piece of research, which concentrates on the fundamental theory and design of the Closed Loop MRPII system itself. The dissertation concentrates on two key aspects, namely: how Master Production Scheduling is carried out in differing business environments, and how well the `closing of the loop' operates by checking the capacity requirements of the different levels of plans within an organisation. The main hypothesis which is tested is that in U.K. manufacturing industry, resource checks are either not being carried out satisfactorily or they are not being fed back to the appropriate plan in a timely fashion. The research methodology employed involved initial detailed investigations into Master Scheduling and capacity planning in eight diverse manufacturing companies. This was followed by a nationwide survey of users in 349 companies, a survey of all the major suppliers of Production Management software in the U.K., and an analysis of the facilities offered by current software packages. The main conclusion which is drawn is that the hypothesis is proved in the majority of companies, in that only just over 50% of companies are attempting Resource and Capacity Planning, and only 20% are successfully feeding back CRP information to `close the loop'. Various causative factors are put forward and remedies are suggested.
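The rough-cut capacity check at the heart of "closing the loop" can be sketched as follows (the products, routings and capacity figures are hypothetical, not from the survey):

```python
# Illustrative rough-cut capacity check: compare the work-centre load implied
# by a master production schedule (MPS) against available capacity and flag
# overloads for feedback to the planner.

hours_per_unit = {            # work-centre hours needed per unit of product
    "widget": {"machining": 0.5, "assembly": 0.25},
    "gadget": {"machining": 1.0, "assembly": 0.75},
}
capacity = {"machining": 160.0, "assembly": 120.0}   # hours available / period

def capacity_check(mps):
    """Return {work_centre: (load, capacity)} for overloaded work centres."""
    load = {wc: 0.0 for wc in capacity}
    for product, qty in mps.items():
        for wc, h in hours_per_unit[product].items():
            load[wc] += qty * h
    return {wc: (l, capacity[wc]) for wc, l in load.items()
            if l > capacity[wc]}

overloads = capacity_check({"widget": 200, "gadget": 80})
print(overloads)   # machining: 200*0.5 + 80*1.0 = 180 h > 160 h available
```

A Closed Loop system would feed such overloads back to revise the master schedule or the capacity plan; the dissertation's finding is that only a minority of companies complete that feedback step.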
Abstract:
An uptake system was developed using Caco-2 cell monolayers and the dipeptide glycyl-[3H]L-proline as a probe compound. Glycyl-[3H]L-proline uptake was via the di-/tripeptide transport system (DTS) and exhibited concentration-, pH- and temperature-dependency. Dipeptides inhibited uptake of the probe, and the design of the system allowed competitors to be ranked against one another with respect to affinity for the transporter. The structural features required to ensure or increase interaction with the DTS were defined by studying the effect of a series of glycyl-L-proline and angiotensin-converting enzyme (ACE)-inhibitor (SQ-29852) analogues on the uptake of the probe. The SQ-29852 structure was divided into six domains (A-F) and competitors were grouped into series depending on structural variations within specific regions. Domain A was found to prefer a hydrophobic function, such as a phenyl group, and was intolerant of positive charges and H+-acceptors and donors. SQ-29852 analogues were more tolerant of substitutions in the C domain, compared to glycyl-L-proline analogues, suggesting that interactions along the length of the SQ-29852 molecule may override the effects of substitutions in the C domain. SQ-29852 analogues showed a preference for a positive function, such as an amine group, in this region, but dipeptide structures favoured an uncharged substitution. Lipophilic substituents in domain D increased the affinity of SQ-29852 analogues for the DTS. A similar effect was observed for ACE-NEP inhibitor analogues. Domain E, corresponding to the carboxyl group, was found to be tolerant of esterification for SQ-29852 analogues but not for dipeptides. Structural features which may increase interaction for one series of compounds may not have the same effect for another series, indicating that the presence of multiple recognition sites on a molecule may override the deleterious effect of any one change.
Modifying current, poorly absorbed peptidomimetic structures to fit the proposed hypothetical model may improve oral bioavailability by increasing affinity for the DTS. The stereochemical preference of the transporter was explored using four series of compounds (SQ-29852, lysylproline, alanylproline and alanylalanine enantiomers). The L,L stereochemistry was the preferred conformation for all four series, agreeing with previous studies. However, D,D enantiomers were shown in some cases to be substrates for the DTS, although exhibiting a lower affinity than their L,L counterparts. All the ACE-inhibitors and β-lactam antibiotics investigated produced a degree of inhibition of the probe, and thus show some affinity for the DTS. This contrasts with previous reports that found several ACE inhibitors to be absorbed via a passive process, suggesting that compounds are capable of binding to the transporter site and inhibiting the probe without being translocated into the cell. This was also shown to be the case for an oligodeoxynucleotide conjugated to a lipophilic group (vitamin E), and highlights the possibility that other orally administered drug candidates may exert non-specific effects on the DTS and possibly have a nutritional impact. Molecular modelling of selected ACE-NEP inhibitors revealed that the three carbonyl functions can be oriented in a similar direction, and this conformation was found to exist in a local energy-minimised state, indicating that the carbonyls may be involved in hydrogen-bond formation with the binding site of the DTS.
Abstract:
The work presented in this thesis is concerned with the heat transfer performance of a single horizontal bare tube and a variety of finned tubes immersed in a shallow air fluidized bed. Results of experimental investigations with the bare tube indicate that the tube position in the bed influences its performance, particularly where fine bed materials are used. In some cases the maximum heat transfer is obtained with the tube in the particle cloud just above the dense phase fluidized bed - a phenomenon that has not been previously observed. This was attributed to the unusual particle circulation in shallow beds. The data are also presented in dimensionless correlations which may be useful for design purposes. A close approximation to the bare tube data can be obtained by using the transient heating of a spherical probe, and this provides a valuable way of accumulating a lot of data very rapidly. The experimental data on finned tubes show that a fin spacing less than twenty times the average particle diameter can cause a significant reduction in heat transfer due to the interaction which takes place between the particles and the surface of the fins. Furthermore, evidence is provided to show that particle shape plays an important part in this interaction, with spherical particles being superior to angular particles at low fin spacing/particle diameter ratios. The finned tube data are less sensitive to tube position in the bed than those for bare tubes, and the best performance is obtained when the tube is positioned at the distributor. A reduction in bed depth decreases the thermal performance of the finned tube, but in many practical installations the reduction in pressure drop might more than compensate for the reduced heat flux. Information is also provided on the theoretical performance of fins, and the effect of the root contact area between the fins and the tube was investigated.
Abstract:
Product reliability and environmental performance have become critical elements within a product's specification and design. To obtain a high level of confidence in the reliability of a design, it is customary to test it under realistic conditions in a laboratory. The objective of the work is to examine the feasibility of designing mechanical test rigs which exhibit prescribed dynamical characteristics. The design is then attached to the rig and excitation is applied, so that the rig transmits representative vibration levels into the product. The philosophical considerations made at the outset of the project are discussed, as they form the basis for the resulting design methodologies. An attempt is made to identify the parameters of a test rig directly from the spatial model derived during the system identification process; it is shown to be impossible to identify a feasible test rig design using this technique. A finite dimensional optimal design methodology is therefore developed which identifies the parameters of a discrete spring/mass system that is dynamically similar to a point coordinate on a continuous structure. This design methodology is incorporated within another procedure which derives a structure comprising a continuous element and a discrete system. This methodology is used to obtain point coordinate similarity for two planes of motion, and is validated by experimental tests. A limitation of this approach is that it is impossible to achieve multi-coordinate similarity, due to an interaction of the discrete system and the continuous element at points away from the coordinate of interest. During the work the importance of the continuous element is highlighted, and a design methodology is developed for continuous structures. The design methodology is based upon distributed parameter optimal design techniques and allows an initial poor design estimate to be moved in a feasible direction towards an acceptable design solution.
Cumulative damage theory is used to provide a quantitative method of assessing the quality of dynamic similarity. It is shown that the combination of modal analysis techniques and cumulative damage theory provides a feasible design synthesis methodology for representative test rigs.
Abstract:
This thesis is a theoretical study of the accuracy and usability of models that attempt to represent the environmental control system of buildings in order to improve environmental design. These models have evolved from crude representations of a building and its environment to accurate representations of the dynamic characteristics of the environmental stimuli on buildings. Each generation of models has had its own particular influence on built form. This thesis analyses the theory, structure and data of such models in terms of their accuracy of simulation and therefore their validity in influencing built form. The models are also analysed in terms of their compatibility with the design process and hence their ability to aid designers. The conclusions are that such models are unlikely to improve environmental performance since: (a) the models can only be applied to a limited number of building types; (b) they can only be applied to a restricted number of the characteristics of a design; (c) they can only be employed after many major environmental decisions have been made; (d) the data used in models are inadequate and unrepresentative; and (e) models do not account for occupant interaction in environmental control. It is argued that further improvements in the accuracy of simulation of environmental control will not significantly improve environmental design. This is based on the premise that strategic environmental decisions are made at the conceptual stages of design, whereas models influence the detailed stages of design. It is hypothesised that if models are to improve environmental design, it must be through the analysis of building typologies, which provides a method of feedback between models and the conceptual stages of design.
Field studies are presented to describe a method by which typologies can be analysed and a theoretical framework is described which provides a basis for further research into the implications of the morphology of buildings on environmental design.
Abstract:
Handheld and mobile technologies have witnessed significant advances in functionality, leading to their widespread use as both business and social networking tools. Human-Computer Interaction and Innovation in Handheld, Mobile and Wearable Technologies reviews concepts relating to the design, development, evaluation, and application of mobile technologies. Studies on mobile user interfaces, mobile learning, and mobile commerce contribute to the growing body of knowledge on this expanding discipline.
Abstract:
In recent years, mobile technology has been one of the major growth areas in computing. Designing the user interface for mobile applications, however, is a very complex undertaking which is made even more challenging by the rapid technological developments in mobile hardware. Mobile human-computer interaction, unlike desktop-based interaction, must be cognizant of a variety of complex contextual factors affecting both users and technology. The Handbook of Research on User Interface Design and Evaluation provides students, researchers, educators, and practitioners with a compendium of research on the key issues surrounding the design and evaluation of mobile user interfaces, such as the physical environment and social context in which a mobile device is being used and the impact of multitasking behavior typically exhibited by mobile-device users. Compiling the expertise of over 150 leading experts from 26 countries, this exemplary reference tool will make an indispensable addition to every library collection.
Abstract:
The calcitonin receptor-like receptor (CLR) acts as a receptor for the calcitonin gene-related peptide (CGRP) but in order to recognize CGRP, it must form a complex with an accessory protein, receptor activity modifying protein 1 (RAMP1). Identifying the protein/protein and protein/ligand interfaces in this unusual complex would aid drug design. The role of the extreme N-terminus of CLR (Glu23-Ala60) was examined by an alanine scan and the results were interpreted with the help of a molecular model. The potency of CGRP at stimulating cAMP production was reduced at Leu41Ala, Gln45Ala, Cys48Ala and Tyr49Ala; furthermore, CGRP-induced receptor internalization at all of these receptors was also impaired. Ile32Ala, Gly35Ala and Thr37Ala all increased CGRP potency. CGRP specific binding was abolished at Leu41Ala, Ala44Leu, Cys48Ala and Tyr49Ala. There was significant impairment of cell surface expression of Gln45Ala, Cys48Ala and Tyr49Ala. Cys48 takes part in a highly conserved disulfide bond and is probably needed for correct folding of CLR. The model suggests that Gln45 and Tyr49 mediate their effects by interacting with RAMP1 whereas Leu41 and Ala44 are likely to be involved in binding CGRP. Ile32, Gly35 and Thr37 form a separate cluster of residues which modulate CGRP binding. The results from this study may be applicable to other family B GPCRs which can associate with RAMPs.