940 results for "Rough interfaces"


Relevance:

10.00%

Publisher:

Abstract:

Road curves are an important feature of road infrastructure, and many serious crashes occur on them. In Queensland, the number of fatalities on curves is twice that on straight roads. There is therefore a need to reduce drivers' exposure to crash risk on road curves. Road crashes in Australia and across the Organisation for Economic Co-operation and Development (OECD) plateaued in the five years from 2004 to 2008, and the road safety community is urgently seeking innovative interventions to reduce the number of crashes. However, designing an innovative and effective intervention can be difficult, as it relies on providing a theoretical foundation, coherence, understanding, and structure to both the design and the validation of the effectiveness of the new intervention. Researchers from multiple disciplines have developed various models to determine the factors contributing to crashes on road curves, with a view to reducing the crash rate. Most existing methods, however, are based on statistical analysis of contributing factors described in government crash reports. To further explore the factors contributing to crashes on road curves, this thesis designs a novel method to analyse and validate them. The use of crash claim reports from an insurance company is proposed for analysis using data mining techniques. To the best of our knowledge, this is the first attempt to use data mining to analyse crashes on road curves. Text mining is employed because the reports consist of thousands of textual descriptions, from which the contributing factors can be identified. Beyond identifying contributing factors, few studies to date have investigated the relationships between these factors, especially for crashes on road curves. This study therefore proposes the use of rough set analysis to determine these relationships.
The results of this analysis are used to assess the effect of these contributing factors on crash severity. The findings obtained through the data mining techniques presented in this thesis are consistent with previously identified contributing factors. Furthermore, this thesis identifies new contributing factors and the relationships between them. A significant pattern related to crash severity is the time of day: severe crashes occur more frequently in the evening or at night. Tree collision is another common pattern: crashes that occur in the morning and involve hitting a tree are likely to have a higher severity. Another factor influencing crash severity is the age of the driver; most age groups face a high crash severity, except drivers between 60 and 100 years old, who have the lowest. The significant relationship identified between contributing factors involves the time of the crash, the year of manufacture of the vehicle, the age of the driver, and hitting a tree. Having identified new contributing factors and relationships, a validation process was carried out using a traffic simulator to determine their accuracy, and it indicates that the results are accurate. This demonstrates that data mining techniques are a powerful tool in road safety research and can be usefully applied within the Intelligent Transport System (ITS) domain. The research presented in this thesis provides insight into the complexity of crashes on road curves. The findings have important implications for both practitioners and academics. For road safety practitioners, the results illustrate practical benefits for the design of interventions for road curves that will potentially help decrease related injuries and fatalities. For academics, this research opens up a new methodology for assessing the severity of crashes on road curves.
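The rough set analysis mentioned above can be illustrated with a minimal sketch (the attributes and records below are hypothetical, not the thesis data): records are grouped into indiscernibility classes by their condition attributes, and the lower and upper approximations of a decision class are the classes that certainly and possibly imply that decision.

```python
from collections import defaultdict

def approximations(records, condition_attrs, decision_attr, decision_value):
    """Lower/upper approximation of the set {records with the given decision}
    under the indiscernibility relation induced by condition_attrs."""
    # Group records into equivalence classes by their condition-attribute values.
    classes = defaultdict(list)
    for r in records:
        classes[tuple(r[a] for a in condition_attrs)].append(r)
    lower, upper = [], []
    for members in classes.values():
        if all(r[decision_attr] == decision_value for r in members):
            lower.extend(members)   # class certainly implies the decision
        if any(r[decision_attr] == decision_value for r in members):
            upper.extend(members)   # class possibly implies the decision
    return lower, upper

# Hypothetical crash-claim records (illustrative only).
crashes = [
    {"time": "night", "hit_tree": True,  "severity": "high"},
    {"time": "night", "hit_tree": True,  "severity": "high"},
    {"time": "day",   "hit_tree": True,  "severity": "high"},
    {"time": "day",   "hit_tree": True,  "severity": "low"},
    {"time": "day",   "hit_tree": False, "severity": "low"},
]

low, up = approximations(crashes, ["time", "hit_tree"], "severity", "high")
print(len(low), len(up))  # 2 records certainly high-severity, 4 possibly
```

The gap between the two approximations (the boundary region) is where the relationships between contributing factors leave crash severity undetermined.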


Transition metal oxides are functional materials with advanced applications in many areas, owing to their diverse properties (optical, electrical, magnetic, etc.), hardness, thermal stability and chemical resistance. Novel applications of the nanostructures of these oxides are attracting significant interest as new synthesis methods are developed and new structures are reported. Hydrothermal synthesis is an effective process for preparing delicate metal oxide structures on scales from a few to tens of nanometres, in particular the highly dispersed intermediate structures that can hardly be obtained through pyro-synthesis. In this thesis, a range of new metal oxide (stable and metastable titanate and niobate) nanostructures, namely nanotubes and nanofibres, were synthesised via a hydrothermal process. Further structural modifications were conducted, and potential applications in catalysis, photocatalysis, adsorption and the construction of ceramic membranes were studied. The morphology evolution during the hydrothermal reaction between Nb2O5 particles and concentrated NaOH was monitored. The study demonstrates that by optimising the reaction parameters (temperature, amount of reactants), one can obtain a variety of nanostructured solids, from intermediate-phase niobate bars and fibres to stable-phase cubes. Trititanate (Na2Ti3O7) nanofibres and nanotubes were obtained from the hydrothermal reaction between TiO2 powders or a titanium compound (e.g. TiOSO4·xH2O) and concentrated NaOH solution by controlling the reaction temperature and NaOH concentration. Trititanate possesses a layered structure, and the Na ions between the negatively charged titanate layers are exchangeable with other metal ions or H+ ions. This ion exchange has a crucial influence on the phase transition of the exchanged products.
Exchanging the sodium ions in the titanate with H+ ions yields protonated titanate (H-titanate), and subsequent phase transformation of the H-titanate enables various TiO2 structures with retained morphology. H-titanate, as either nanofibres or nanotubes, can be converted to pure TiO2(B), pure anatase, or mixed TiO2(B) and anatase phases by controlled calcination, or by a two-step process of acid treatment and subsequent calcination. Controlled calcination of the sodium titanate, on the other hand, yields a new titanate structure (metastable titanate of formula Na1.5H0.5Ti3O7, with retained fibril morphology) that can be used to remove radioactive ions and heavy metal ions from water. The structures and morphologies of the metal oxides were characterised by advanced techniques. Titania nanofibres of mixed anatase and TiO2(B) phases, pure anatase and pure TiO2(B) were obtained by calcining H-titanate nanofibres at temperatures between 300 and 700 °C. The fibril morphology was retained after calcination, which is convenient for transmission electron microscopy (TEM) analysis. TEM analysis shows that in the mixed-phase structure the interfaces between the anatase and TiO2(B) phases are not random contacts between crystals of the two phases, but form from well-matched lattice planes. For instance, the (101) planes of anatase and the (101) planes of TiO2(B) have similar d-spacings (~0.18 nm), and they join together to form a stable interface. The interfaces between the two phases act as a one-way valve that permits the transfer of photogenerated charge from anatase to TiO2(B). This reduces the recombination of photogenerated electrons and holes in anatase, enhancing the activity for photocatalytic oxidation.
Therefore, the mixed-phase nanofibres exhibited higher photocatalytic activity for the degradation of sulforhodamine B (SRB) dye under ultraviolet (UV) light than nanofibres of either pure phase alone, or than mechanical mixtures (which have no interfaces) of the two pure-phase nanofibres with a similar phase composition. This supports the theory that the difference between the conduction band edges of the two phases results in charge transfer from one phase to the other, which effectively separates the photogenerated charges and thus facilitates the redox reactions involving them. Such an interface structure facilitates charge transfer across the interfaces. The knowledge acquired in this study is important not only for the design of efficient TiO2 photocatalysts but also for understanding the photocatalysis process. Moreover, fibril titania photocatalysts have a great advantage over nanoparticles of similar scale in that they can be separated from a liquid for reuse by filtration, sedimentation or centrifugation. The surface structure of TiO2 also plays a significant role in catalysis and photocatalysis. Four types of large-surface-area TiO2 nanotubes with different phase compositions (labelled NTA, NTBA, NTMA and NTM) were synthesised by calcination and acid treatment of the H-titanate nanotubes. Using in situ FTIR emission spectroscopy (IES), the desorption and re-adsorption of surface OH groups on the oxide surface can be tracked. In this work, the surface OH-group regeneration ability of the TiO2 nanotubes was investigated. The abilities of the four samples were distinctly different, in the order NTA > NTBA > NTMA > NTM.
The same order was observed for the catalytic activity when the samples served as photocatalysts for the decomposition of the synthetic dye SRB under UV light, and as supports of gold (Au) catalysts (with gold particles loaded by a colloid-based method) for the photodecomposition of formaldehyde under visible light and for the catalytic oxidation of CO at low temperatures. The ability of TiO2 nanotubes to generate surface OH groups is therefore an indicator of their catalytic activity. The reason behind this correlation is that oxygen vacancies at bridging O2- sites of the TiO2 surface can generate surface OH groups, and these groups facilitate the adsorption and activation of O2 molecules, which is the key step in the oxidation reactions. A structure for the oxygen vacancies at bridging O2- sites is proposed. A new mechanism for photocatalytic formaldehyde decomposition with the Au-TiO2 catalysts is also proposed: visible light absorbed by the gold nanoparticles, through the surface plasmon resonance effect, induces transitions of the 6sp electrons of gold to higher energy levels. These energetic electrons can migrate to the conduction band of TiO2, where they are seized by oxygen molecules. Meanwhile, the gold nanoparticles capture electrons from the formaldehyde molecules adsorbed on them, because of gold's high electronegativity. O2 adsorbed on the surface of the TiO2 support is the major electron acceptor; the more O2 adsorbed, the higher the oxidation activity the photocatalyst exhibits. The last part of this thesis demonstrates two innovative applications of the titanate nanostructures. Firstly, trititanate and metastable titanate (Na1.5H0.5Ti3O7) nanofibres are used as intelligent absorbents for the removal of radioactive cations and heavy metal ions, utilising their ion-exchange ability, deformable layered structure and fibril morphology.
Environmental contamination with radioactive and heavy metal ions poses a serious threat to the health of a large part of the population, and treatment of the wastes is needed to produce a product suitable for long-term storage and disposal. The ion-exchange ability of the layered titanate structure permits adsorption of bivalent toxic cations (Sr2+, Ra2+, Pb2+) from aqueous solution. More importantly, the adsorption is irreversible: the deformation of the structure induced by the strong interaction between the adsorbed bivalent cations and the negatively charged TiO6 octahedra results in permanent entrapment of the toxic cations in the fibres, so that they can be safely deposited. Compared to conventional clay and zeolite sorbents, the fibril absorbents have the great advantage that they can be readily dispersed into, and separated from, a liquid. Secondly, new-generation membranes were constructed by using large titanate and small γ-alumina nanofibres as the intermediate and top layers, respectively, on a porous alumina substrate via a spin-coating process. Compared to conventional ceramic membranes constructed from spherical particles, membranes constructed from fibres permit a high flux because of the large porosity of their separation layers. The voids in the separation layer determine the selectivity and flux of a separation membrane. When the sizes of the voids are similar (implying a similar selectivity), the flux through the membrane increases with the volume of the voids, which are the filtration passages. For the ideal and simplest texture, a mesh constructed from nanofibres 10 nm thick with a uniform pore size of 60 nm, the porosity is greater than 73.5%. In contrast, the porosity of a separation layer with the same pore size but constructed from spherical metal oxide particles, as in conventional ceramic membranes, is 36% or less. The membrane constructed from titanate nanofibres and a layer of randomly oriented alumina nanofibres was able to filter out 96.8% of 60 nm latex spheres while maintaining a high flux of 600 to 900 L m-2 h-1, more than 15 times that of the conventional membrane reported in the most recent study.
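The porosity figures quoted above can be reproduced with simple geometry (a sketch of the stated ideal case, not the authors' derivation): for a square mesh of 10 nm fibres with uniform 60 nm openings, the open-area fraction of one repeat unit is (60/70)^2 ≈ 73.5%, while random close packing of spheres fills about 64% of space, leaving roughly 36% void.

```python
fibre = 10.0   # nm, fibre thickness
pore  = 60.0   # nm, uniform pore size
pitch = pore + fibre                   # repeat unit of the ideal square mesh

# Open fraction of one unit cell of the mesh.
mesh_porosity = (pore / pitch) ** 2
print(round(mesh_porosity * 100, 1))   # 73.5

# Spherical-particle layer: random close packing fills ~64% of space,
# so the void fraction is about 36%, matching the conventional-membrane figure.
sphere_porosity = 1 - 0.64
print(round(sphere_porosity * 100))    # 36
```

The comparison makes the flux argument concrete: at equal pore size, the fibre mesh offers roughly twice the open filtration area of a sphere-packed layer.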


We propose to design a Custom Learning System that responds to the unique needs and potentials of individual students, regardless of their location, abilities, attitudes and circumstances. This project is intentionally provocative and future-looking, but it is not unrealistic or unfeasible. We propose that by combining complex learning databases with a learner's personal data, we could provide all students with a personal, customizable and flexible education. This paper presents the initial research undertaken for this project, the main challenges of which were to broadly map the complex web of available data, to identify the logic models required to make the data meaningful for learning, and to translate this knowledge into simple, easy-to-use interfaces. The ultimate outcome of this research will be a series of candidate user interfaces and a broad system logic model for a new smart system for personalized learning. This project is student-centered, not techno-centric, aiming to deliver innovative solutions for learners and schools. It is deliberately future-looking, allowing us to ask questions that take us beyond the limitations of today and motivate new demands on technology.


Recent observations of particle size distributions and particle concentrations near a busy road cannot be explained by the conventional mechanisms for the evolution of combustion aerosols. Specifically, these mechanisms appear inadequate to explain the observed particle transformation and the evolution of the total number concentration. This led to the development of a new mechanism for the evolution of combustion aerosol nano-particles, based on their thermal fragmentation. A complex and comprehensive pattern of combustion aerosol evolution involving particle fragmentation was then proposed and justified. In that model, it was suggested that thermal fragmentation occurs in aggregates of primary particles, each of which contains a solid graphite/carbon core surrounded by volatile molecules bonded to the core by strong covalent bonds. Because of these strong covalent bonds between the core and the volatile (frill) molecules, such primary composite particles can be regarded as solid, despite a significant (possibly dominant) volatile component. Fragmentation occurs when the weak van der Waals forces between such primary particles are overcome by their thermal (Brownian) motion. In this work, the concept of thermal fragmentation is extended to determine whether fragmentation is likely in liquid composite nano-particles. It has been demonstrated that, at least at some stages of evolution, combustion aerosols contain a large number of composite liquid particles, presumably containing several components such as water, oil, volatile compounds and minerals. Such composite liquid particles may also experience thermal fragmentation and thus contribute to, for example, the evolution of the total number concentration as a function of distance from the source.
Therefore, the aim of this project is to examine theoretically the possibility of thermal fragmentation of composite liquid nano-particles consisting of immiscible liquid components. The specific focus is on ternary systems comprising two immiscible liquid droplets surrounded by another medium (e.g., air). The analysis shows that three different structures are possible: complete encapsulation of one liquid by the other, partial encapsulation of the two liquids in a composite particle, and the two droplets separated from each other. The probability of thermal fragmentation of two coagulated liquid droplets is examined for different volumes of the immiscible fluids in a composite liquid particle and for different surface and interfacial tensions, by determining the Gibbs free energy difference between the coagulated and fragmented states and comparing this energy difference with the typical thermal energy kT. Fragmentation is found to be much more likely for a partially encapsulated particle than for a completely encapsulated one. In particular, thermal fragmentation is much more likely when the volumes of the two liquid droplets that constitute the composite particle are very different; conversely, when the two droplets are of similar volume, the probability of thermal fragmentation is small. It is also demonstrated that the Gibbs free energy difference between the coagulated and fragmented states is not the only important factor determining the probability of thermal fragmentation of composite liquid particles. The second essential factor is the actual structure of the composite particle: the probability of thermal fragmentation also depends strongly on the distance that each of the liquid droplets must travel to reach the fragmented state.
In particular, if this distance is larger than the mean free path of the droplets in air, the probability of thermal fragmentation should be negligible. It follows that fragmentation of a composite particle in the completely encapsulated state is highly unlikely, because of the larger distance the two droplets must travel in order to separate. Analysis of composite liquid particles with interfacial parameters expected in combustion aerosols demonstrates that thermal fragmentation of these particles may occur, and this mechanism may play a role in the evolution of combustion aerosols. Conditions for thermal fragmentation to play a significant role (for aerosol particles other than those from motor vehicle exhaust) are determined and examined theoretically. Conditions for spontaneous transformation between the completely and partially encapsulated states of composite particles are also examined, demonstrating the possibility of such transformation in combustion aerosols. Indeed, it was shown that for some typical components found in aerosols, the transformation could take place on time scales of less than 20 s. The analysis showed that factors influencing surface and interfacial tension play an important role in this transformation process. Such transformation may, for example, result in delayed evaporation of composite particles with a significant water component, leading to observable effects in the evolution of combustion aerosols (including possible local humidity maxima near a source, such as a busy road). These results will be important for the further development and understanding of aerosol physics and technologies, including combustion aerosols and their evolution near a source.
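The fragmentation criterion described above can be sketched numerically (all parameter values and the flat-contact-patch model below are illustrative assumptions, not taken from the thesis): separating two partially coagulated droplets replaces their contact patch by two free surfaces, and the resulting Gibbs free energy cost is compared with the thermal energy kT.

```python
import math

kB = 1.380649e-23   # J/K, Boltzmann constant
T  = 300.0          # K

# Assumed (illustrative) parameters for two nano-droplets in partial contact.
sigma1, sigma2 = 0.030, 0.025   # N/m, surface tensions of the two liquids
sigma12        = 0.052          # N/m, their interfacial tension
contact_radius = 1.0e-9         # m, radius of the small contact patch

# Separating the droplets removes interfacial area A_c and creates free
# surface of each liquid over that patch (flat-patch approximation):
#   dG = A_c * (sigma1 + sigma2 - sigma12)
A_c = math.pi * contact_radius ** 2
dG = A_c * (sigma1 + sigma2 - sigma12)

# Boltzmann-style estimate: fragmentation is plausible when dG is only a few kT.
print(round(dG / (kB * T), 2))
```

With these assumed tensions the barrier is of order a few kT, the regime where thermal (Brownian) motion can drive fragmentation; a large contact patch or strongly adhesive interface pushes dG to many kT and suppresses it.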


Previous work has shown that amplitude and direction are two independently controlled parameters of aimed arm movements, and performance, therefore, suffers when they must be decomposed into Cartesian coordinates. We now compare decomposition into different coordinate systems. Subjects pointed at visual targets in 2-D with a cursor, using a two-axis joystick or two single-axis joysticks. In the latter case, joystick axes were aligned with the subjects’ body axes, were rotated by –45°, or were oblique (i.e., one axis was in an egocentric frame and the other was rotated by –45°). Cursor direction always corresponded to joystick direction. We found that compared with the two-axis joystick, responses with single-axis joysticks were slower and less accurate when the axes were oriented egocentrically; the deficit was even more pronounced when the axes were rotated and was most pronounced when they were oblique. This confirms that decomposition of motor commands is computationally demanding and documents that this demand is lowest for egocentric, higher for rotated, and highest for oblique coordinates. We conclude that most current vehicles use computationally demanding man–machine interfaces.
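The coordinate decompositions compared above can be sketched geometrically (a minimal illustration of the task, not the experimental apparatus): a 2-D movement vector is re-expressed as commands along two response axes by projecting it onto each axis direction.

```python
import math

def decompose(vec, axis_angles_deg):
    """Project a 2-D movement vector onto two response axes, each given by
    its orientation in degrees relative to the subject's body axes."""
    vx, vy = vec
    return tuple(vx * math.cos(math.radians(a)) + vy * math.sin(math.radians(a))
                 for a in axis_angles_deg)

movement = (1.0, 1.0)   # a diagonal (45 deg) aiming movement

# Egocentric axes require equal commands on both joysticks; axes rotated by
# -45 deg turn the same movement into a single-axis command; the oblique
# frame mixes the two.
print([round(c, 3) for c in decompose(movement, (0, 90))])     # egocentric
print([round(c, 3) for c in decompose(movement, (-45, 45))])   # rotated -45 deg
print([round(c, 3) for c in decompose(movement, (0, 45))])     # oblique
```

The geometry itself is trivial; the study's point is that performing this re-expression mentally carries a computational cost that grows from egocentric to rotated to oblique frames.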


Process modeling is a complex organizational task that requires many iterations and much communication between the business analysts and the domain specialists involved. The challenge is exacerbated when modeling has to be performed in a cross-organizational, distributed environment. Some systems have been developed to support collaborative process modeling, all of which use traditional 2D interfaces. We present an environment for collaborative process modeling that uses 3D virtual environment technology, making use of avatar instantiations of user ego-centres to allow spatial embodiment of the user with reference to the process model. We describe an innovative prototype collaborative process modeling approach, implemented as a modeling environment in Second Life. This approach leverages virtual environments to provide user context for editing and collaborative exercises. We present a positive preliminary report on a case study in which a test group modelled a business process using the system in Second Life.


Virtual 3D models of long bones are increasingly being used for implant design and research applications. The current gold standard for the acquisition of such data is Computed Tomography (CT) scanning. Because of the radiation exposure involved, CT is generally limited to imaging clinical cases and cadaver specimens. Magnetic Resonance Imaging (MRI) does not involve ionising radiation and can therefore be used to image selected healthy human volunteers for research purposes. The feasibility of MRI as an alternative to CT for the acquisition of morphological bone data of the lower extremity has been demonstrated in recent studies [1, 2]. Current limitations of MRI include long scanning times and difficulties with image segmentation in certain anatomical regions due to poor contrast between bone and surrounding muscle tissue. Higher field strength scanners promise faster imaging times or better image quality. In this study, image quality at 1.5T is quantitatively compared with that at 3T.

The femora of five human volunteers were scanned using 1.5T and 3T MRI scanners from the same manufacturer (Siemens) with similar imaging protocols. A 3D FLASH sequence was used with TE = 4.66 ms, flip angle = 15° and voxel size = 0.5 × 0.5 × 1 mm. PA-matrix and body-matrix coils were used to cover the lower limb and pelvis, respectively. The signal-to-noise ratio (SNR) [3] and contrast-to-noise ratio (CNR) [3] of axial images from the proximal, shaft and distal regions were used to assess the quality of images from the two scanners. The SNR was calculated for muscle and bone marrow in the axial images; the CNR was calculated for the muscle-cortex and cortex-bone marrow interfaces.

Preliminary results (one volunteer) show that the SNR of muscle for the shaft and distal regions was higher in the 3T images (11.65 and 17.60) than in the 1.5T images (8.12 and 8.11).
For the proximal region, the SNR of muscle was higher in the 1.5T images (7.52) than in the 3T images (6.78). The SNR of bone marrow was slightly higher in the 1.5T images for the proximal and shaft regions, but lower in the distal region. The CNR between muscle and bone in all three regions was higher in the 3T images (4.14, 6.55 and 12.99) than in the 1.5T images (2.49, 3.25 and 9.89). The CNR between bone marrow and bone was slightly higher in the 1.5T images (4.87, 12.89 and 10.07) than in the 3T images (3.74, 10.83 and 10.15). These results show that the 3T images provide higher contrast between bone and muscle tissue than the 1.5T images. It is expected that this improvement in image contrast will significantly reduce the time required for the largely manual segmentation of the MR images. Future work will focus on optimising the 3T imaging protocol to reduce chemical shift and susceptibility artifacts.
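The image-quality measures used above follow the standard definitions (sketched here on synthetic pixel values; the exact estimator of reference [3] may differ in detail): SNR is the mean tissue intensity over the noise standard deviation, and CNR is the absolute intensity difference across an interface over the same noise estimate.

```python
import statistics

def snr(signal_pixels, noise_pixels):
    # Mean tissue intensity divided by the standard deviation of background noise.
    return statistics.mean(signal_pixels) / statistics.stdev(noise_pixels)

def cnr(tissue_a, tissue_b, noise_pixels):
    # Contrast between two tissues at an interface, relative to the noise level.
    return (abs(statistics.mean(tissue_a) - statistics.mean(tissue_b))
            / statistics.stdev(noise_pixels))

# Synthetic region-of-interest samples (illustrative, not the study data).
muscle = [118, 122, 120, 119, 121]
cortex = [35, 38, 36, 37, 34]
background_noise = [2, -3, 1, -2, 2, -1, 3, -2]

print(round(snr(muscle, background_noise), 1))
print(round(cnr(muscle, cortex, background_noise), 1))
```

In practice the regions of interest are drawn on the axial slices (tissue ROIs for the means, an air region for the noise), which is how the per-region figures above were obtained.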


Non-driving-related cognitive load and variations in emotional state may impair a driver's capability to control a vehicle and introduce driving errors. Reliable detection of cognitive load and emotion in drivers would benefit the design of active safety systems and other intelligent in-vehicle interfaces. In this study, speech produced by 68 subjects while driving in urban areas is analyzed. A particular focus is on speech production differences in two secondary cognitive tasks, interactions with a co-driver and calls to automated spoken dialog systems (SDS), and in two emotional states during the SDS interactions, neutral and negative. A number of speech parameters are found to vary across the cognitive and emotion classes. The suitability of selected cepstral- and production-based features for automatic cognitive task and emotion classification is investigated. A fusion of GMM and SVM classifiers yields an accuracy of 94.3% in cognitive task classification and 81.3% in emotion classification.
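The classifier fusion reported above can be sketched at score level (a generic late-fusion scheme under an assumed weight; the paper's exact fusion rule is not specified here): a normalized GMM log-likelihood score and an SVM decision value are combined by a weighted sum before thresholding.

```python
def fuse(gmm_score, svm_score, w=0.5):
    """Weighted score-level fusion of two classifier outputs.
    Both scores are assumed normalized so that positive favours class 1."""
    return w * gmm_score + (1 - w) * svm_score

def classify(gmm_score, svm_score, w=0.5):
    # Final decision on the fused score.
    return 1 if fuse(gmm_score, svm_score, w) > 0 else 0

# Illustrative scores for two utterances: in the first, the GMM weakly and the
# SVM strongly favour the "cognitive task" class; in the second the combined
# evidence points the other way.
print(classify(gmm_score=0.2, svm_score=1.1))   # 1
print(classify(gmm_score=-0.9, svm_score=0.3))  # 0
```

Score-level fusion of this kind lets one classifier compensate for the other on utterances where their feature views (cepstral vs. production-based) disagree.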


Frontline employees constitute one of the key interfaces that service organisations have with their markets. Many strategies to enhance the ability of these employees to satisfy the needs of customers have been proposed. Among these, empowering employees has been suggested to enhance the customer orientation of the firm and, consequently, its effectiveness in serving the market. However, the impact of empowerment in service organisations remains somewhat contentious. This paper examines the role of empowerment as an organisational service strategy and identifies its consequences for role stress, job satisfaction and the willingness of service employees to serve their customers.


The fracture behavior of Cu-Ni laminate composites has been investigated by tensile testing. It was found that as the individual layer thickness decreases from 100 to 20 nm, the fracture angle of the Cu-Ni laminate changes from 72 degrees to 50 degrees. Cross-sectional observations reveal that the fracture of the Ni layers transforms from opening to shear mode as the layer thickness decreases, while the Cu layers remain in shear mode. Competing mechanisms are proposed to explain this length-scale-dependent variation in the fracture mode of the metallic laminate composites.


In this paper we discuss how a network of sensors and robots can cooperate to solve important robotics problems such as localization and navigation. We use a robot to localize sensor nodes, and we then use these localized nodes to navigate robots and humans through the sensorized space. We explore these novel ideas with results from two large-scale sensor network and robot experiments involving 50 motes, two types of flying robot: an autonomous helicopter and a large indoor cable array robot, and a human-network interface. We present the distributed algorithms for localization, geographic routing, path definition and incremental navigation. We also describe how a human can be guided using a simple hand-held device that interfaces to this same environmental infrastructure.
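The geographic routing used above can be illustrated with a minimal greedy sketch over localized nodes (node names and positions below are made up): each hop forwards to the in-range neighbour closest to the destination, stopping if no neighbour makes progress.

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def greedy_route(nodes, src, dst, radio_range=1.5):
    """Greedy geographic routing over a dict of node -> (x, y) positions.
    Forwards to the in-range neighbour closest to dst; returns the path,
    stopping early at a local minimum (where a recovery scheme would take over)."""
    path, current = [src], src
    while current != dst:
        neighbours = [n for n in nodes
                      if n != current and dist(nodes[n], nodes[current]) <= radio_range]
        best = min(neighbours, key=lambda n: dist(nodes[n], nodes[dst]), default=None)
        if best is None or dist(nodes[best], nodes[dst]) >= dist(nodes[current], nodes[dst]):
            break   # local minimum: greedy forwarding cannot progress
        path.append(best)
        current = best
    return path

# Hypothetical localized motes, with the destination at the far end.
motes = {"A": (0, 0), "B": (1, 0), "C": (2, 0), "D": (3, 0)}
print(greedy_route(motes, "A", "D"))   # ['A', 'B', 'C', 'D']
```

The same position table that the robot builds during localization serves both purposes described above: routing packets hop by hop, and issuing "go toward node X" guidance to a hand-held device.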


Interactive documents for use with the World Wide Web have been developed for viewing multi-dimensional radiographic and visual images of human anatomy derived from the Visible Human Project. Emphasis has been placed on user-controlled features and selections. The purpose was to develop an interface, independent of the host operating system and browser software, that would allow viewing of information by multiple users. The interfaces were implemented using HyperText Markup Language (HTML) forms, the C programming language and the Perl scripting language. Images were pre-processed using ANALYZE and stored on a Web server in CompuServe GIF format. Viewing options, such as interactive thresholding and two-dimensional slice direction, were included in the document design. The interface is an example of what may be achieved using the World Wide Web. Key applications envisaged for such software include education, research, access to information through internal databases, and simultaneous sharing of images by remote computers for diagnostic purposes by health personnel.
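The interactive thresholding option described above amounts to a simple server-side pixel operation (sketched here in Python on a synthetic grayscale array; the original used C and Perl CGI scripts behind the HTML forms): pixels at or above the user-chosen level become foreground.

```python
def threshold(image, level):
    """Binarize a grayscale image: pixels at or above `level` become 255
    (foreground), the rest 0 - the operation behind the form's threshold value."""
    return [[255 if px >= level else 0 for px in row] for row in image]

# Synthetic 3x3 slice region (0-255 intensities, illustrative only).
slice_region = [
    [ 12, 200,  90],
    [180,  60, 220],
    [ 30, 140,  10],
]
print(threshold(slice_region, 128))
```

Re-running the operation with each submitted form value and returning a regenerated GIF is what makes the thresholding "interactive" despite the stateless Web interface.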


Physical infrastructure assets are important components of our society and our economy. They are usually designed to last for many years, are expected to be heavily used during their lifetime, carry considerable load, and are exposed to the natural environment. They are also normally major structures, and therefore represent a heavy investment, requiring constant management over their life cycle to ensure that they perform as required by their owners and users. Given a complex and varied infrastructure life cycle, constraints on available resources, and continuing requirements for effectiveness and efficiency, good management of infrastructure is important. While there is often no single best management approach, the choice of options is improved by better identification and analysis of the issues, by the ability to prioritise objectives, and by a scientific approach to the analysis process. The abilities to better understand the effect of inputs in the infrastructure life cycle on results, to minimise uncertainty, and to better evaluate the effect of decisions in a complex environment are important in allocating scarce resources and making sound decisions. Through the development of an infrastructure management modelling and analysis methodology, this thesis provides a process that assists the infrastructure manager in analysis, prioritisation and decision making. This is achieved through practical, relatively simple tools, integrated in a modular, flexible framework that aims to provide an understanding of the interactions and issues in the infrastructure management process. The methodology uses a combination of flowcharting and analysis techniques. It first charts the infrastructure management process and its underlying infrastructure life cycle through the time interaction diagram, a graphical flowcharting methodology that extends methodologies for modelling data flows in information systems.
This process divides the infrastructure management process over time into self-contained modules, each based on a particular set of activities, with the information flows between them defined by their interfaces and relationships. The modular approach also permits more detailed analysis, or aggregation, as the case may be. It also forms the basis of extending the infrastructure modelling and analysis process to infrastructure networks, using individual infrastructure assets and their related projects as the basis of the network analysis process. It is recognised that the infrastructure manager is required to meet, and balance, a number of different objectives; therefore a number of high-level outcome goals for the infrastructure management process have been developed, based on common purpose or measurement scales. These goals form the basis of classifying the larger set of multiple objectives for analysis purposes. A two-stage approach that first rationalises and then weights objectives, using a paired comparison process, ensures that the objectives to be met are kept to the minimum number required and are fairly weighted. Qualitative variables are incorporated into the weighting and scoring process, with utility functions proposed where there is risk or a trade-off situation applies. Variability is considered important in the infrastructure life cycle; the approach used is based on analytical principles but incorporates randomness in variables where required. The modular design of the process permits alternative processes to be used within particular modules, if this is considered a more appropriate way of analysis, provided that boundary conditions and requirements for linkages to other modules are met.
Development and use of the methodology has highlighted a number of infrastructure life cycle issues, including data and information aspects, and consequences of change over the life cycle, as well as variability and the other matters discussed above. It has also highlighted the requirement to use judgment where required, and for organisations that own and manage infrastructure to retain intellectual knowledge regarding that infrastructure. It is considered that the methodology discussed in this thesis, which to the author's knowledge has not been developed elsewhere, may be used for the analysis of alternatives, planning, prioritisation of a number of projects, and identification of the principal issues in the infrastructure life cycle.
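The paired-comparison weighting described above can be sketched as follows (a generic scheme with made-up objectives; the thesis' exact procedure may differ): each pair of objectives is compared once, win counts are tallied, and the weights are the normalized totals.

```python
from itertools import combinations

def paired_comparison_weights(objectives, prefer):
    """Weight objectives by pairwise comparison: `prefer(a, b)` returns the
    preferred objective of each pair; weights are normalized win counts."""
    wins = {o: 0 for o in objectives}
    for a, b in combinations(objectives, 2):
        wins[prefer(a, b)] += 1
    total = sum(wins.values())
    return {o: wins[o] / total for o in objectives}

# Hypothetical outcome goals, compared via an assumed priority ranking.
goals = ["safety", "cost", "service_level"]
priority = {"safety": 0, "service_level": 1, "cost": 2}
weights = paired_comparison_weights(goals, lambda a, b: min(a, b, key=priority.get))
print(weights)
```

One known quirk of raw win counts, visible here, is that the lowest-ranked objective receives zero weight; refinements (e.g. adding a baseline score per objective) address this, which is one reason a rationalisation stage before weighting matters.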


Component software has many benefits, most notably increased software re-use; however, the component software process places heavy burdens on programming language technology, which modern object-oriented programming languages do not address. In particular, software components require specifications that are both sufficiently expressive and sufficiently abstract, and, where possible, these specifications should be checked formally by the programming language. This dissertation presents a programming language called Mentok that provides two novel programming language features enabling improved specification of stateful component roles. Negotiable interfaces are interface types extended with protocols, and allow specification of changing method availability, including some patterns of out-calls and re-entrance. Type layers are extensions to module signatures that allow specification of abstract control flow constraints through the interfaces of a component-based application. Development of Mentok's unique language features included creation of MentokC, the Mentok compiler, and formalization of key properties of Mentok in mini-languages called MentokP and MentokL.
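Mentok's negotiable interfaces (interface types extended with protocols) can be approximated in a mainstream language as a runtime sketch (a Python stand-in of my own; Mentok itself checks such protocols statically): the component tracks its protocol state and rejects any method that is unavailable in that state.

```python
class ProtocolError(Exception):
    pass

class FileLike:
    """Runtime sketch of a protocol-extended interface: the protocol
    'closed -> open -> (read)* -> closed' gates method availability."""
    PROTOCOL = {"closed": {"open"}, "open": {"read", "close"}}

    def __init__(self):
        self.state = "closed"

    def _require(self, method):
        # Reject calls that the current protocol state does not allow.
        if method not in self.PROTOCOL[self.state]:
            raise ProtocolError(f"{method}() unavailable in state {self.state!r}")

    def open(self):
        self._require("open")
        self.state = "open"

    def read(self):
        self._require("read")
        return "data"

    def close(self):
        self._require("close")
        self.state = "closed"

f = FileLike()
f.open()
print(f.read())       # allowed: 'read' is available in state 'open'
f.close()
try:
    f.read()          # protocol violation: the file is closed again
except ProtocolError:
    print("rejected")
```

The difference in Mentok is that such violations are compile-time type errors rather than runtime exceptions, which is what makes the specifications both expressive and formally checkable.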