387 results for inter-area oscillation frequency


Relevance:

20.00%

Publisher:

Abstract:

This paper explores how the design of creative clusters, a key strategy in promoting the urban creative economy, has played out in Shanghai. Creative clusters in the European and North American context emerged ‘organically’: they developed spontaneously in cities that went through a period of post-industrial decline. Creative industries grew up in these cities as part of a new urban economy in the wake of old manufacturing industries. Artists and creative entrepreneurs moved into vacant warehouses and factories and began the trend of ‘creative clusters’. Such clusters facilitate the transfer of tacit knowledge through informal learning, the efficient sourcing of skills and information, competition, collaboration and learning, inter-cluster trading and networking. This new urban phenomenon was soon targeted by local economic development policies charged with regenerating and restructuring industrial activities in cities. Rising interest from real estate and local economic development has led to more and more planned creative clusters. With the aim of catching up with the world’s creative cities, Shanghai has planned over 100 creative clusters since 2005. Alongside these officially designated creative clusters, there are organically emerged clusters that are much smaller in scale and much more informal in their management. They emerged originally in old residential areas just outside the CBD and have expanded to include the French Concession, the most sought-after residential area at the edge of the CBD. More recently, office buildings within the CBD have been made available for creative uses. From fringe to CBD, these organic creative clusters provide crucial evidence for the design of creative clusters. This paper is organized in two parts. In the first part, I present a case study of eight ‘official’ clusters (a title granted by the local government) in Shanghai, through which I develop key indicators of the success or failure of creative clusters and link them with their physical, social and operational efficacies. In the second part, a variety of ‘alternative’ clusters (organically formed clusters, most of which are not recognized by the government) offers the possibility of rethinking the so-called ‘cluster development strategy’: what kinds of spaces are appropriate for use by clusters, who should manage them and in what format, and ultimately how should their relationship with the rest of the city be defined?

Relevance:

20.00%

Publisher:

Abstract:

Demands for delivering high instantaneous power in a compressed form (pulse shape) have increased widely in recent decades. The flexible shapes and variable pulse specifications offered by pulsed power have made it a practical and effective supply method for an extensive range of applications. In particular, the release of basic subatomic particles (i.e. electrons, protons and neutrons) from an atom (the ionization process) and the synthesizing of molecules to form ions or other molecules are among those reactions that necessitate a large amount of instantaneous power. In addition to such decomposition processes, there have recently been requests for pulsed power in other areas such as the combination of molecules (i.e. fusion, material joining), radiation generation (i.e. electron beams, lasers and radar), explosions (i.e. concrete recycling), and wastewater, exhaust gas and material surface treatments. These pulses are widely employed in the silent discharge process in all types of materials (gas, fluid and solid), in some cases to form a plasma and consequently accelerate the associated process. Owing to this fast-growing demand for pulsed power in industrial and environmental applications, the need for more efficient and flexible pulse modulators is now receiving greater consideration. Sensitive applications, such as plasma fusion and laser guns, also require more precisely produced repetitive pulses of higher quality. Many research studies are being conducted in different areas that need a flexible pulse modulator to vary pulse features and investigate the influence of these variations on the application. In addition, there is a need to prevent the waste of a considerable amount of energy caused by the arc phenomena that frequently occur after the plasma process. Control over power flow during the supply process is a critical capability that enables the pulse supply to halt the supply process at any stage. Different pulse modulators utilising different accumulation techniques, including Marx generators (MG), magnetic pulse compressors (MPC), pulse forming networks (PFN) and multistage Blumlein lines (MBL), are currently employed to supply a wide range of applications. Gas/magnetic switching technologies (such as the spark gap and the hydrogen thyratron) have conventionally been used as switching devices in pulse modulator structures because of their high voltage ratings and considerably short rise times. However, they also suffer from serious drawbacks such as low efficiency, reliability and repetition rate, and a short life span. Being bulky, heavy and expensive are further disadvantages associated with these devices. Recently developed solid-state switching technology is an appropriate substitute for these switching devices because of the benefits it brings to pulse supplies. Besides being compact, efficient, affordable and reliable, and having a long life span, its high-frequency switching capability allows repetitive operation of the pulsed power supply. The main concerns in using solid-state transistors are the voltage rating and the rise time of available switches, which in some cases cannot satisfy the application’s requirements. However, there are several power electronics configurations and techniques that make solid-state utilisation feasible for high-voltage pulse generation.
Therefore, the design and development of novel methods and topologies with higher efficiency and flexibility for pulsed power generators has been considered the main scope of this research work. This aim is pursued through several innovative proposals that can be classified under two principal objectives: (1) to innovate and develop novel solid-state based topologies for pulsed power generation; and (2) to improve available technologies that have the potential to accommodate solid-state technology by revising, reconfiguring and adjusting their structures and control algorithms. The quest to identify novel topologies for proper pulsed power production began with a deep and thorough review of conventional pulse generators and useful power electronics topologies. This study suggested that efficiency and flexibility are the most significant demands of plasma applications that have not been met by state-of-the-art methods. Many solid-state based configurations were considered and simulated in order to evaluate their potential to be utilised in the pulsed power area. Parts of this literature review are documented in Chapter 1 of this thesis. Current source topologies demonstrate valuable advantages in supplying loads with capacitive characteristics, such as plasma applications. To investigate the influence of the switching transients associated with solid-state devices on the rise time of pulses, simulation-based studies were undertaken. A variable current source was considered to pump different current levels into a capacitive load, and it was evident that dissimilar dv/dt values are produced at the output. Based on the evidence acquired from this examination, transient effects on pulse rise time were ruled out. A detailed report of this study is given in Chapter 6 of this thesis. This study inspired the design of a solid-state based topology that takes advantage of both current and voltage sources. A series of switch-resistor-capacitor units at the output splits the produced voltage into lower levels so that it can be shared by the switches. A smart but complicated switching strategy is also designed to discharge the residual energy after each supply cycle. To prevent reverse power flow and to reduce the complexity of the control algorithm in this system, the resistors in the common paths of the units are substituted with diode rectifiers (switch-diode-capacitor). This modification not only makes it feasible to stop the load supply process at any stage (and consequently save energy), but also enables the converter to operate in a two-stroke mode with asymmetrical capacitors. Component selection and energy-exchange calculations are carried out with respect to the application's specifications and demands. Both topologies were modelled simply, and simulation studies were carried out with the simplified models. Experimental assessments were also carried out on the implemented hardware, and the results verified the initial analysis. Details of both converters are thoroughly discussed in Chapters 2 and 3 of the thesis. Conventional MGs have recently been modified to use solid-state transistors (i.e. insulated-gate bipolar transistors) instead of magnetic/gas switching devices. The resistive insulators previously used in their structures are substituted by diode rectifiers so that the MGs achieve proper voltage sharing.
However, despite utilizing solid-state technology in MG configurations, further design and control amendments can still be made to achieve improved performance with fewer components. After considering a number of charging techniques, the resonance phenomenon is adopted in one proposal to charge the capacitors. In addition to charging the capacitors to twice the input voltage, triggering the switches at the moment the current conducted through them is zero significantly reduces the switching losses. Another configuration is also introduced in this research for the Marx topology, based on commutation circuits that use a current source to charge the capacitors. According to this design, diode-capacitor units, each including two Marx stages, are connected in cascade through solid-state devices and aggregate the voltages across the capacitors to produce a high-voltage pulse. The polarity of the voltage across one capacitor in each unit is reversed in an intermediate mode by connecting the commutation circuit to that capacitor. Isolation of the input side from the load side is provided in this topology by disconnecting the load from the current source during the supply process. Furthermore, the number of required fast switching devices in both designs is reduced to half the number used in a conventional MG; they are replaced with slower switches (such as thyristors) that need simpler driving modules. In addition, the number of switches contributing to the discharge paths is halved, which leads to a reduction in conduction losses. The associated models are simulated, and hardware tests are performed to verify the validity of the proposed topologies. Chapters 4, 5 and 7 of the thesis present all analysis and approaches relevant to these topologies.
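For context on the resonant charging claim above, the textbook behaviour of an idealised lossless LC charging loop (capacitor initially uncharged; the notation is generic and not taken from the thesis) is

$$v_C(t) = V_{in}\bigl(1-\cos\omega_0 t\bigr), \qquad i_L(t) = \frac{V_{in}}{Z_0}\sin\omega_0 t, \qquad \omega_0 = \frac{1}{\sqrt{LC}}, \quad Z_0 = \sqrt{L/C}.$$

At $t=\pi/\omega_0$ the capacitor voltage peaks at $2V_{in}$ while the inductor current crosses zero, which is why triggering the switches at that instant combines voltage doubling with zero-current switching and hence low switching loss.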

Relevance:

20.00%

Publisher:

Abstract:

Background: Whether suicide in China has significant seasonal variations is unclear. The aim of this study is to examine the seasonality of suicide in Shandong, China, and to assess the associations of suicide seasonality with gender, residence, age and method of suicide. Methods: Three types of test (Chi-square, Edwards' T and Roger's Log method) were used to detect the seasonality of the suicide data extracted from the official mortality data of the Shandong Disease Surveillance Point (DSP) system. Peak/low ratios (PLRs) and 95% confidence intervals (CIs) were calculated to indicate the magnitude of seasonality. Results: A statistically significant seasonality, with a single peak in suicide rates in spring and early summer and a dip in winter, was observed and remained relatively consistent over the years. Regardless of gender, suicide seasonality was more pronounced in rural areas, in younger age groups and for non-violent methods, in particular self-poisoning by pesticide. Conclusions: There are statistically significant seasonal variations in completed suicide for both men and women in Shandong, China. Differences exist between residences (urban/rural), age groups and suicide methods. The results appear to support a sociological explanation of suicide seasonality.
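As a minimal illustration of two of the seasonality measures named above (a Chi-square test against a uniform monthly expectation and a peak/low ratio), the sketch below uses purely hypothetical monthly counts; it is not the study's code or data.

```python
import numpy as np
from scipy.stats import chisquare

# Hypothetical monthly counts (Jan-Dec) -- illustrative numbers only.
counts = np.array([310, 305, 360, 395, 410, 400, 370, 350, 330, 315, 300, 285])
days = np.array([31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31])

# Expected counts proportional to month length under the no-seasonality hypothesis.
expected = counts.sum() * days / days.sum()
stat, p = chisquare(counts, f_exp=expected)

# Peak/low ratio of daily rates, a simple magnitude-of-seasonality measure.
rates = counts / days
plr = rates.max() / rates.min()
print(f"chi-square = {stat:.1f}, p = {p:.4f}, PLR = {plr:.2f}")
```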

Relevance:

20.00%

Publisher:

Abstract:

The widespread development of Decision Support Systems (DSS) in construction indicates that the evaluation of such software has become more important than before. However, most research in the construction discipline has not attempted to assess DSS usability, so little is known about how to properly evaluate a DSS for a specific problem. In this paper, we present a practical framework that can guide DSS evaluation. It focuses on how to evaluate software that is designed specifically for the consultant selection problem. The framework features two main components, i.e. Sub-system Validation and Face Validation. Two case studies of consultant selection at the Malaysian Department of Irrigation and Drainage were integrated into this framework. Inter-disciplinary areas such as Software Engineering, Human-Computer Interaction (HCI) and Construction Project Management underpin the discussion of the paper. It is anticipated that this work can foster better DSS development and quality decision making that accurately meets clients' expectations and needs.

Relevance:

20.00%

Publisher:

Abstract:

This paper illustrates the damage identification and condition assessment of a three-storey bookshelf structure using a new frequency response function (FRF) based damage index and artificial neural networks (ANNs). A major obstacle to using measured frequency response function data is the large number of input variables presented to the ANNs. This problem is overcome by applying a data reduction technique called principal component analysis (PCA). In the proposed procedure, ANNs, with their powerful pattern recognition and classification ability, are used to extract damage information such as damage locations and severities from the measured FRFs. Simple neural network models are developed and trained by back propagation (BP) to associate the FRFs with the damaged or undamaged locations and the severity of damage to the structure. Finally, the effectiveness of the proposed method is illustrated and validated using real data provided by the Los Alamos National Laboratory, USA. The results show that the PCA-based artificial neural network method is suitable and effective for damage identification and condition assessment of building structures. In addition, it is clearly demonstrated that the accuracy of the proposed damage detection method can be improved by increasing the number of baseline datasets and the number of principal components of the baseline dataset.
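A minimal sketch of the kind of pipeline described above (PCA compression of FRF inputs feeding a small back-propagation-trained network) is given below; the data are random placeholders and the layer sizes, component counts and library choices are assumptions, not the paper's implementation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder data: 200 measurements x 1024 FRF magnitude points, labels 0-3
# (0 = undamaged, 1-3 = damage at storey 1-3). Real FRFs would come from testing.
X = rng.normal(size=(200, 1024))
y = rng.integers(0, 4, size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = make_pipeline(
    PCA(n_components=20),                    # data reduction of the FRF inputs
    MLPClassifier(hidden_layer_sizes=(32,),  # small network trained by back propagation
                  max_iter=2000, random_state=0),
)
model.fit(X_train, y_train)
print("hold-out accuracy:", model.score(X_test, y_test))
```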

Relevance:

20.00%

Publisher:

Abstract:

Damage detection in structures has become increasingly important in recent years. While a number of damage detection and localization methods have been proposed, very few attempts have been made to explore structural damage with noise-polluted data, an unavoidable effect in the real world. Measurement data are contaminated by noise from the test environment as well as from electronic devices, and this noise tends to produce erroneous results from structural damage identification methods. It is therefore important to investigate a method that can perform better with noise-polluted data. This paper introduces a new damage index using principal component analysis (PCA) for damage detection of building structures that is able to accept noise-polluted frequency response functions (FRFs) as input. The FRF data are obtained from the datagen function of the MATLAB program available on the website of the IASC-ASCE (International Association for Structural Control - American Society of Civil Engineers) Structural Health Monitoring (SHM) Task Group. The proposed method involves a five-stage process: calculation of the FRFs, calculation of damage index values using the proposed algorithm, development of the artificial neural networks, introduction of the damage indices as input parameters, and damage detection of the structure. This paper briefly describes the methodology and the results obtained in detecting damage in all six cases of the benchmark study with different noise levels. The proposed method is applied to a benchmark problem sponsored by the IASC-ASCE Task Group on Structural Health Monitoring, which was developed to facilitate the comparison of various damage identification methods. The results show that the PCA-based algorithm is effective for structural health monitoring with noise-polluted FRFs, a common occurrence when dealing with industrial structures.
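For illustration of the noise-pollution aspect discussed above, the short sketch below adds zero-mean Gaussian noise to a placeholder FRF at several noise levels; the function name, levels and data are assumptions rather than the benchmark's actual procedure.

```python
import numpy as np

def add_noise(frf, noise_percent, rng=None):
    """Add zero-mean Gaussian noise scaled to a percentage of the FRF RMS level."""
    if rng is None:
        rng = np.random.default_rng()
    sigma = (noise_percent / 100.0) * np.sqrt(np.mean(np.abs(frf) ** 2))
    return frf + rng.normal(0.0, sigma, size=frf.shape)

# Placeholder FRF magnitudes standing in for the benchmark data.
clean = np.abs(np.fft.rfft(np.random.default_rng(1).normal(size=4096)))

for level in (5, 10, 20):  # noise levels in percent
    noisy = add_noise(clean, level)
    snr_db = 20 * np.log10(np.linalg.norm(clean) / np.linalg.norm(noisy - clean))
    print(f"{level}% noise -> SNR about {snr_db:.1f} dB")
```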

Relevance:

20.00%

Publisher:

Abstract:

The demand for taller structures is becoming imperative almost everywhere in the world, in addition to the challenges of material and labour costs, project timelines, etc. This paper presents a study undertaken in view of the challenging nature of high-rise construction, for which there are no generic rules for deflection minimization and frequency control. The effects of cyclonic wind and the provision of outriggers on 28-storey, 42-storey and 57-storey buildings are examined, and conclusions are drawn that pave the way for researchers to conduct further study in this particular area of civil engineering. The results show that plan dimensions have a vital impact on the structural height that can be achieved. Increasing the height while keeping the plan dimensions the same leads to a reduction in lateral rigidity. To achieve the required stiffness, an increase in bracing sizes as well as the introduction of additional lateral load resisting systems, such as belt trusses and outriggers, is required.
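As a rough pointer to why lateral rigidity drops so quickly with height, the classical deflection of a uniformly wind-loaded cantilever (a simplification, not the paper's analysis model) is

$$\delta_{max} = \frac{wH^{4}}{8EI},$$

so with the plan dimensions (and hence $EI$) held constant, drift grows with the fourth power of the height $H$, which is why belt trusses and outriggers are introduced to recover stiffness.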

Relevance:

20.00%

Publisher:

Abstract:

Language use has proven to be the most complex and complicating of all Internet features, yet people and institutions invest enormously in language and cross-language features because they are fundamental to the success of the Internet’s past, present and future. The thesis focuses on the development of the latter, that is, features that facilitate and signify linking between or across languages, in both their historical and current contexts. In the theoretical analysis, the conceptual platform of inter-language linking is developed both to accommodate efforts towards a new social complexity model for the co-evolution of languages and language content, and to create an open analytical space for language and cross-language related features of the Internet and beyond. The practised uses of inter-language linking have changed over the last decades. Before and during the first years of the WWW, mechanisms of inter-language linking were at best important elements used to create new institutional or content arrangements, but on a large scale they were insignificant. This changed with the emergence of the WWW and its development into a web in which content in different languages co-evolves. The thesis traces the inter-language linking mechanisms that facilitated these dynamic changes by analysing what these linking mechanisms are, how their historical as well as current contexts can be understood, and what kinds of cultural-economic innovation they enable and impede. The study discusses this alongside four empirical cases of bilingual or multilingual media use, ranging from television and web services for the languages of smaller populations to large-scale, multiple-language web ventures by the British Broadcasting Corporation, the Special Broadcasting Service Australia, Wikipedia and Google. To sum up, the thesis introduces the concepts of ‘inter-language linking’ and the ‘lateral web’ to model the social complexity and co-evolution of languages online. The resulting model reconsiders existing social complexity models in that it is the first that can explain the emergence of large-scale, networked co-evolution of languages and language content facilitated by the Internet and the WWW. Finally, the thesis argues that the Internet enables an open space for language and cross-language related features and investigates how far this process is facilitated by (1) amateurs and (2) human-algorithmic interaction cultures.

Relevance:

20.00%

Publisher:

Abstract:

Multi-user single-antenna multiple-input multiple-output orthogonal frequency division multiplexing (MUSA-MIMO-OFDM) is a promising technology for improving the spectrum efficiency of fixed wireless broadband access systems in rural areas. This letter investigates the capacity of the MUSA-MIMO-OFDM uplink channel through theoretical, simulation and empirical approaches, considering up to six users. We propose an empirical capacity formula suitable for rural areas. Characteristics of the temporal variation of channel capacity, and its relationship with wind speed as observed in a rural area, are also presented in this letter.
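For reference, the theoretical benchmark usually quoted for such an uplink with $K$ single-antenna users and an $N_r$-antenna base station is the multiple-access sum capacity (this is the standard expression, not the empirical formula proposed in the letter):

$$C = \log_2 \det\!\left(\mathbf{I}_{N_r} + \frac{1}{\sigma^{2}}\sum_{k=1}^{K} P_k\,\mathbf{h}_k\mathbf{h}_k^{H}\right) \ \text{bits/s/Hz},$$

where $\mathbf{h}_k$ is user $k$'s channel vector, $P_k$ its transmit power and $\sigma^{2}$ the receiver noise power.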

Relevance:

20.00%

Publisher:

Abstract:

Urban stormwater quality is multifaceted, and the use of a limited number of factors to represent catchment characteristics may not be adequate to explain the complexity of the water quality response to a rainfall event or site-to-site differences in stormwater quality modelling. This paper presents the outcomes of a research study which investigated the adequacy of using land use and impervious area fraction alone to represent catchment characteristics in urban stormwater quality modelling. The research outcomes confirmed that these two parameters alone are inadequate to represent urban catchment characteristics in stormwater quality prediction. Urban form also needs to be taken into consideration, as it was found to have an important impact on stormwater quality by influencing pollutant generation, build-up and wash-off. Urban form refers to characteristics of an urban development such as road layout, the spatial distribution of urban areas and urban design features.
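For context, pollutant build-up and wash-off are commonly represented with exponential forms of the following kind (generic SWMM-style expressions, not necessarily the ones used in this study):

$$B(t_d) = B_{max}\bigl(1 - e^{-k_b t_d}\bigr), \qquad W = B\bigl(1 - e^{-k_w\, i\, t}\bigr),$$

where $B$ is the pollutant mass built up after $t_d$ dry days, $B_{max}$ the maximum build-up, $i$ the rainfall intensity, $t$ the storm duration, and $k_b$, $k_w$ empirical rate constants.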

Relevance:

20.00%

Publisher:

Abstract:

The accuracy of measurement of the mechanical properties of a material using instrumented nanoindentation at extremely small penetration depths relies heavily on the determination of the contact area of the indenter. Our experiments demonstrated that the conventional area function can lead to a significant error when the contact depth is below 40 nm, due to the singularity in the first derivative of the function in this region and the resulting unreasonably sharp peak on the function curve. In this paper, we propose a new area function that is used to calculate the contact area for indentations where the contact depth varies from 10 to 40 nm. The experimental results show that the new area function produces better results than the conventional function. © 2011 Elsevier B.V.
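For context, the conventional area function referred to above is usually written in the Oliver-Pharr form (the fitting constants shown are generic, not values from this paper):

$$A(h_c) = 24.5\,h_c^{2} + C_1 h_c + C_2 h_c^{1/2} + C_3 h_c^{1/4} + \cdots,$$

whose derivative contains terms proportional to $h_c^{-1/2}$ and $h_c^{-3/4}$ that diverge as $h_c \to 0$; this is the shallow-depth singularity that motivates the new function.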

Relevance:

20.00%

Publisher:

Abstract:

This paper illustrates robust fixed-order power oscillation damper design for mitigating power system oscillations. From an implementation and tuning point of view, such a low- and fixed-order structure is common practice for most practical applications, including power systems. However, conventional techniques of optimal and robust control theory cannot handle the fixed-order constraint as such, since it is in general impossible to ensure a target closed-loop transfer function with a controller of an arbitrarily given order. This paper deals with the problem of synthesizing a feedback controller of fixed dynamic order for a linear time-invariant plant, both for a fixed plant and for an uncertain family of plants containing parameter uncertainty, so that stability, robust stability and robust performance are attained. The desired closed-loop specifications considered here are given in terms of a target performance vector representing a desired closed-loop design. The performance of the designed controller is validated through non-linear simulations for a range of contingencies.
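For reference, a typical low, fixed-order damper structure of the kind referred to above combines a washout with lead-lag stages (the notation is generic and not taken from this paper):

$$K(s) = K_d\,\frac{sT_w}{1+sT_w}\left(\frac{1+sT_1}{1+sT_2}\right)^{m},$$

where $K_d$ is the damper gain, $T_w$ the washout time constant and the $m$ lead-lag stages provide phase compensation at the targeted inter-area oscillation frequency (typically a few tenths of a hertz).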

Relevance:

20.00%

Publisher:

Abstract:

Knowledge has been recognised as a powerful yet intangible asset that is difficult to manage. This is especially true in a project environment, where there is the potential to repeat mistakes rather than learn from previous experiences. The literature in the project management field has recognised the importance of knowledge sharing (KS) within and between projects. However, studies in that field focus primarily on KS mechanisms, including lessons learned (LL) and post-project reviews, as the source of knowledge for future projects, and only preliminary research has been carried out on the role of project management offices (PMOs) and organisational culture (OC) in KS. This study investigated KS behaviours in an inter-project context, with a particular emphasis on the role of trust, OC and a range of knowledge sharing mechanisms (KSM) in achieving successful inter-project knowledge sharing (I-PKS). An extensive literature search resulted in the development of an I-PKS Framework, which defined the scope of the research and shaped its initial design. The literature review indicated that existing research relating to the three factors of OC, trust and KSM remains inadequate in its ability to fully explain the role of these contextual factors. In particular, the literature review identified these areas of interest: (1) the conflicting answers to some of the major questions related to KSM, (2) the limited empirical research on the role of different trust dimensions, (3) the limited empirical evidence of the role of OC in KS, and (4) the insufficient research on KS in an inter-project context. The resulting Framework comprised the three main factors of OC, trust and KSM, providing a more integrated view of KS in the inter-project context. Accordingly, the aim of this research was to examine the relationships between these three factors and KS by investigating behaviours related to KS from the project managers’ (PMs’) perspective. In order to achieve this aim, the research sought to answer the following research questions: (1) How does organisational culture influence inter-project knowledge sharing? (2) How does the existence of three forms of trust, (i) ability, (ii) benevolence and (iii) integrity, influence inter-project knowledge sharing? (3) How can different knowledge sharing mechanisms (relational, project management tools and processes, and technology) improve inter-project knowledge sharing behaviours? (4) How do the relationships between the three factors of organisational culture, trust and knowledge sharing mechanisms improve inter-project knowledge sharing, that is, what are the relationships between the factors, and what is the best fit for given cases to ensure more effective inter-project knowledge sharing? Using multiple case studies, this research was designed to build propositions emerging from cross-case data analysis. The four cases were chosen on the basis of theoretical sampling. All cases were large project-based organisations (PBOs) with a strong matrix-type structure, as per the typology proposed by the Project Management Body of Knowledge (PMBoK) (2008). Data were collected from the project management departments of the respective organisations. A range of analytical techniques was used to deal with the data, including pattern matching logic and explanation building analysis, complemented by the use of NVivo for data coding and management.
The propositions generated at the end of the analyses were further compared with the extant literature, and practical implications based on the data and the literature were suggested in order to improve I-PKS. The findings of this research show that OC, trust and KSM contribute to inter-project knowledge sharing, and suggest the existence of relationships between these factors. In view of that, this research identified the relationships between different trust dimensions, suggesting that integrity trust reinforces the relationship between ability trust and knowledge sharing. Furthermore, this research demonstrated that characteristics of culture and trust interact to reinforce preferences for particular mechanisms of knowledge sharing. This means that cultures exhibiting characteristics of the Clan type are more likely to result in trusting relationships, and hence are more likely to use organic sources of knowledge for both tacit and explicit knowledge exchange. In contrast, cultures that are empirically driven and based on control, efficiency and measures (characteristics of the Hierarchy and Market types) display a tendency to develop trust primarily in the ability of non-organic sources, and therefore use these sources to share mainly explicit knowledge. This thesis contributes to the project management literature by providing a more integrative view of I-PKS, bringing the factors of OC, trust and KSM into the picture. A further contribution relates to the use of collaborative tools as a substitute for static LL databases and as a facilitator of tacit KS between geographically dispersed projects. This research adds to the literature on OC by providing rich empirical evidence of the relationships between OC and the willingness to share knowledge, and by providing empirical evidence that OC has an effect on trust; in doing so it extends the theoretical propositions outlined by previous research. The study also extends the research on trust by identifying the relationships between different trust dimensions, suggesting that integrity trust reinforces the relationship between ability trust and KS. Finally, this research provides some directions for future studies.

Relevance:

20.00%

Publisher:

Abstract:

In order to support intelligent transportation system (ITS) road safety applications such as collision avoidance, lane departure warnings and lane keeping, a Global Navigation Satellite System (GNSS) based vehicle positioning system has to provide lane-level (0.5 to 1 m) or even in-lane-level (0.1 to 0.3 m) accurate and reliable positioning information to vehicle users. However, current vehicle navigation systems equipped with a single-frequency GPS receiver can only provide road-level accuracy of 5-10 metres. The positioning accuracy can be improved to sub-metre level or better with augmented GNSS techniques such as Real Time Kinematic (RTK) and Precise Point Positioning (PPP), which have traditionally been used in land surveying and/or in slowly moving environments. In these techniques, GNSS correction data generated from a local, regional or global network of GNSS ground stations are broadcast to users via various communication data links, mostly 3G cellular networks and communication satellites. This research aimed to investigate the performance of precise positioning systems when operating in high-mobility environments. This involves evaluating the performance of both the RTK and PPP techniques using (i) a state-of-the-art dual-frequency GPS receiver and (ii) a low-cost single-frequency GNSS receiver. Additionally, this research evaluates the effectiveness of several operational strategies in reducing the load that correction data transmission places on data communication networks, which may be problematic for future wide-area ITS service deployment. These strategies include the use of different data transmission protocols, different correction data format standards, and correction data transmission at less frequent intervals. A series of field experiments was designed and conducted for each research task. Firstly, the performance of the RTK and PPP techniques was evaluated in both static and kinematic (highway, with speeds exceeding 80 km/h) experiments. RTK solutions achieved an RMS precision of 0.09 to 0.2 m in the static tests and 0.2 to 0.3 m in the kinematic tests, while PPP reported 0.5 to 1.5 m in the static and 1 to 1.8 m in the kinematic tests using the RTKlib software. These RMS precision values could be further improved if better RTK and PPP algorithms were adopted. The test results also showed that RTK may be more suitable for lane-level accuracy vehicle positioning. Professional-grade (dual-frequency) and mass-market grade (single-frequency) GNSS receivers were tested for their RTK performance in static and kinematic modes. The analysis showed that mass-market grade receivers provide good solution continuity, although their overall positioning accuracy is worse than that of professional-grade receivers. In an attempt to reduce the load on the data communication network, we first evaluated the use of different correction data format standards, namely the RTCM version 2.x and RTCM version 3.0 formats. A 24-hour transmission test was conducted to compare network throughput. The results showed that a 66% reduction in network throughput can be achieved by using the newer RTCM version 3.0 format compared with the older RTCM version 2.x format. Secondly, experiments were conducted to examine the use of two data transmission protocols, TCP and UDP, for correction data transmission through the Telstra 3G cellular network.
The performance of each transmission method was analysed in terms of packet transmission latency, packet dropout, packet throughput, packet retransmission rate, etc. The overall network throughput and latency of UDP data transmission were 76.5% and 83.6% of those of TCP data transmission, while the overall accuracy of the positioning solutions remained at the same level. Additionally, due to the nature of UDP transmission, it was found that 0.17% of UDP packets were lost during the kinematic tests, but this loss did not lead to a significant reduction in the quality of the positioning results. The experimental results from the static and kinematic field tests also showed that mobile network communication may be blocked for a couple of seconds, but the positioning solutions can be kept at the required accuracy level by an appropriate setting of the Age of Differential. Finally, we investigated the effect of using less-frequent correction data (transmitted at 1, 5, 10, 15, 20, 30 and 60 second intervals) on the precise positioning system. As the time interval increases, the percentage of ambiguity-fixed solutions gradually decreases, while the positioning error increases from 0.1 to 0.5 m. The results showed that the position accuracy can still be kept at the in-lane level (0.1 to 0.3 m) when using correction data transmitted at intervals of up to 20 seconds.
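As a minimal illustration of the two transport options compared above, the sketch below pushes one correction frame to a rover over TCP and over UDP; the address, port and payload bytes are placeholders and this is not the software used in the thesis.

```python
import socket

ROVER = ("192.0.2.10", 2101)        # placeholder address/port (documentation IP range)
payload = b"\xd3" + b"\x00" * 30    # placeholder bytes standing in for an RTCM 3 frame

# TCP: connection-oriented, retransmits lost packets (no loss, higher latency).
with socket.create_connection(ROVER, timeout=5) as tcp_sock:
    tcp_sock.sendall(payload)

# UDP: connectionless, no retransmission (lower latency, occasional packet loss).
udp_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_sock.sendto(payload, ROVER)
udp_sock.close()
```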

Relevance:

20.00%

Publisher:

Abstract:

The positions of housing demand and supply are not consistent. The Australian situation counters the experience demonstrated in many other parts of the world in the aftermath of the Global Financial Crisis, with residential housing prices proving particularly resilient. A seemingly inexorable housing demand remains a critical issue affecting the socio-economic landscape. Underpinned by high levels of population growth fuelled by immigration, and further buoyed by sustained historically low interest rates, increasing income levels and increased government assistance for first home buyers, this strong level of housing demand ensures that problems related to housing affordability continue almost unabated. A significant but less visible factor impacting housing affordability relates to holding costs. Although only one contributor in the housing affordability matrix, the nature and extent of their impact requires elucidation: for example, the computation and methodology behind the calculation of holding costs varies widely, and in some instances they are completely ignored. In addition, ambiguity exists in terms of the inclusion of the various elements that comprise holding costs, thereby affecting the assessment of their relative contribution. Such anomalies may be explained by considering that assessment is conducted over time in an ever-changing environment. A strong relationship with opportunity cost, which in turn depends inter alia upon prevailing inflation and/or interest rates, adds further complexity. By extending research in the general area of housing affordability, this thesis provides a detailed investigation of the elements related to holding costs, specifically in the context of mid-sized (i.e. between 15 and 200 lots) greenfield residential property developments in South East Queensland. With the dimensions of holding costs and their influence over housing affordability determined, the null hypothesis H0 that holding costs are not passed on can be addressed. Arriving at these conclusions involves the development of robust economic and econometric models which seek to clarify the component impacts of holding cost elements. An explanatory sequential design research methodology has been adopted, whereby the compilation and analysis of quantitative data and the development of an economic model are informed by the subsequent collection and analysis of primarily qualitative data derived from surveying development-related organisations. Ultimately, there are significant policy implications in relation to the framework used in Australian jurisdictions that promote, retain, or otherwise maximise the opportunities for affordable housing.
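As a rough illustration of the opportunity-cost component of holding costs mentioned above (the figures and the simple compounding form are assumptions, not the thesis's model), land of value $V_0$ held undeveloped for $t$ years at an opportunity rate $i$ forgoes approximately

$$HC_{opp} = V_0\bigl[(1+i)^{t} - 1\bigr].$$

For example, land worth $1,000,000 held for two years at 7% ties up roughly $144,900 in forgone returns before any rates, land tax or financing charges are added.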