Abstract:
In this article, we investigate experimentally whether people search optimally and how price promotions influence search behaviour. We implement a sequential search task with exogenous price dispersion in a baseline treatment and introduce discounts in two experimental treatments. We find that search behaviour is roughly consistent with optimal search but also observe some discount biases. If subjects do not know in advance where discounts are offered, the purchase probability is increased by 19 percentage points in shops with discounts, even after controlling for the benefit of the discount and for risk preferences. If consumers know in advance where discounts are given, then the bias is only weakly significant and much smaller (7 percentage points).
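For intuition, optimal sequential price search with recall reduces to a reservation-price rule: keep searching while the best price seen exceeds a threshold that equates the expected saving from one more search with the search cost. A minimal sketch, assuming uniformly distributed prices and a fixed per-search cost (the experiment's actual parameters are not reproduced here):

```python
import random

def reservation_price(a, b, c):
    # For prices ~ U[a, b] with recall, the expected saving from one more
    # search when the best price so far is r equals (r - a)^2 / (2(b - a)).
    # Setting this equal to the search cost c gives the reservation price.
    return a + (2 * c * (b - a)) ** 0.5

def simulate_search(a=1.0, b=2.0, c=0.02, seed=0):
    # Optimal rule: keep sampling prices until one falls at or below r.
    rng = random.Random(seed)
    r = reservation_price(a, b, c)
    best, searches = float("inf"), 0
    while best > r:
        best = min(best, rng.uniform(a, b))
        searches += 1
    return r, best, searches

print(simulate_search())
```

In such a model a discount matters only through the effective price, which is why a purchase bias at discount shops that survives controlling for the discount's benefit points to a deviation from optimal search.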
Abstract:
Evolutionary computation is an effective tool for solving optimization problems. However, its significant computational demand has limited its real-time and on-line applications, especially in embedded systems with limited computing resources, e.g., mobile robots. Heuristic methods such as genetic algorithm (GA)-based approaches have been investigated for robot path planning in dynamic environments. However, research on the simulated annealing (SA) algorithm, another popular evolutionary computation algorithm, for dynamic path planning is still limited, mainly due to its high computational demand. An enhanced SA approach, which integrates two additional mathematical operators and initial path selection heuristics into the standard SA, is developed in this work for robot path planning in dynamic environments with both static and dynamic obstacles. It improves the computing performance of the standard SA significantly while giving an optimal or near-optimal robot path solution, making real-time and on-line applications possible. Using the classic and deterministic Dijkstra algorithm as a benchmark, comprehensive case studies are carried out to demonstrate the performance of the enhanced SA and other SA algorithms in various dynamic path planning scenarios.
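As a point of reference for the standard SA the work builds on, here is a minimal simulated-annealing path planner over a fixed number of waypoints, with circular obstacles handled through a penalty term. The cost weights, cooling schedule, and Gaussian move operator are illustrative assumptions; the enhanced SA's two additional operators and initial-path selection heuristics are not reproduced.

```python
import math
import random

def sa_path(start, goal, obstacles, n_pts=8, iters=3000, t0=1.0, alpha=0.999, seed=0):
    # obstacles: list of (x, y, radius) circles treated as static no-go zones.
    rng = random.Random(seed)
    # Seed with waypoints along the straight line from start to goal.
    cur = [(start[0] + (goal[0] - start[0]) * i / (n_pts + 1),
            start[1] + (goal[1] - start[1]) * i / (n_pts + 1))
           for i in range(1, n_pts + 1)]

    def cost(pts):
        full = [start] + pts + [goal]
        length = sum(math.dist(full[i], full[i + 1]) for i in range(len(full) - 1))
        # Penalise waypoints that fall inside an obstacle.
        penalty = sum(max(0.0, r - math.dist(p, (ox, oy)))
                      for p in pts for ox, oy, r in obstacles)
        return length + 50.0 * penalty

    best, t = list(cur), t0
    cur_c = best_c = cost(cur)
    for _ in range(iters):
        cand = list(cur)
        i = rng.randrange(n_pts)
        cand[i] = (cand[i][0] + rng.gauss(0, t), cand[i][1] + rng.gauss(0, t))
        cand_c = cost(cand)
        # Metropolis acceptance: always take improvements, occasionally take
        # worse moves while the temperature is still high.
        if cand_c < cur_c or rng.random() < math.exp((cur_c - cand_c) / max(t, 1e-9)):
            cur, cur_c = cand, cand_c
            if cur_c < best_c:
                best, best_c = list(cur), cur_c
        t *= alpha
    return [start] + best + [goal], best_c

path, c = sa_path((0, 0), (10, 0), obstacles=[(5, 0, 1.5)])
print(round(c, 2))
```

For a dynamic environment, the same loop would be re-run (or warm-started from the previous path) whenever obstacle positions change, which is where the computational cost of standard SA becomes the bottleneck.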
Abstract:
Collaboration between faculty and librarians is an important topic of discussion and research among academic librarians. These partnerships are vital for enabling students to become lifelong learners through their information literacy education. This research developed an understanding of academic collaborators by analyzing a community college faculty's teaching social networks. A teaching social network, an original term generated in this study, comprises the communications that influence faculty when they design and deliver their courses. The communication may be formal (e.g., through scholarly journals and professional development activities) or informal (e.g., through personal communication) and flows through the elements of the network. Examples of the elements of a teaching social network include department faculty, administration, librarians, professional development, and students. This research asked 'What is the nature of faculty's teaching social networks and what are the implications for librarians?' This study moves forward the existing research on collaboration, information literacy, and social network analysis. It provides both faculty and librarians with added insight into their existing and potential relationships. This research was undertaken using mixed methods. Social network analysis was the quantitative data collection methodology and the interview method was the qualitative technique. For the social network analysis data, a survey was sent to full-time faculty at Las Positas College, a community college in California. The survey gathered the data and described the teaching social networks for faculty with respect to their teaching methods and the content they taught. Semi-structured interviews were conducted following the survey with a sub-set of survey respondents to understand why specific elements were included in their teaching social networks and to learn of ways for librarians to become an integral part of the teaching social networks. The majority of the faculty respondents were moderately influenced by the elements of their network, although most were only weakly influenced by those elements with respect to the content they taught. The elements with the most influence on both teaching methods and content taught were students, department faculty, professional development, and former graduate professors and coursework. The elements with the least influence on both aspects were public or academic librarians, and social media. The most popular roles for the elements were conversations about teaching, sharing ideas, tips for teaching, insights into teaching, suggestions for ways of teaching, and how to engage students. Librarians weakly influenced faculty in both their teaching methods and the content they taught. The motivating factors for collaboration with librarians were that students learned how to research, students' research projects improved, faculty saved time by having librarians provide the instruction to students, and faculty built strong working relationships with librarians. The challenges of collaborating with librarians were the inadequate teaching techniques used when librarians taught research orientations, and a lack of time. Ways librarians can become more integral in faculty's teaching social networks included: more workshops for faculty, more proactive interaction with faculty, and more one-on-one training sessions for faculty.
Recommendations for librarians from this study included: developing a strong rapport with faculty; building information literacy services from the faculty's point of view rather than the librarian's; using staff development funding to attend conferences and workshops to improve their teaching; developing more training sessions for faculty; increasing the marketing of librarians' instructional services; and seeking grant opportunities to increase funding for the library. In addition, librarians and faculty should review the definitions of information literacy and move from a skills-based interpretation to a learning process.
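To make the survey-to-network step concrete, here is a minimal sketch of aggregating influence ratings into a weighted bipartite graph with networkx; the faculty IDs and ratings below are invented, and only the element names follow the study:

```python
import networkx as nx

# Hypothetical survey rows: (faculty, element, influence rating 0 = none .. 3 = strong).
responses = [
    ("f1", "students", 3), ("f1", "department faculty", 2), ("f1", "librarians", 1),
    ("f2", "students", 3), ("f2", "professional development", 2), ("f2", "librarians", 1),
    ("f3", "students", 2), ("f3", "social media", 1), ("f3", "librarians", 0),
]

# Bipartite faculty-element graph; edge weight is the reported influence.
G = nx.Graph()
G.add_weighted_edges_from(responses)

# Weighted degree ranks elements by total reported influence, mirroring the
# study's comparison of most and least influential network elements.
elements = sorted({e for _, e, _ in responses})
for element, total in sorted(G.degree(elements, weight="weight"),
                             key=lambda kv: -kv[1]):
    print(f"{element:26s} total influence {total}")
```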
Abstract:
Although transit travel time variability is essential for understanding the deterioration of reliability, optimising transit schedules, and modelling route choice, it has not attracted enough attention in the literature. This paper proposes public transport-oriented definitions of travel time variability and explores the distributions of public transport travel time using Transit Signal Priority data. First, definitions of public transport travel time variability are established by extending the common definitions of variability in the literature and by using route and service data of public transport vehicles. Second, the paper explores the distribution of public transport travel time. A new approach for analysing the distributions, involving all transit vehicles as well as vehicles from a specific route, is proposed. The Lognormal distribution is revealed as the best descriptor of public transport travel times from the same route and service. The methods described in this study could be of interest to both traffic managers and transit operators for planning and managing transit systems.
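A minimal sketch of the distribution-fitting step, using scipy with synthetic data standing in for the Transit Signal Priority records:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Stand-in for observed route-level travel times (seconds); real values
# would come from Transit Signal Priority logs for a single route/service.
times = rng.lognormal(mean=6.3, sigma=0.25, size=500)

# Fit a lognormal with the location pinned at zero (travel times are positive).
shape, loc, scale = stats.lognorm.fit(times, floc=0)

# One-sample Kolmogorov-Smirnov test as a rough goodness-of-fit check.
ks = stats.kstest(times, "lognorm", args=(shape, loc, scale))
print(f"sigma={shape:.3f}, median={scale:.1f}s, KS p-value={ks.pvalue:.3f}")
```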
Abstract:
Cloud computing is an emerging computing paradigm in which IT resources are provided over the Internet as a service to users. One such service offered through the Cloud is Software as a Service, or SaaS. SaaS can be delivered in a composite form, consisting of a set of application and data components that work together to deliver higher-level functional software. SaaS is receiving substantial attention today from both software providers and users, and analyst firms predict a positive future market for it. This raises new challenges for SaaS providers managing SaaS, especially in large-scale data centres like the Cloud. One of the challenges is providing management of Cloud resources for SaaS that guarantees SaaS performance while optimising resource use. Extensive research on the resource optimisation of Cloud services has not yet addressed the challenges of managing resources for composite SaaS. This research addresses this gap by focusing on three new problems of composite SaaS: placement, clustering and scalability. The overall aim is to develop efficient and scalable mechanisms that facilitate the delivery of high-performance composite SaaS for users while optimising the resources used. All three problems are characterised as highly constrained, large-scale and complex combinatorial optimisation problems. Therefore, evolutionary algorithms are adopted as the main technique in solving these problems. The first research problem concerns how a composite SaaS is placed onto Cloud servers to optimise its performance while satisfying the SaaS resource and response time constraints. Existing research on this problem often ignores the dependencies between components and considers placement of a homogeneous type of component only. A precise formulation of the composite SaaS placement problem is presented. A classical genetic algorithm and two versions of cooperative co-evolutionary algorithms are designed to manage the placement of heterogeneous types of SaaS components together with their dependencies, requirements and constraints. Experimental results demonstrate the efficiency and scalability of these new algorithms. In the second problem, SaaS components are assumed to be already running on Cloud virtual machines (VMs). However, due to the dynamic environment of a Cloud, the current placement may need to be modified. Existing techniques focused mostly on the infrastructure level rather than the application level. This research addressed the problem at the application level by clustering suitable components onto VMs to optimise the resources used and to maintain SaaS performance. Two versions of grouping genetic algorithms (GGAs) are designed to cater for the structural grouping of a composite SaaS. The first GGA used a repair-based method while the second used a penalty-based method to handle the problem constraints. The experimental results confirmed that the GGAs always produced a better reconfiguration placement plan compared with a common heuristic for clustering problems. The third research problem deals with the replication or deletion of SaaS instances in coping with the SaaS workload. Determining a scaling plan that minimises the resources used while maintaining SaaS performance is a critical task. Additionally, the problem consists of constraints and interdependencies between components, making solutions even more difficult to find.
A hybrid genetic algorithm (HGA) was developed to solve this problem by exploring the problem search space through its genetic operators and fitness function to determine the SaaS scaling plan. The HGA also uses the problem's domain knowledge to ensure that the solutions meet the problem's constraints and achieve its objectives. The experimental results demonstrated that the HGA consistently outperformed a heuristic algorithm by achieving a low-cost scaling and placement plan. This research has identified three significant new problems for composite SaaS in the Cloud. Various evolutionary algorithms have been developed to address these problems, contributing to the evolutionary computation field. The algorithms provide solutions for efficient resource management of composite SaaS in the Cloud, resulting in a low total cost of ownership for users while guaranteeing SaaS performance.
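For a flavour of the penalty-based constraint handling mentioned for the clustering problem, here is a toy genetic algorithm that assigns dependent components to capacity-limited servers; the sizes, dependencies, capacities, and penalty weights are invented, and the thesis's cooperative co-evolutionary, grouping, and hybrid variants are considerably richer:

```python
import random

# Toy composite-SaaS placement: assign each component to a server so that
# communication between dependent components split across servers is
# minimised while respecting server capacity.
SIZES = [2, 3, 1, 4, 2]                   # resource demand per component
DEPS = [(0, 1), (1, 2), (2, 3), (3, 4)]   # communicating component pairs
CAP = [5, 5, 5]                           # capacity of each server
LINK = 10                                 # cost per dependency split across servers

def fitness(assign):
    comm = sum(LINK for a, b in DEPS if assign[a] != assign[b])
    load = [0] * len(CAP)
    for comp, srv in enumerate(assign):
        load[srv] += SIZES[comp]
    overload = sum(max(0, l - c) for l, c in zip(load, CAP))
    return comm + 100 * overload          # penalty-based constraint handling

def ga(pop_size=40, gens=200, seed=1):
    rng = random.Random(seed)
    n, m = len(SIZES), len(CAP)
    pop = [[rng.randrange(m) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 2]      # truncation selection
        children = []
        while len(children) < pop_size - len(elite):
            p1, p2 = rng.sample(elite, 2)
            cut = rng.randrange(1, n)     # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.2:        # point mutation
                child[rng.randrange(n)] = rng.randrange(m)
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

best = ga()
print(best, fitness(best))
```

The penalty term here plays the role the thesis assigns to its penalty-based GGA: infeasible assignments are allowed into the population but are heavily disadvantaged, steering the search toward feasible, low-cost placements.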
Abstract:
Rail operators recognize a need to increase ridership in order to improve the economic viability of rail service, and to magnify the role that rail travel plays in making cities feel liveable. This study extends previous research that used cluster analysis with a small sample of rail passengers to identify five salient perspectives of rail access (Zuniga et al., 2013). In this project stage, we used correlation techniques to determine how those perspectives would resonate with two larger study populations: a relatively homogeneous sample of university students in Brisbane, Australia, and a diverse sample of rail passengers in Melbourne, Australia. Findings from Zuniga et al. (2013) described a complex typology of current passengers that was based on respondents' subjective attitudes and perceptions rather than the socio-demographic or travel behaviour characteristics commonly used for segmentation analysis. The typology included five qualitative perspectives of rail travel. Based on the transport accessibility literature, we expected to find that perspectives from that study emphasizing physical access to rail stations would be shared by current and potential rail passengers who live further from rail stations. Other perspectives might be shared among respondents who live nearby, since the relevance of distance would be diminished. The population living nearby would thus represent an important target group for increasing ridership, since making rail travel accessible to them does not require expansion of costly infrastructure such as new lines or stations. By measuring the prevalence of each perspective in a larger respondent pool, results from this study provide insight into the typical socio-demographic and travel behaviour characteristics that correspond to each perspective of intra-urban rail travel. In several instances, our quantitative findings reinforced Zuniga et al.'s (2013) qualitative descriptions of passenger types, further validating the original research. This work may directly inform rail operators' approach to increasing ridership through marketing and improvements to service quality and station experience. Operators in other parts of Australia and internationally may also choose to replicate the study locally, to fine-tune understanding of diverse customer bases. Developing regional and international collaboration would provide additional opportunities to evaluate and benchmark service and station amenities as they address the various access dimensions.
Abstract:
Crash statistics that include the blood alcohol concentration (BAC) of vehicle operators reveal that crash-involved motorcyclists are over-represented at low BACs (e.g., ≤0.05%). This riding simulator study compared riding performance and hazard response under three low-dose alcohol conditions (sober, 0.02% BAC, 0.05% BAC). Forty participants (20 novice, 20 experienced) completed simulated rides in urban and rural scenarios while responding to a safety-critical peripheral detection task (PDT). Results showed a significant increase in the standard deviation of lateral position in the urban scenario and in PDT reaction time in the rural scenario under 0.05% BAC compared with zero alcohol. Participants were most likely to collide with an unexpected pedestrian in the urban scenario at 0.02% BAC, with novice participants at a greater relative risk than experienced riders. Novices chose to ride faster than experienced participants in the rural scenario regardless of BAC. Not all results were significant, underscoring the complexity of the effects of low-dose alcohol on riding performance, which needs further research. The results of this simulator study provide some support for a legal BAC limit for motorcyclists below 0.05%.
Abstract:
The Lake Wivenhoe Integrated Wireless Sensor Network is conceptually similar to traditional SCADA monitoring and control approaches. However, it is applied in an open system, using wireless devices to monitor processes that affect water quality at high spatial and temporal frequency. This monitoring helps scientists better understand the drivers of key processes that influence water quality and provides operators with an early warning system if below-standard water enters the reservoir. Both aspects improve the safety and efficiency of drinking water delivery to end users.
Abstract:
The current state of knowledge in relation to first flush does not provide a clear understanding of the role of rainfall and catchment characteristics in influencing this phenomenon. This is attributed to inconsistent findings from research studies, due to the unsatisfactory selection of first flush indicators and how first flush is defined. The research study discussed in this thesis provides the outcomes of a comprehensive analysis of the influence of rainfall and catchment characteristics on first flush behaviour in residential catchments. Two sets of first flush indicators are introduced in this study. These indicators were selected to explain, in a systematic manner, the characteristics associated with first flush. Stormwater samples and rainfall-runoff data were collected and recorded at stormwater monitoring stations established at three urban catchments at Coomera Waters, Gold Coast, Australia. In addition, historical data were used to support the data analysis. Three water quality parameters were analysed, namely, total suspended solids (TSS), total phosphorus (TP) and total nitrogen (TN). The data analyses were primarily undertaken using the multi-criteria decision-making methods PROMETHEE and GAIA. Based on the data obtained, the pollutant load distribution curve (LV) was determined for the individual rainfall events and pollutant types. Accordingly, two sets of first flush indicators were derived from the curve: the cumulative load wash-off for every 10% of runoff volume from the beginning of the event (interval first flush indicators, LV), and the actual pollutant load wash-off during each 10% increment in runoff volume (section first flush indicators, P). First flush behaviour showed significant variation with pollutant type. TSS and TP showed consistent first flush behaviour. However, the dissolved fraction of TN showed significant differences from TSS and TP first flush, while particulate TN showed similarities. Wash-off of TSS, TP and particulate TN during the first 10% of the runoff volume showed no influence from the corresponding rainfall intensity. This was attributed to the wash-off of weakly adhered solids on the catchment surface, referred to as the "short term pollutants" or "weakly adhered solids" load. However, wash-off after 10% of the runoff volume showed dependency on the rainfall intensity. This is attributed to the wash-off of strongly adhered solids, which are exposed once the weakly adhered solids diminish. The wash-off process was also found to depend on rainfall depth at the end part of the event, as the strongly adhered solids are loosened by the impact of rainfall in the earlier part of the event. Events with high intensity rainfall bursts after 70% of the runoff volume did not demonstrate first flush behaviour. This suggests that rainfall pattern plays a critical role in the occurrence of first flush. The rainfall intensity (with respect to the rest of the event) that produces 10% to 20% of the runoff volume plays an important role in defining the magnitude of the first flush. Events can demonstrate a high magnitude first flush when the rainfall intensity occurring between 10% and 20% of the runoff volume is comparatively high, while low rainfall intensities during this period produce a low magnitude first flush. For events with first flush, the phenomenon is clearly visible up to 40% of the runoff volume.
This contradicts the common definition that first flush only exists if, for example, 80% of the pollutant mass is transported in the first 30% of the runoff volume. First flush behaviour for TN is different compared to TSS and TP. Apart from rainfall characteristics, the composition and the availability of TN on the catchment also play an important role in first flush. The analysis confirmed that events with low rainfall intensity can produce a high magnitude first flush for the dissolved fraction of TN, while high rainfall intensity produces a low dissolved TN first flush. This is attributed to the source-limiting behaviour of dissolved TN wash-off, where there is high wash-off during the initial part of a rainfall event irrespective of the intensity. However, for particulate TN, the influence of rainfall intensity on first flush characteristics is similar to TSS and TP. The data analysis also confirmed that first flush can occur as a high magnitude first flush, a low magnitude first flush, or not at all. Investigation of the influence of catchment characteristics on first flush found that the key factors that influence the phenomenon are the location of the pollutant source, the spatial distribution of the pervious and impervious surfaces in the catchment, the drainage network layout and the slope of the catchment. This confirms that the first flush phenomenon cannot be evaluated based on a single or a limited set of parameters, as a number of catchment characteristics should be taken into account. Catchments where the pollutant source is located close to the outlet, with a high fraction of road surfaces, short travel times to the outlet and steep slopes, can produce a high wash-off load during the first 50% of the runoff volume. Rainfall characteristics have a comparatively dominant impact on the wash-off process compared to catchment characteristics. In addition, pollutant characteristics should also be taken into account in designing stormwater treatment systems due to the different wash-off behaviours. Analysis outcomes confirmed that there is a high TSS load during the first 20% of the runoff volume, followed by TN, which can extend up to 30% of the runoff volume. In contrast, a high TP load can exist during the initial and final parts of a rainfall event. This is related to the composition of TP available for wash-off.
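Both indicator sets can be read directly off the cumulative load-versus-volume (LV) curve. A minimal sketch with an invented within-event series standing in for the Coomera Waters data:

```python
import numpy as np

# Hypothetical within-event time series (equal time steps): incremental
# runoff volume and pollutant load; real series would come from the
# stormwater monitoring stations.
runoff = np.array([1.0, 2.0, 3.0, 3.0, 2.5, 2.0, 1.5, 1.0, 0.5, 0.5])
load = np.array([4.0, 5.0, 4.0, 2.5, 1.5, 1.0, 0.8, 0.6, 0.4, 0.2])

v = np.cumsum(runoff) / runoff.sum()   # cumulative runoff-volume fraction
l = np.cumsum(load) / load.sum()       # cumulative load fraction (the LV curve)

# Interval indicators (LV): cumulative load at each 10% runoff-volume step.
grid = np.arange(0.1, 1.01, 0.1)
LV = np.interp(grid, v, l)
# Section indicators (P): load washed off within each 10% runoff increment.
P = np.diff(np.concatenate(([0.0], LV)))

for g, lv, p in zip(grid, LV, P):
    print(f"{g:.0%} runoff: LV={lv:.2f}, P={p:.2f}")
```

An LV value above the 1:1 line in the early increments (e.g., LV well above 0.1 at 10% of runoff volume) signals first flush; the section indicators P show in which volume increment the wash-off was concentrated.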
Abstract:
In Australia, the proportion of the population aged 65 years and over reached 13.5% in 2010 and is expected to increase steadily to around 20% by the year 2056 [Australian Bureau of Statistics (ABS), 2010], creating what has been regarded as a looming crisis in how to house and care for older people. As a viable accommodation option, the retirement village is widely accepted as a means of promoting and enhancing independence, choice and quality of life for older people. Recent research by Barker (2010) indicates that the current and potential residents of retirement villages are generally very conscious of resource consumption and would like their residences and community to be more sustainable. The aim of this study was to understand the perceptions of older people toward sustainability ideas and to identify the sustainable practices involved in retirement villages to improve the wellbeing of residents. Multiple research methods, including content analysis, a questionnaire survey, interviews and case studies, were used for this purpose. The results indicate that most retirement village residents understand and recognize the importance of sustainability in their lifestyle. However, their sustainability requirements need to be supported and enhanced by the provision of affordable sustainability features. Additionally, many retirement village developers and operators realize the importance of providing a sustainable retirement community for their residents, and that a sustainable retirement village (that is environmentally friendly, affordable, and improves social engagement) can be achieved through the consideration of project planning, design, construction, and operations throughout the project life cycle. The clear shift from healthcare to lifestyle-focused services in the recent development of retirement villages, together with the increasing number of aged people moving into retirement villages (Simpson and Cheney, 2007), has raised awareness of the need for the retirement village industry to provide a sustainable community for older people to improve their quality of life after retirement. This is the first critical study of sustainable development in the retirement village industry and its potential in addressing the housing needs of older people, contributing to improving the quality of life of older people, with direct and immediate significance to the community as a whole.
Abstract:
The complex [1,2-bis(di-tert-butylphosphanyl)ethane-[kappa]2P,P']diiodidonickel(II), [NiI2(C18H40P2)] or (dtbpe-[kappa]2P)NiI2 [dtbpe is 1,2-bis(di-tert-butylphosphanyl)ethane], is bright blue-green in the solid state and in solution but, contrary to the structure predicted for a blue or green nickel(II) bis(phosphine) complex, it is found to be close to square planar in the solid state. The solution structure is deduced to be similar, because the optical spectra measured in solution and in the solid state contain similar absorptions. In solution at room temperature, no 31P{1H} NMR resonance is observed, but the very small solid-state magnetic moment at temperatures down to 4 K indicates that the weak paramagnetism of this nickel(II) complex can be ascribed to temperature-independent paramagnetism (TIP), and that the complex has no unpaired electrons. The red [1,2-bis(di-tert-butylphosphanyl)ethane-[kappa]2P,P']dichloridonickel(II), [NiCl2(C18H40P2)] or (dtbpe-[kappa]2P)NiCl2, is very close to square planar and very weakly paramagnetic in the solid state and in solution, while the maroon [1,2-bis(di-tert-butylphosphanyl)ethane-[kappa]2P,P']dibromidonickel(II), [NiBr2(C18H40P2)] or (dtbpe-[kappa]2P)NiBr2, is isostructural with the diiodide in the solid state, and displays paramagnetism intermediate between that of the dichloride and the diiodide in the solid state and in solution. Density functional calculations demonstrate that distortion from an ideal square plane for these complexes occurs on a flat potential energy surface. The calculations reproduce the observed structures and colours, and explain the trends observed for these and similar complexes. Although the theoretical investigation identified magnetic-dipole-allowed excitations that are characteristic of TIP, theory predicts the molecules to be diamagnetic.
Abstract:
Consensual stereotypes of some groups are relatively accurate, whereas others are not. Previous work suggesting that national character stereotypes are inaccurate has been criticized on several grounds. In this article we (a) provide arguments for the validity of assessed national mean trait levels as criteria for evaluating stereotype accuracy and (b) report new data on national character in 26 cultures from descriptions (N = 3323) of the typical male or female adolescent, adult, or old person in each. The average ratings were internally consistent and converged with independent stereotypes of the typical culture member, but were weakly related to objective assessments of personality. We argue that this conclusion is consistent with the broader literature on the inaccuracy of national character stereotypes.
Abstract:
Diesel particulate matter (DPM), in particular, has been likened, in a somewhat inflammatory manner, to the 'next asbestos'. From the business change perspective, there are three areas holding the industry back from fully engaging with the issue: 1. There is no real feedback loop, in any operational sense, to assess the impact of investment in or application of controls to manage diesel emissions. 2. DPM particles are getting ever smaller and more numerous, but there is no practical way of measuring them to regulate them in the field; mass, the current basis of regulation, is becoming less and less relevant. 3. Diesel emissions management is generally viewed wholly as a cost, yet there are significant areas of benefit available from good management. This paper discusses a feedback approach to address these three areas and move the industry forward. Six main areas of benefit from providing a feedback loop by continuously monitoring diesel emissions have been identified: 1. Condition-based maintenance: emissions change instantaneously if engine condition changes. 2. Operator performance: an operator can use a lot more fuel for little incremental work output through poor technique or discipline. 3. Vehicle utilisation: operating hours achieved and the ratio of idling to time under power affect the proportion of emissions produced with no economic value. 4. Fuel efficiency: this allows visibility into other contributing configuration and environmental factors for the vehicle. 5. Emission rates: these allow scope to directly address the required ratio of ventilation to diesel emissions. 6. Total carbon emissions: for NGER-type reporting requirements, calculating the emissions individually from each vehicle rather than just reporting on fuel delivered to a site.
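As a sketch of the condition-based maintenance benefit (point 1), here is a rolling-baseline drift check over a DPM-per-litre signal; the metric, window length, and threshold are illustrative assumptions rather than the paper's method:

```python
from statistics import mean, stdev

def flag_engine_drift(dpm_per_litre, window=20, k=3.0):
    # Flag readings that drift more than k standard deviations above a
    # rolling baseline of DPM emitted per litre of fuel, as a crude proxy
    # for deteriorating engine condition.
    alerts = []
    for i in range(window, len(dpm_per_litre)):
        base = dpm_per_litre[i - window:i]
        mu, sigma = mean(base), stdev(base)
        if dpm_per_litre[i] > mu + k * max(sigma, 1e-9):
            alerts.append(i)
    return alerts

readings = [1.0] * 40 + [1.05, 1.4, 1.5]   # simulated drift at the end
print(flag_engine_drift(readings))
```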
Abstract:
An area of property valuation that has attracted less attention than other property markets over the past 20 years is the mining and extractive industries. These operations can range from small operators on leased or private land to multinational companies. Although there are a number of national mining standards that indicate the types of valuation methods that can be adopted for this asset class, these standards do not specify how or when these methods are best suited to particular mine operations. The RICS guidance notes and the draft IVSC guidance notes also advise on the various valuation methods that can be used to value mining properties; but, again, they do not specify which methods should be applied where and when. One of the methods supported by these standards and guidelines is the market approach. This paper will carry out an analysis of all mine, extractive industry and waste disposal site sale transactions in Queensland, Australia, a major world mining centre, to determine whether a market valuation approach such as direct comparison is actually suitable for the valuation of a mine or extractive industry. The analysis will cover the period 1984 to 2011 and will include sale transactions for minerals, petroleum and gas, waste disposal sites, clay, sand and stone. Based on this analysis, the suitability of direct comparison for valuation purposes in this property sector will be tested.
Abstract:
This chapter gives an overview of the smartphone app economy and its various constituent ecosystems. It examines the role of the app store model and the proliferation of mobile apps in the shift from value chains controlled by network operators and handset manufacturers, to value networks – or ecosystems – focused around operating systems and apps. It outlines some of the benefits and disadvantages for developers of the app store model for remuneration and distribution. The chapter concludes with a discussion of recent research on the size and employment effects of the app economy.