Abstract:
The main objective of this paper is to detail the development of a feasible hardware design based on Evolutionary Algorithms (EAs) to determine flight path planning for Unmanned Aerial Vehicles (UAVs) navigating terrain with obstacle boundaries. The design architecture includes the hardware implementation of Light Detection And Ranging (LiDAR) terrain and EA population memories within the hardware, as well as the EA search and evaluation algorithms used in the optimizing stage of path planning. A synthesisable Very-high-speed integrated circuit Hardware Description Language (VHDL) implementation of the design was developed, for realisation on a Field Programmable Gate Array (FPGA) platform. Simulation results show significant speedup compared with an equivalent software implementation written in C++, suggesting that the present approach is well suited for UAV real-time path planning applications.
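As a rough illustration of the optimisation loop described above, the EA search-and-evaluation stage can be sketched in software. The grid model, fitness function, operators, and all parameters below are illustrative assumptions for a minimal Python analogue, not the paper's VHDL/FPGA design.

```python
import random

random.seed(1)  # reproducible runs

GRID = 16                                    # terrain is a GRID x GRID cell map
OBSTACLES = {(5, y) for y in range(3, 13)}   # a simple wall of blocked cells
START, GOAL = (0, 0), (15, 15)
N_WAYPOINTS, POP_SIZE, GENERATIONS = 6, 40, 200

def random_path():
    return [(random.randrange(GRID), random.randrange(GRID))
            for _ in range(N_WAYPOINTS)]

def fitness(path):
    """Manhattan path length plus a heavy penalty per obstacle hit (lower is better)."""
    pts = [START] + path + [GOAL]
    length = sum(abs(a[0] - b[0]) + abs(a[1] - b[1]) for a, b in zip(pts, pts[1:]))
    penalty = sum(100 for p in path if p in OBSTACLES)
    return length + penalty

def mutate(path):
    """Replace one randomly chosen waypoint."""
    child = list(path)
    child[random.randrange(len(child))] = (random.randrange(GRID),
                                           random.randrange(GRID))
    return child

def evolve():
    pop = [random_path() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop.sort(key=fitness)
        survivors = pop[:POP_SIZE // 2]       # truncation selection
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(POP_SIZE - len(survivors))]
    return min(pop, key=fitness)

best = evolve()
```

Selection and mutation here are deliberately simple; the hardware EA may use different operators, with the terrain and population held in on-chip memories as the abstract describes.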
Abstract:
Many cities worldwide face the prospect of major transformation as the world moves towards a global information order. In this new era, urban economies are being radically altered by dynamic processes of economic and spatial restructuring. The result is the creation of ‘informational cities’ or, under their newer and more popular name, ‘knowledge cities’. For the past two centuries, social production was primarily understood and shaped by neo-classical economic thought that recognized only three factors of production: land, labor and capital. Knowledge, education, and intellectual capacity were secondary, if not incidental, factors. Human capital was assumed to be either embedded in labor or just one of numerous categories of capital. In recent decades, it has become apparent that knowledge is sufficiently important to deserve recognition as a fourth factor of production. Knowledge and information and the social and technological settings for their production and communication are now seen as keys to development and economic prosperity. The rise of knowledge-based opportunity has, in many cases, been accompanied by a concomitant decline in traditional industrial activity. The replacement of physical commodity production by more abstract forms of production (e.g. information, ideas, and knowledge) has, however paradoxically, reinforced the importance of central places and led to the formation of knowledge cities. Knowledge is produced, marketed and exchanged mainly in cities. Therefore, knowledge cities aim to assist decision-makers in making their cities compatible with the knowledge economy and thus able to compete with other cities. Knowledge cities enable their citizens to foster knowledge creation, knowledge exchange and innovation. They also encourage the continuous creation, sharing, evaluation, renewal and update of knowledge. To compete nationally and internationally, cities need knowledge infrastructures (e.g. 
universities, research and development institutes); a concentration of well-educated people; technological, mainly electronic, infrastructure; and connections to the global economy (e.g. international companies and finance institutions for trade and investment). Moreover, they must possess the people and things necessary for the production of knowledge and, as importantly, function as breeding grounds for talent and innovation. The economy of a knowledge city creates high value-added products using research, technology, and brainpower. Both the private and public sectors value knowledge, spend money on its discovery and dissemination and, ultimately, harness it to create goods and services. Although many cities call themselves knowledge cities, currently only a few cities around the world (e.g., Barcelona, Delft, Dublin, Montreal, Munich, and Stockholm) have earned that label. Many other cities aspire to the status of knowledge city through urban development programs that target knowledge-based urban development. Examples include Copenhagen, Dubai, Manchester, Melbourne, Monterrey, Singapore, and Shanghai.

Knowledge-Based Urban Development

To date, the development of most knowledge cities has proceeded organically as a dependent and derivative effect of global market forces. Urban and regional planning has responded slowly, and sometimes not at all, to the challenges and the opportunities of the knowledge city. That is changing, however. Knowledge-based urban development potentially brings both economic prosperity and a sustainable socio-spatial order. Its goal is to produce and circulate abstract work. The globalization of the world in the last decades of the twentieth century was a dialectical process. On the one hand, as the tyranny of distance was eroded, economic networks of production and consumption were constituted at a global scale. At the same time, spatial proximity remained as important as ever, if not more so, for knowledge-based urban development. 
Mediated by information and communication technology, personal contact, and the medium of tacit knowledge, organizational and institutional interactions are still closely associated with spatial proximity. The clustering of knowledge production is essential for fostering innovation and wealth creation. The social benefits of knowledge-based urban development extend beyond aggregate economic growth. On the one hand is the possibility of a particularly resilient form of urban development secured in a network of connections anchored at local, national, and global coordinates. On the other hand, quality of place and life, defined by the level of public services (e.g. health and education) and by the conservation and development of the cultural, aesthetic and ecological values that give cities their character and attract or repel the creative class of knowledge workers, is a prerequisite for successful knowledge-based urban development. The goal is a secure economy in a human setting: in short, smart growth or sustainable urban development.
Abstract:
Aims: Driving Under the Influence (DUI) enforcement can be a broad screening mechanism for alcohol and other drug problems. The current response to DUI is focused on using mechanical means to prevent inebriated persons from driving, with little attention to the underlying substance abuse problems. ---------- Methods: This is a secondary analysis of an administrative dataset of over 345,000 individuals who entered Texas substance abuse treatment between 2005 and 2008. Of these, 36,372 were either on DUI probation, referred to treatment by probation, or had a DUI arrest in the past year. The DUI offenders were compared on demographic characteristics, substance use patterns, and levels of impairment with those who were not DUI offenders, and first DUI offenders were compared with those with more than one past-year offense. t-tests and chi-square tests were used to determine significance. ---------- Results: DUI offenders were more likely to be employed, to have a problem with alcohol, to report more past-year arrests for any offense, to be older, and to have used alcohol and drugs longer than the non-DUI clients, who reported higher ASI scores and were more likely to use daily. Those with one past-year DUI arrest were more likely to have problems with drugs other than alcohol and were less impaired than those with two or more arrests, based on their ASI scores and daily use. Non-DUI clients reported higher levels of mood disorders than DUIs, but there was no difference in their diagnosis of anxiety. Similar patterns were observed between those with one or multiple DUI arrests. ---------- Conclusion: Although first-time DUIs were not as impaired as non-DUI clients, their levels of impairment were sufficient to warrant treatment. Screening and brief intervention at arrest for all DUI offenders, and treatment in combination with abstinence monitoring, could decrease future recidivism.
Abstract:
Autonomous underwater vehicles (AUVs) are increasingly used in both military and civilian applications. These vehicles are limited mainly by the intelligence we give them and the life of their batteries. Research is active to extend vehicle autonomy in both respects. Our intent is to give the vehicle the ability to adapt its behavior under different mission scenarios (emergency maneuvers versus long-duration monitoring). This involves a search for optimal trajectories minimizing time, energy, or a combination of both. Despite some success stories in AUV control, optimal control is still a very underdeveloped area. Adaptive control research has contributed to cost minimization problems, but vehicle design has been the driving force for advancement in optimal control research. We look to advance the development of optimal control theory by expanding the motions along which AUVs travel. Traditionally, AUVs have taken the role of performing long data-gathering missions in the open ocean with little to no interaction with their surroundings, MacIver et al. (2004). The AUV is used to find the shipwreck, and the remotely operated vehicle (ROV) handles the exploration up close. AUV mission profiles of this sort are best suited to a torpedo-shaped AUV, Bertram and Alvarez (2006), since straight lines and minimal (0 deg - 30 deg) angular displacements are all that are necessary to perform the transects and grid lines for these applications. However, the torpedo-shaped AUV lacks the ability to perform low-speed maneuvers in cluttered environments, such as autonomous exploration close to the seabed and around obstacles, MacIver et al. (2004). Thus, we consider an agile vehicle capable of movement in six degrees of freedom without any preference of direction.
Abstract:
This paper discusses control strategies adapted for practical implementation and efficient motion of underwater vehicles. These trajectories are piecewise constant thrust arcs with few actuator switchings. We provide a numerical algorithm that computes the time-efficient trajectories parameterized by the switching times. We discuss both the theoretical analysis and experimental implementation results.
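The switching-time parameterisation can be illustrated on a toy one-dimensional double integrator, an assumption made for illustration rather than the underwater-vehicle model of the paper. For a rest-to-rest transfer of distance d under bounded thrust |u| <= u_max, the time-optimal thrust is piecewise constant with a single switching:

```python
import math

# Rest-to-rest transfer of distance d on x'' = u, |u| <= u_max:
# bang-bang control with one switching at t_s = sqrt(d / u_max),
# total manoeuvre time 2 * t_s.

def simulate(d, u_max, t_s, dt=1e-4):
    """Integrate x'' = u with u = +u_max before t_s and -u_max after."""
    x, v, t = 0.0, 0.0, 0.0
    while t < 2 * t_s:
        u = u_max if t < t_s else -u_max
        v += u * dt      # semi-implicit Euler step
        x += v * dt
        t += dt
    return x, v

d, u_max = 2.0, 0.5
t_s = math.sqrt(d / u_max)       # analytic switching time
x_final, v_final = simulate(d, u_max, t_s)
```

Simulating with the analytic switching time brings the vehicle to the target distance with (approximately) zero final velocity; a numerical algorithm of the kind described above would instead search over the switching times for the full vehicle dynamics.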
Abstract:
In this paper, we are concerned with the practical implementation of time optimal numerical techniques on underwater vehicles. We briefly introduce the model of underwater vehicle we consider and present the parameters for the test bed ODIN (Omni-Directional Intelligent Navigator). Then we explain the numerical method used to obtain time optimal trajectories with a structure suitable for the implementation. We follow this with a discussion on the modifications to be made considering the characteristics of ODIN. Finally, we illustrate our computations with some experimental results.
Abstract:
The Internet presents a constantly evolving frontier for criminology and policing, especially in relation to online predators – paedophiles operating within the Internet for safer access to children, child pornography and networking opportunities with other online predators. The goals of this qualitative study are to undertake behavioural research – identifying personality types and archetypes of online predators and comparing and contrasting them with behavioural profiles and other psychological research on offline paedophiles and sex offenders. It is also an endeavour to gather intelligence on the technological utilisation of online predators and conduct observational research on the social structures of online predator communities. These goals were achieved through the covert monitoring and logging of public activity within four Internet Relay Chat (IRC) chatrooms themed around child sexual abuse, located on the Undernet network. Five days of monitoring were conducted on these four chatrooms, from Wednesday 1 to Sunday 5 April 2009; this raw data was then collated and analysed. The analysis identified four personality types – the gentleman predator, the sadist, the businessman and the pretender – and eight archetypes consisting of the groomers, dealers, negotiators, roleplayers, networkers, chat requestors, posters and travellers. The characteristics and traits of these personality types and archetypes, which were extracted from the literature dealing with offline paedophiles and sex offenders, are detailed and contrasted against the online sexual predators identified within the chatrooms, revealing many similarities and interesting differences, particularly with the businessman and pretender personality types. 
These personality types and archetypes were illustrated by selecting users who displayed the appropriate characteristics and tracking them through the four chatrooms, revealing intelligence data on the use of proxy servers – especially via the Tor software – and other security strategies such as Undernet’s host masking service. Name and age changes, which are used as a potential sexual grooming tactic, were also revealed through the use of Analyst’s Notebook software, and ISP information revealed the likelihood that many online predators were not using any safety mechanism, relying instead on the anonymity of the Internet. The activities of these online predators were analysed, especially in regard to child sexual grooming and the ‘posting’ of child pornography, which revealed some of the methods by which online predators utilised new Internet technologies to sexually groom and abuse children – using technologies such as instant messengers, webcams and microphones – as well as store and disseminate illegal materials on image sharing websites and peer-to-peer software such as Gigatribe. Analysis of the social structures of the chatrooms was also carried out, and the community functions and characteristics of each chatroom explored. The findings of this research have indicated several opportunities for further research. As a result of this research, recommendations are given on policy, prevention and response strategies with regard to online predators.
Abstract:
This paper presents a method of voice activity detection (VAD) suitable for high noise scenarios, based on the fusion of two complementary systems. The first system uses a proposed non-Gaussianity score (NGS) feature based on normal probability testing. The second system employs a histogram distance score (HDS) feature that detects changes in the signal through conducting a template-based similarity measure between adjacent frames. The decision outputs of the two systems are then merged using an open-by-reconstruction fusion stage. The accuracy of the proposed method was compared to several baseline VAD methods on a database created using real recordings of a variety of high-noise environments.
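The histogram-distance idea can be sketched generically: compute an amplitude histogram per frame and score the change between adjacent frames. The binning and the L1 distance below are assumptions for illustration; the paper's exact template-based HDS measure may differ.

```python
import math

def frame_histogram(frame, n_bins=16, lo=-1.0, hi=1.0):
    """Normalised amplitude histogram of one frame."""
    counts = [0] * n_bins
    width = (hi - lo) / n_bins
    for s in frame:
        idx = min(n_bins - 1, max(0, int((s - lo) / width)))
        counts[idx] += 1
    total = float(len(frame))
    return [c / total for c in counts]

def histogram_distance_scores(signal, frame_len=160):
    """L1 distance between amplitude histograms of adjacent frames."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, frame_len)]
    return [sum(abs(a - b) for a, b in zip(frame_histogram(prev),
                                           frame_histogram(cur)))
            for prev, cur in zip(frames, frames[1:])]

# 320 samples of silence followed by 320 samples of a tone: the score
# should spike at the silence-to-tone boundary.
sig = [0.0] * 320 + [math.sin(0.3 * n) for n in range(320)]
scores = histogram_distance_scores(sig)
```

On this toy signal the score is zero between the two silent frames and spikes at the transition into the tone, which is the kind of change detection the HDS feature exploits.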
Abstract:
Graduated licensing schemes have been found to reduce the crash risk of young novice drivers, but there is less evidence of their success with novice motorcycle riders. This study examined the riding experience of a sample of Australian learner-riders to establish the extent and variety of their riding practice during the learner stage. Riders completed an anonymous questionnaire at a compulsory rider-training course for the licensing test. The majority of participants were male (81%) with an average age of 33 years. They worked full time (81%), held an unrestricted driver's license (81%), and owned the motorcycle that they rode (79%). These riders had held their learner's license for an average of 6 months. On average, they rode 6.4 h/week. By the time they attempted the rider-licensing test, they had ridden a total of 101 h. Their total hours of on-road practice were comparable to those of learner-drivers at the same stage of licensing, but they had less experience in adverse or challenging road conditions. A substantial proportion had little or no experience of riding in the rain (57%), at night (36%), in heavy traffic (22%), on winding rural roads (52%), or on high-speed roads (51%). These findings highlight the differences in the learning processes between unsupervised novice motorcycle riders and supervised novice drivers. Further research is necessary to clarify whether specifying the conditions under which riders should practice during the graduated licensing process would likely reduce or increase their crash risk.
Abstract:
The question posed in this chapter is: To what extent does current education theory and practice prepare graduates for the creative economy? We first define what we mean by the term creative economy, explain why we think it is a significant point of focus, derive its key features, describe the human capital requirements of these features, and then discuss whether current education theory and practice are producing these human capital requirements. The term creative economy can be critiqued as a shibboleth, but as a high-level metaphor, it nevertheless has value in directing us away from certain sorts of economic activity and toward other kinds. Much economic activity is in no way creative. If I have a monopoly on some valued resource, I do not need to be creative. Other forms of economic activity are intensely creative. If I have no valued resources, I must create something that is valued. At its simplest and yet most profound, the idea of a creative economy suggests a capacity to compete based on engaging in a gainful activity that is different from everyone else’s, rather than pursuing the same endeavor more competitively than everyone else. The ability to differentiate on novelty is key to the concept of creative economy and key to our analysis of education for this economy. Therefore, we follow Potts and Cunningham (2008, p. 18) and Potts, Cunningham, Hartley, and Ormerod (2008) in their discussion of the economic significance of the creative industries and see the creative economy not as a sector but as a set of economic processes that act on the economy as a whole to invigorate innovation-based growth. We see the creative economy as suffused through all industry rather than as a sector in its own right. These economic processes are essentially concerned with the production of new ideas that ultimately become new products, services, industry sectors, or, in some cases, process or product innovations in older sectors. 
Therefore, our starting point is that modern economies depend on innovation, and we see the core of innovation as new knowledge of some kind. We commence with some observations about innovation.
Abstract:
Real-time kinematic (RTK) GPS techniques have been extensively developed for applications including surveying, structural monitoring, and machine automation. Limitations of the existing RTK techniques that hinder their application for geodynamics purposes are twofold: (1) the achievable RTK accuracy is on the level of a few centimeters, and the uncertainty of the vertical component is 1.5–2 times worse than those of the horizontal components, and (2) the RTK position uncertainty grows in proportion to the base-to-rover distance. The key limiting factor behind these problems is the significant effect of residual tropospheric errors on the positioning solutions, especially on the highly correlated height component. This paper develops a geometry-specified troposphere decorrelation strategy to achieve subcentimeter kinematic positioning accuracy in all three components. The key is to set up a relative zenith tropospheric delay (RZTD) parameter to absorb the residual tropospheric effects and to solve the established model as an ill-posed problem using the regularization method. In order to compute a reasonable regularization parameter and obtain an optimal regularized solution, the covariance matrix of the positional parameters estimated without the RZTD parameter, which is characterized by the observation geometry, is used to replace the quadratic matrix of their “true” values. As a result, the regularization parameter is adaptively computed with variation of the observation geometry. The experimental results show that the new method can efficiently alleviate the model’s ill-conditioning and stabilize the solution from a single data epoch. Compared to the results from the conventional least squares method, the new method can improve the long-range RTK solution precision from several centimeters to the subcentimeter level in all components. More significantly, the improvement in the height component is even greater. 
Several geosciences applications that require subcentimeter real‐time solutions can largely benefit from the proposed approach, such as monitoring of earthquakes and large dams in real‐time, high‐precision GPS leveling and refinement of the vertical datum. In addition, the high‐resolution RZTD solutions can contribute to effective recovery of tropospheric slant path delays in order to establish a 4‐D troposphere tomography.
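The regularization step described above can be illustrated with a generic Tikhonov (ridge) solution of an ill-conditioned least-squares problem. The toy design matrix and the fixed regularization parameter below are assumptions for the sketch; the paper computes the parameter adaptively from the observation geometry rather than fixing it.

```python
# Tikhonov (ridge) regularisation for a 2-parameter least-squares problem:
# x = (A^T A + lam * I)^-1 A^T b, stabilising an ill-conditioned model.

def solve_2x2(M, y):
    """Solve a 2x2 linear system M x = y by Cramer's rule."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(y[0] * M[1][1] - y[1] * M[0][1]) / det,
            (M[0][0] * y[1] - M[1][0] * y[0]) / det]

def tikhonov(A, b, lam):
    """Regularised normal equations for a 2-column design matrix A."""
    AtA = [[sum(A[k][i] * A[k][j] for k in range(len(A))) for j in range(2)]
           for i in range(2)]
    Atb = [sum(A[k][i] * b[k] for k in range(len(A))) for i in range(2)]
    AtA[0][0] += lam          # add lam on the diagonal
    AtA[1][1] += lam
    return solve_2x2(AtA, Atb)

# Nearly collinear columns make the plain normal equations ill-conditioned,
# analogous to the height/troposphere correlation discussed above.
A = [[1.0, 1.0001],
     [1.0, 0.9999],
     [1.0, 1.0000]]
b = [2.0, 2.0, 2.0]
x_reg = tikhonov(A, b, lam=1e-3)
```

Even a small diagonal loading keeps the solution near the well-behaved answer [1, 1], whereas the unregularised normal equations are dominated by the near-singularity of AᵀA.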
Abstract:
Provisional supervision (PS) is Hong Kong’s proposed new corporate rescue procedure. In essence, it is a procedure for the preparation by a professional, usually an accountant or a solicitor, of a proposal for a voluntary arrangement, supported by a moratorium. There should be little court involvement in the process, and it is anticipated that its costs and delays would be less than those of the alternative procedures currently available. This article retraces some of the key events and issues arising from the numerous policy and legislative debates about PS in Hong Kong. At present the Hong Kong government is in the midst of drafting a new Bill on corporate rescue procedure to be introduced to the HKSAR Legislative Council. This will be the third attempt. Setting aside the controversies and the content of this new effort by the Hong Kong administration, the Global Financial Crisis of 2008 signalled to the international policy and business community that free markets alone cannot be an effective regulatory mechanism. Having legal safeguards and clear rules to regulate the procedures and conduct of market participants is imperative to avoid future financial meltdowns.
Abstract:
The city of Scottsdale, Arizona, implemented the first fixed photo Speed Enforcement camera demonstration Program (SEP) on a US freeway in 2006. A comprehensive before-and-after analysis of the impact of the SEP on safety revealed significant reductions in crash frequency and severity, which indicates that the SEP is a promising countermeasure for improving safety. However, there is often a trade-off between safety and mobility when safety investments are considered. As a result, identifying safety countermeasures that both improve safety and reduce Travel Time Variability (TTV) is a desirable goal for traffic safety engineers. This paper reports on an analysis of the mobility impacts of the SEP, conducted by simulating the traffic network with and without the SEP, calibrated to real-world conditions. The simulation results show that the SEP decreased TTV: the risk of unreliable travel was at least 23% higher in the ‘without SEP’ scenario than in the ‘with SEP’ scenario. In addition, the total Travel Time Savings (TTS) from the SEP were estimated to be at least 569 vehicle-hours per year. Consequently, the SEP is an efficient countermeasure not only for reducing crashes but also for improving mobility through TTS and reduced TTV.
Abstract:
This paper presents a method of voice activity detection (VAD) for high noise scenarios, using a noise-robust voiced speech detection feature. The developed method is based on the fusion of two systems. The first system utilises the maximum peak of the normalised time-domain autocorrelation function (MaxPeak). The second system uses a novel combination of cross-correlation and the zero-crossing rate of the normalised autocorrelation to approximate a measure of signal pitch and periodicity (CrossCorr) that is hypothesised to be noise robust. The scores output by the two systems are then merged using weighted sum fusion to create the proposed autocorrelation zero-crossing rate (AZR) VAD. The accuracy of AZR was compared to state-of-the-art and standardised VAD methods, and it was shown to outperform the best performing system with an average relative improvement of 24.8% in half-total error rate (HTER) on the QUT-NOISE-TIMIT database, created using real recordings from high-noise environments.
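A minimal sketch of the MaxPeak feature named above: the maximum of the normalised time-domain autocorrelation over a lag range typical of speech pitch. The frame length, lag range, and test signals below are illustrative assumptions, not the paper's configuration.

```python
import math
import random

def max_autocorr_peak(frame, min_lag=20, max_lag=160):
    """Maximum normalised time-domain autocorrelation over candidate pitch lags."""
    energy = sum(s * s for s in frame)
    if energy == 0.0:
        return 0.0
    best = 0.0
    for lag in range(min_lag, min(max_lag, len(frame) - 1) + 1):
        r = sum(frame[n] * frame[n - lag] for n in range(lag, len(frame)))
        best = max(best, r / energy)
    return best

random.seed(0)
tone = [math.sin(2 * math.pi * 100 * n / 8000) for n in range(400)]   # voiced-like: 100 Hz at 8 kHz
noise = [random.uniform(-1.0, 1.0) for _ in range(400)]               # unvoiced-like
score_tone = max_autocorr_peak(tone)
score_noise = max_autocorr_peak(noise)
```

A periodic (voiced-like) frame produces a strong autocorrelation peak at its pitch lag, while an aperiodic frame does not, which is why the feature is useful as a voicing score before fusion.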