161 results for AD-HOC NETWORKS
Abstract:
Millions of people with print disabilities are denied the right to read. While some important efforts have been made to convert standard books to accessible formats and create accessible repositories, these have so far only addressed this crisis in an ad hoc way. This article argues that universally designed ebook libraries have the potential to substantially enable persons with print disabilities. As a case study of what is possible, we analyse 12 academic ebook libraries to map their levels of accessibility. The positive results from this study indicate that universally designed ebooks are more than possible; they exist. While results are positive, however, we also found that most ebook libraries have some features that frustrate full accessibility, and some ebook libraries present critical barriers for people with disabilities. Based on these findings, we consider that some combination of private pressure and public law is both possible and necessary to advance the right-to-read cause. With access improving and recent advances in international law, now is the time to push for universal design and equality.
Abstract:
This pilot project investigated the existing practices and processes of Proficient, Highly Accomplished and Lead teachers in the interpretation, analysis and implementation of National Assessment Program – Literacy and Numeracy (NAPLAN) data. A qualitative case study approach was the chosen methodology, with nine teachers across a variety of school sectors interviewed. Themes and sub-themes were identified from the participants’ interview responses, revealing the ways in which Queensland teachers work with NAPLAN data. The data illuminated that individual schools and teachers generally adopted their own ways of working with data, with approaches ranging from individual or ad hoc, to hierarchical, to a whole-school approach. Findings also revealed that data are the responsibility of various persons from within the school hierarchy; some work with the data electronically whilst others rely on manual manipulation. Manipulation of data is used for various purposes, including tracking performance, value adding and targeting programmes for specific groups of students, for example the gifted and talented. Whilst all participants had knowledge of intervention programmes and how practice could be modified, there were large inconsistencies in knowledge and skills across schools. Some see the use of data as a mechanism for accountability, whilst others mention data with regard to changing the school culture and identifying best practice. Overall, the findings showed inconsistencies in approach to focus area 5.4. Recommendations therefore include a more national approach to the use of educational data.
Abstract:
Determining the key variables of transportation disadvantage remains a great challenge, as the variables are commonly selected using ad hoc techniques. In order to identify the variables, this research develops a transportation disadvantage framework by applying the capability approach. The developed framework is statistically analysed using partial least squares-based software to determine the framework's fitness. The statistical analysis identifies mobility and socioeconomic variables that significantly influence transportation disadvantage. For the case of Brisbane, Australia, the research reveals the key socioeconomic variables for transportation disadvantage to be household structure, presence of a dependent family member, vehicle ownership, and driving licence possession.
Abstract:
This paper describes the development and experimental evaluation of a novel vision-based Autonomous Surface Vehicle with the purpose of performing coordinated docking manoeuvres with a target, such as an Autonomous Underwater Vehicle, on the water’s surface. The system architecture integrates two small processor units; the first performs vehicle control and implements a virtual force obstacle avoidance and docking strategy, with the second performing vision-based target segmentation and tracking. Furthermore, the architecture utilises wireless sensor network technology allowing the vehicle to be observed by, and even integrated within an ad-hoc sensor network. The system performance is demonstrated through real-world experiments.
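The abstract above mentions a "virtual force" obstacle avoidance and docking strategy but does not specify it; a generic potential-field sketch conveys the idea. All gains and the `influence` radius below are illustrative assumptions, not the authors' controller or values.

```python
import math

def virtual_force(pos, goal, obstacles, k_att=1.0, k_rep=50.0, influence=5.0):
    # Potential-field style steering: attraction toward the goal plus
    # repulsion from each obstacle inside an influence radius.
    # A generic sketch of the "virtual force" idea, not the paper's controller.
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 0 < d < influence:
            mag = k_rep * (1.0 / d - 1.0 / influence) / d ** 2
            fx += mag * dx
            fy += mag * dy
    return fx, fy
```

The resulting force vector would be fed to the vehicle's heading and thrust controllers; near an obstacle the repulsive term dominates, steering the vehicle around it before the attractive term resumes pulling it toward the docking target.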
Abstract:
The Kyoto Protocol is remarkable among global multilateral environmental agreements for its efforts to depoliticize compliance. However, attempts to create autonomous, arm’s length and rule-based compliance processes with extensive reliance on putatively neutral experts were only partially realized in practice in the first commitment period from 2008 to 2012. In particular, the procedurally constrained facilitative powers vested in the Facilitative Branch were circumvented, and expert review teams (ERTs) assumed pivotal roles in compliance facilitation. The ad hoc diplomatic and facilitative practices engaged in by these small teams of technical experts raise questions about the reliability and consistency of the compliance process. For the future operation of the Kyoto compliance system, it is suggested that ERTs should be confined to more technical and procedural roles, in line with their expertise. There would then be greater scope for the Facilitative Branch to assume a more comprehensive facilitative role, safeguarded by due process guarantees, in accordance with its mandate. However, if – as appears likely – the future compliance trajectories under the United Nations Framework Convention on Climate Change will include a significant role for ERTs without oversight by the Compliance Committee, it is important to develop appropriate procedural safeguards that reflect and shape the various technical and political roles these teams currently play.
Abstract:
The notion of being sure that you have completely eradicated an invasive species is fanciful because of imperfect detection and persistent seed banks. Eradication is commonly declared either on an ad hoc basis, on notions of seed bank longevity, or on setting arbitrary thresholds of 1% or 5% confidence that the species is not present. Rather than declaring eradication at some arbitrary level of confidence, we take an economic approach in which we stop looking when the expected costs outweigh the expected benefits. We develop theory that determines the number of years of absent surveys required to minimize the net expected cost. Given detection of a species is imperfect, the optimal stopping time is a trade-off between the cost of continued surveying and the cost of escape and damage if eradication is declared too soon. A simple rule of thumb compares well to the exact optimal solution using stochastic dynamic programming. Application of the approach to the eradication programme of Helenium amarum reveals that the actual stopping time was a precautionary one given the ranges for each parameter. © 2006 Blackwell Publishing Ltd/CNRS.
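The cost trade-off described above can be sketched numerically: keep surveying while the expected cost of escape and damage exceeds the cost of another survey year. The prior `p0`, per-survey detection probability `d`, and the cost figures below are invented illustrative parameters, not those of the Helenium amarum programme.

```python
def posterior_present(p0, d, n):
    # Bayes update: probability the species persists after n absent surveys,
    # given prior presence probability p0 and per-survey detection probability d.
    miss = p0 * (1 - d) ** n
    return miss / (miss + (1 - p0))

def optimal_stopping_years(p0, d, survey_cost, damage_cost, max_years=100):
    # Pick the number of absent-survey years n that minimizes total expected
    # cost: n years of surveying plus expected damage if eradication is
    # declared while the species is in fact still present.
    def total_cost(n):
        return n * survey_cost + posterior_present(p0, d, n) * damage_cost
    return min(range(max_years + 1), key=total_cost)
```

As the abstract's rule of thumb suggests, raising the damage cost (or lowering detectability) pushes the optimal declaration later, while raising the survey cost pulls it earlier.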
Abstract:
Most standard algorithms for prediction with expert advice depend on a parameter called the learning rate. This learning rate needs to be large enough to fit the data well, but small enough to prevent overfitting. For the exponential weights algorithm, a sequence of prior work has established theoretical guarantees for higher and higher data-dependent tunings of the learning rate, which allow for increasingly aggressive learning. But in practice such theoretical tunings often still perform worse (as measured by their regret) than ad hoc tuning with an even higher learning rate. To close the gap between theory and practice we introduce an approach to learn the learning rate. Up to a factor that is at most (poly)logarithmic in the number of experts and the inverse of the learning rate, our method performs as well as if we knew the empirically best learning rate from a large range that includes both conservative small values and values much higher than those for which formal guarantees were previously available. Our method employs a grid of learning rates, yet runs in linear time regardless of the size of the grid.
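For readers unfamiliar with the setting, the exponential weights algorithm and the role of the learning rate grid can be sketched minimally. The grid search below is a plain hindsight comparison for illustration only; the paper's contribution is an online method that matches the best grid point in linear time, which this sketch does not reproduce.

```python
import math

def exponential_weights(loss_rows, eta):
    # Exponential weights forecaster: maintain one weight per expert,
    # suffer the weighted-average loss each round, then downweight each
    # expert by exp(-eta * its loss). Returns cumulative mixture loss.
    w = [1.0] * len(loss_rows[0])
    total = 0.0
    for losses in loss_rows:
        s = sum(w)
        total += sum(wi * li for wi, li in zip(w, losses)) / s
        w = [wi * math.exp(-eta * li) for wi, li in zip(w, losses)]
    return total

def best_eta_in_hindsight(loss_rows, grid):
    # Offline illustration of "the empirically best learning rate in a grid".
    return min(grid, key=lambda eta: exponential_weights(loss_rows, eta))
```

On easy data (one expert clearly dominant) an aggressive, high eta concentrates weight quickly and incurs less loss, which is exactly the regime where conservative theoretical tunings lose to ad hoc high learning rates.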
Abstract:
It’s commonly assumed that psychiatric violence is motivated by delusions, but here the concept of a reversed impetus is explored, to understand whether delusions are formed as ad-hoc or post-hoc rationalizations of behaviour or in advance of the actus reus. The reflexive violence model proposes that perceptual stimuli have motivational power, which may trigger unwanted actions and hallucinations. The model is based on the theory of ecological perception, where the opportunities enabled by an object are cues to act. As an apple triggers a desire to eat, a gun triggers a desire to shoot. These affordances (as they are called) are part of the perceptual apparatus; they allow the direct recognition of objects, and in emergencies they enable the fastest possible reactions. Even under normal circumstances, the presence of a weapon will trigger inhibited violent impulses. The presence of a victim will do so as well, but under normal circumstances these affordances do not become violent because negative action impulses are totally inhibited, whereas in psychotic illness negative action impulses are treated as emergencies and bypass frontal inhibitory circuits. What would have been object recognition becomes a blind automatic action. A range of mental illnesses can cause inhibition to be bypassed. At its most innocuous, this causes simple hallucinations, in which the motivational power of an object is misattributed. But ecological perception may also have the power to trigger serious violence: a kind that is devoid of motives or planning and is often shrouded in amnesia or post-rational delusions.
Abstract:
The proliferation of the web presents an unsolved problem of automatically analyzing billions of pages of natural language. We introduce a scalable algorithm that clusters hundreds of millions of web pages into hundreds of thousands of clusters. It does this on a single mid-range machine using efficient algorithms and compressed document representations. It is applied to two web-scale crawls covering tens of terabytes: ClueWeb09 and ClueWeb12 contain 500 and 733 million web pages respectively, and were clustered into 500,000 to 700,000 clusters. To the best of our knowledge, such fine-grained clustering has not been previously demonstrated. Previous approaches clustered a sample, which limits the maximum number of discoverable clusters. The proposed EM-tree algorithm uses the entire collection in clustering and produces several orders of magnitude more clusters than existing algorithms. Fine-grained clustering is necessary for meaningful clustering in massive collections, where the number of distinct topics grows linearly with collection size. These fine-grained clusters show improved cluster quality when assessed with two novel evaluations using ad hoc search relevance judgments and spam classifications for external validation. These evaluations solve the problem of assessing cluster quality where categorical labeling is unavailable or infeasible.
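The EM-tree algorithm itself is tree-structured and operates on compressed document representations, neither of which is reproduced here; for intuition, its underlying assign/re-centre (EM-style) alternation is the same as in flat k-means, sketched minimally below with a deliberately naive deterministic initialization.

```python
def kmeans(points, k, iters=20):
    # Minimal flat k-means: alternate between assigning each point to its
    # nearest centre (E-step) and recomputing each centre as the mean of
    # its cluster (M-step). Tree-structured variants such as EM-tree apply
    # this alternation hierarchically and at far larger scale.
    centres = [tuple(p) for p in points[:k]]  # naive init, for determinism
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centres[c])))
            clusters[i].append(p)
        for i, cl in enumerate(clusters):
            if cl:  # keep the old centre if a cluster empties out
                centres[i] = tuple(sum(xs) / len(cl) for xs in zip(*cl))
    return centres
```

A flat sweep like this costs O(nk) per iteration, which is why sampling was previously used at web scale; the tree structure and compressed vectors are what let the full collection be clustered instead.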
Abstract:
‘Complexity’ is a term that is increasingly prevalent in conversations about building capacity for 21st Century professional engineers. Society is grappling with the urgent and challenging reality of accommodating seven billion people, meeting needs and innovating lifestyle improvements in ways that do not destroy atmospheric, biological and oceanic systems critical to life. Over the last two decades in particular, engineering educators have been active in attempting to build capacity amongst professionals to deliver ‘sustainable development’ in this rapidly changing global context. However, curriculum literature clearly points to a lack of significant progress, with efforts best described as ad hoc and highly varied. Given the limited timeframes for action to curb environmental degradation proposed by scientists and intergovernmental agencies, the authors of this paper propose it is imperative that curriculum renewal towards education for sustainable development proceeds rapidly, systemically, and in a transformational manner. Within this context, the paper discusses the need to consider a multiple-track approach to building capacity for 21st Century engineering, including priorities and timeframes for undergraduate and postgraduate curriculum renewal. The paper begins with a contextual discussion of the term complexity and how it relates to life in the 21st Century. The authors then present a whole-of-system approach for planning and implementing rapid curriculum renewal that addresses the critical roles of several generations of engineering professionals over the next three decades. The paper concludes with observations regarding engaging with this approach in the context of emerging accreditation requirements and existing curriculum renewal frameworks.
Abstract:
Background Foot dorsiflexion plays an essential role in both controlling balance and human gait. Electromyography (EMG) and sonomyography (SMG) can provide information on several aspects of muscle function. The aim was to establish the relationship between the EMG and SMG variables during isotonic contractions of foot dorsiflexors. Methods Twenty-seven healthy young adults performed the foot dorsiflexion test on a device designed ad hoc. EMG variables were maximum peak and area under the curve. Muscular architecture variables were muscle thickness and pennation angle. Descriptive statistical analysis, inferential analysis and a multivariate linear regression model were carried out. Statistical significance was set at p < 0.05. Results The correlation between EMG variables and SMG variables was r = 0.462 (p < 0.05). The linear regression model for the dependent variable “peak normalized tibialis anterior (TA)” from the independent variables “pennation angle and thickness” was significant (p = 0.002), with an explained variance of R2 = 0.693 and SEE = 0.16. Conclusions There is a significant relationship and degree of contribution between EMG and SMG variables during isotonic contractions of the TA muscle. Our results suggest that EMG and SMG can be feasible tools for monitoring and assessment of foot dorsiflexors. TA muscle parameterization and assessment are relevant in order to determine whether increased strength accelerates the recovery of lower limb injuries.
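A regression of the form reported above (two predictors plus an intercept, with R² as explained variance) can be sketched generically. The function below is an ordinary least squares illustration on synthetic data, not the study's model or measurements.

```python
import numpy as np

def r_squared(X, y):
    # Fit y ~ intercept + X by ordinary least squares and return R^2,
    # the proportion of variance in y explained by the predictors.
    X1 = np.column_stack([np.ones(len(y)), X])   # prepend intercept column
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    ss_res = float(resid @ resid)
    ss_tot = float(((y - y.mean()) ** 2).sum())
    return 1.0 - ss_res / ss_tot
```

With the study's two predictors (pennation angle and thickness) an R² of 0.693 would mean roughly 69% of the variance in normalized TA peak is explained by muscle architecture.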
Abstract:
For the first decade of its existence, the concept of citizen journalism has described an approach which was seen as a broadening of the participant base in journalistic processes, but still involved only a comparatively small subset of overall society – for the most part, citizen journalists were news enthusiasts and “political junkies” (Coleman, 2006) who, as some exasperated professional journalists put it, “wouldn’t get a job at a real newspaper” (The Australian, 2007), but nonetheless followed many of the same journalistic principles. The investment – if not of money, then at least of time and effort – involved in setting up a blog or participating in a citizen journalism Website remained substantial enough to prevent the majority of Internet users from engaging in citizen journalist activities to any significant extent; what emerged in the form of news blogs and citizen journalism sites was a new online elite which for some time challenged the hegemony of the existing journalistic elite, but gradually also merged with it. The mass adoption of next-generation social media platforms such as Facebook and Twitter, however, has led to the emergence of a new wave of quasi-journalistic user activities which now much more closely resemble the “random acts of journalism” which JD Lasica envisaged in 2003. Social media are not exclusively or even predominantly used for citizen journalism; instead, citizen journalism is now simply a by-product of user communities engaging in exchanges about the topics which interest them, or tracking emerging stories and events as they happen. 
Such platforms – and especially Twitter with its system of ad hoc hashtags that enable the rapid exchange of information about issues of interest – provide spaces for users to come together to “work the story” through a process of collaborative gatewatching (Bruns, 2005), content curation, and information evaluation which takes place in real time and brings together everyday users, domain experts, journalists, and potentially even the subjects of the story themselves. Compared to the spaces of news blogs and citizen journalism sites, but also of conventional online news Websites, which are controlled by their respective operators and inherently position user engagement as a secondary activity to content publication, these social media spaces are centred around user interaction, providing a third-party space in which everyday as well as institutional users, laypeople as well as experts converge without being able to control the exchange. Drawing on a number of recent examples, this article will argue that this results in a new dynamic of interaction and enables the emergence of a more broadly-based, decentralised, second wave of citizen engagement in journalistic processes.
Abstract:
The cognitive benefits of biophilia have been studied quite extensively, dating as far back as the 1980s, while studies into economic benefits are still in their infancy. Recent research has attempted to quantify a number of economic returns on biophilic elements; however, knowledge in this field is still ad hoc and highly variable. Many studies acknowledge difficulties in discerning information such as certain social and aesthetic benefits. While conceptual understanding of the physiological and psychological effects of exposure to nature is widely recognised and understood, this has not yet been systematically translated into monetary terms. It is clear from the literature that further research is needed both to obtain data on the economics of biophilic urbanism and to create the business case for biophilic urbanism. With this in mind, this paper will briefly highlight biophilic urbanism, referencing previous work in the field. It will then explore a number of emergent gaps in the measurable economic understanding of these elements and suggest opportunities for engaging decision makers in the business case for biophilic urbanism. The paper concludes with recommendations for moving forward through targeted research and economic analysis.
Abstract:
In 2007 the National Framework for Energy Efficiency provided funding for the first survey of energy efficiency education across all Australian universities teaching engineering education. The survey asked the question, ‘What is the state of education for energy efficiency in Australian engineering education?’. There was an excellent response to the survey, with 48 course responses from lecturers across 27 universities from every state and territory in Australia, and 260 student responses from 18 courses across 8 universities from all 6 states. It is concluded from the survey findings that the state of education for energy efficiency in Australian engineering education is currently highly variable and ad hoc across universities and engineering disciplines.
Abstract:
Creative and ad-hoc work often involves non-digital artifacts, such as whiteboards and post-it notes. While this preferred method of brainstorming and idea development facilitates work among collocated participants, it makes it particularly difficult to involve remote participants, to say nothing of cases where live social involvement is required and the number and location of remote participants can be vast. Our work originally focused on large distributed teams in business entities; the vast majority of teams in large organizations are distributed teams. Our team of corporate researchers set out to identify state-of-the-art technologies that could facilitate the scenarios mentioned above. This paper is an account of a research project in the area of enterprise collaboration, with a strong focus on the aspects of human-computer interaction in mixed-mode environments, especially in areas of collaboration where computers still play a secondary role. It describes a currently running corporate research project. In this paper we signal the potential use of the technology in situations where community involvement is either required or desirable. The goal of the paper is to initiate a discussion on the use of technologies initially designed to support enterprise collaboration in situations requiring community engagement. In other words, it is a contribution of technically focused research exploring the uses of the technology in areas such as social engagement and community involvement. © 2012 IEEE.