214 results for Space Sciences
Abstract:
Current knowledge about the relationship between transport disadvantage and activity space size is limited to urban areas, and as a result very little is known to date about this link in a rural context. In addition, although research has identified transport disadvantaged groups based on the size of their activity spaces, these studies have not empirically explained such differences, and the result is often a poor identification of the problems facing disadvantaged groups. Research has shown that transport disadvantage varies over time, yet the static nature of analysis using the activity space concept in previous research has lacked the ability to identify transport disadvantage in time. Activity space is a dynamic concept and therefore has great potential for capturing temporal variations in behaviour and access to opportunities. This research derives measures of the size and fullness of activity spaces for 157 individuals for weekdays, weekends, and the week as a whole, using weekly activity-travel diary data from three case study areas located in rural Northern Ireland. Four focus groups were also conducted in order to triangulate the quantitative findings and to explain the differences between socio-spatial groups. The findings of this research show that despite having smaller activity spaces, individuals were not disadvantaged because they were able to access their required activities locally. Car ownership was found to be an important lifeline in rural areas; temporal disaggregation of the data reveals that this is true only on weekends, due to a lack of public transport services. In addition, despite activity spaces being of a similar size, the fullness of the activity spaces of low-income individuals was found to be significantly lower than that of their high-income counterparts. Focus group data show that financial constraints and poor connections, both between public transport services and between transport routes and opportunities, forced individuals to participate in activities located along the main transport corridors.
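For illustration only, one common way to measure the size of an activity space is the area of the convex hull around the locations recorded in a travel diary, computed separately for weekday and weekend entries. The coordinates below are invented, and the thesis's actual size and fullness measures are not specified in this abstract.

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)

# Hypothetical (easting, northing) coordinates, in km, of the activity
# locations one person reports in a weekly travel diary.
weekday_locs = rng.normal(loc=[0.0, 0.0], scale=[5.0, 3.0], size=(12, 2))
weekend_locs = rng.normal(loc=[0.0, 0.0], scale=[1.5, 1.0], size=(5, 2))

def activity_space_area(locations):
    """Convex-hull area (km^2) of the visited locations."""
    if len(locations) < 3:
        return 0.0
    return ConvexHull(locations).volume   # for 2-D input, .volume is the area

print(f"weekday activity space: {activity_space_area(weekday_locs):.1f} km^2")
print(f"weekend activity space: {activity_space_area(weekend_locs):.1f} km^2")
```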
Abstract:
This report provides an evaluation of the Capalaba Youth Space. The evaluation included elements of process and impact evaluation and used a participatory action research approach informed by engagement processes, focus groups, a community survey, interviews and secondary analysis of existing data.
Abstract:
Estimating and predicting the degradation processes of engineering assets is crucial for reducing costs and ensuring the productivity of enterprises. Assisted by modern condition monitoring (CM) technologies, most asset degradation processes can be revealed by various degradation indicators extracted from CM data. Maintenance strategies developed using these degradation indicators (i.e. condition-based maintenance) are more cost-effective, because unnecessary maintenance activities are avoided when an asset is still in a healthy state. A practical difficulty in condition-based maintenance (CBM) is that, in most situations, degradation indicators extracted from CM data can only partially reveal asset health states. Underestimating this uncertainty in the relationship between degradation indicators and health states can cause excessive false alarms or failures without pre-alarms. The state space model provides an efficient approach to describing a degradation process using indicators that only partially reveal health states. However, existing state space models of asset degradation largely depend on assumptions such as discrete time, discrete states, linearity, and Gaussianity. The discrete time assumption requires that failures and inspections only happen at fixed intervals. The discrete state assumption entails discretising continuous degradation indicators, which requires expert knowledge and often introduces additional errors. The linear and Gaussian assumptions are not consistent with the nonlinear and irreversible degradation processes of most engineering assets. This research proposes a Gamma-based state space model, free of the discrete time, discrete state, linear and Gaussian assumptions, to model partially observable degradation processes. Monte Carlo-based algorithms are developed to estimate model parameters and asset remaining useful lives. In addition, this research develops a continuous state partially observable semi-Markov decision process (POSMDP) to model a degradation process that follows the Gamma-based state space model under various maintenance strategies; optimal maintenance strategies are obtained by solving the POSMDP. Simulation studies are performed in MATLAB, and case studies are conducted using data from an accelerated life test of a gearbox and from the liquefied natural gas industry. The results show that the proposed Monte Carlo-based EM algorithm estimates model parameters accurately. They also show that the proposed Gamma-based state space model fits monotonically increasing degradation data from the accelerated life test of the gearbox better than linear and Gaussian state space models. Furthermore, both the simulation and case studies show that the prediction algorithm based on the Gamma-based state space model identifies the mean value and confidence interval of asset remaining useful lives accurately. The simulation study also shows that the proposed maintenance strategy optimisation method based on the POSMDP is more flexible than one that assumes a predetermined strategy structure and uses renewal theory, and that it obtains more cost-effective strategies than a recently published maintenance strategy optimisation method by optimising the next maintenance activity and the waiting time until that activity simultaneously.
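The modelling pipeline sketched in this abstract (a continuous-state Gamma degradation model, Monte Carlo state estimation, and Monte Carlo prediction of remaining useful life) can be illustrated with a minimal sketch. Everything below is hypothetical: the parameters, the additive-noise observation equation, the bootstrap particle filter and the failure threshold are illustrative stand-ins, not the thesis's actual model, EM algorithm or POSMDP.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Gamma-process degradation: the state grows by Gamma-distributed
# increments (monotone, non-Gaussian); the CM indicator observes it with noise.
alpha, beta, dt = 1.2, 0.5, 1.0      # increment shape rate, scale, time step
sigma_obs = 0.3                      # observation noise std (partial observability)
failure_threshold = 20.0

def simulate(T):
    """Simulate a degradation path x and its noisy indicator y."""
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = x[t - 1] + rng.gamma(alpha * dt, beta)
    y = x + rng.normal(0.0, sigma_obs, size=T)
    return x, y

def particle_filter(y, n_particles=2000):
    """Bootstrap particle filter for the latent degradation state."""
    particles = np.zeros(n_particles)
    for obs in y:
        particles = particles + rng.gamma(alpha * dt, beta, size=n_particles)
        w = np.exp(-0.5 * ((obs - particles) / sigma_obs) ** 2) + 1e-12
        w /= w.sum()
        particles = rng.choice(particles, size=n_particles, p=w)   # resample
    return particles

def rul_samples(particles, max_horizon=200):
    """Monte Carlo remaining-useful-life samples from the filtered state."""
    state = particles.copy()
    ruls = np.full(state.shape, float(max_horizon))
    for h in range(1, max_horizon + 1):
        state = state + rng.gamma(alpha * dt, beta, size=state.shape)
        newly_failed = (state >= failure_threshold) & (ruls == max_horizon)
        ruls[newly_failed] = h
    return ruls

x, y = simulate(30)
posterior = particle_filter(y)
ruls = rul_samples(posterior)
print(f"filtered state: {posterior.mean():.2f} (true {x[-1]:.2f})")
print(f"RUL: mean {ruls.mean():.1f}, 90% interval "
      f"[{np.quantile(ruls, 0.05):.0f}, {np.quantile(ruls, 0.95):.0f}]")
```

Gamma increments keep the simulated degradation monotone and non-Gaussian, which is the property the abstract contrasts with linear-Gaussian state space models.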
Abstract:
The traditional Vector Space Model (VSM) is not able to represent both the structure and the content of XML documents. This paper introduces a novel method of representing XML documents in a Tensor Space Model (TSM) and then utilizing it for clustering. Empirical analysis shows that the proposed method is scalable to large datasets, and that the factorized matrices produced by the method help to improve the quality of clusters through an enriched document representation of both structure and content information.
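A minimal sketch of the general idea (not the paper's actual tensor construction or factorisation): build a small documents × structural-paths × terms tensor, unfold it along the document mode, factorise it with a truncated SVD, and cluster the resulting document factors. The toy documents, paths and terms are invented.

```python
import numpy as np
from numpy.linalg import svd
from sklearn.cluster import KMeans

# Toy tensor: documents x structural paths x terms (counts), so that both the
# XML structure (paths) and the content (terms) are encoded.
docs = ["d1", "d2", "d3", "d4"]
paths = ["/article/title", "/article/body", "/book/chapter"]
terms = ["space", "model", "cluster"]

T = np.zeros((len(docs), len(paths), len(terms)))
T[0, 0, 0] = 2; T[0, 1, 1] = 1          # d1: article about space models
T[1, 0, 0] = 1; T[1, 1, 2] = 2          # d2: article about clustering
T[2, 2, 0] = 3                          # d3: book chapter about space
T[3, 2, 2] = 2; T[3, 2, 1] = 1          # d4: book chapter about models/clusters

# Mode-1 unfolding: each document becomes one (path x term) row vector.
X = T.reshape(len(docs), -1)

# Low-rank factorisation of the unfolded tensor, then clustering of the
# document factors.
U, S, Vt = svd(X, full_matrices=False)
doc_factors = U[:, :2] * S[:2]
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(doc_factors)
print(dict(zip(docs, labels)))
```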
Abstract:
Asset health inspections can produce two types of indicators: (1) direct indicators (e.g. the thickness of a brake pad, or the crack depth on a gear) which directly relate to a failure mechanism; and (2) indirect indicators (e.g. indicators extracted from vibration signals and oil analysis data) which can only partially reveal a failure mechanism. While direct indicators enable more precise assessment of asset health, they are often more difficult to obtain than indirect indicators. The state space model provides an efficient approach to estimating direct indicators from indirect indicators. However, existing state space models for estimating direct indicators largely depend on assumptions such as discrete time, discrete states, linearity, and Gaussianity. The discrete time assumption requires fixed inspection intervals. The discrete state assumption entails discretising continuous degradation indicators, which often introduces additional errors. The linear and Gaussian assumptions are not consistent with the nonlinear and irreversible degradation processes of most engineering assets. This paper proposes a state space model without these assumptions. Monte Carlo-based algorithms are developed to estimate the model parameters and the remaining useful life, and are evaluated through numerical simulations in MATLAB. The results show that both the parameters and the remaining useful life are estimated accurately. Finally, the new state space model is used to process vibration and crack depth data from an accelerated test of a gearbox. In this application, the new state space model fits the data better than a state space model with linear and Gaussian assumptions.
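In equation form, the class of models the abstract describes can be written as a continuous-state, non-Gaussian state-space model. The notation below is illustrative and is not the paper's own:

```latex
% x_{t_k}: direct indicator (e.g. crack depth); y_{t_k}: indirect indicator;
% \Delta_k: (possibly irregular) inspection interval; symbols are illustrative.
x_{t_k} = x_{t_{k-1}} + \eta_k, \qquad
\eta_k \sim \mathrm{Gamma}\!\left(\alpha\,\Delta_k,\ \beta\right), \qquad
y_{t_k} = g\!\left(x_{t_k}\right) + \varepsilon_k .
```

Because the increments are Gamma-distributed, the state is continuous, monotone and non-Gaussian, and because the intervals Δ_k need not be constant, inspections need not occur at fixed times.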
Abstract:
The Acquisition of Land Act 1967 (Qld) (‘the Act’) deals with the acquisition of land by the State for public purposes and provides for compensation. The issue that arose for determination in Sorrento Medical Service Pty Ltd v Chief Executive, Dept of Main Roads [2007] QCA 73 was whether the appellant was entitled to claim compensation under the Act in respect of land resumed by the Main Roads Department over which the appellant had an exclusive contractual licence for car parking spaces for use in association with a medical centre leased by the appellant. At first instance, it was held by the Land Court that the appellant was not entitled to compensation for the resumption of the car parking spaces. The basis for this decision by the Land Court was that a right to compensation only exists where resumption has taken some proprietary interest of the claimant in the land. Following an appeal to the Land Appeal Court being dismissed, the appellant instituted the present appeal to the Queensland Court of Appeal (McMurdo P, Holmes JA and Chesterman J).
Abstract:
Using six lattice types (4×4, 5×5, and 6×6 square lattices; a 3×3×3 cubic lattice; and 2+3+4+3+2 and 4+5+6+5+4 triangular lattices), three alphabet sizes (HP, HNUP, and 20 letters), and two energy functions, the designability of protein structures is calculated based on random sampling of structures and common biased sampling (CBS) of protein sequence space. Three quantities defined to elucidate designability are then calculated: the stability (average energy gap), the foldability, and the partnum of the structure. The authors find that, whatever the lattice type, alphabet size, and energy function used, highly designable (preferred) structures emerge. For all cases considered, local interactions reduce degeneracy and increase designability. The designability is sensitive to the lattice type, alphabet size, energy function, and sampling method of the sequence space. Compared with the random sampling method, both the CBS and the Metropolis Monte Carlo sampling methods yield higher designability. The correlation coefficients between the designability, stability, and foldability are mostly larger than 0.5, demonstrating a strong correlation between them, whereas the correlation between the designability and the partnum is weaker because the partnum is independent of the energy. The results are useful for practical applications of the designability principle, such as predicting protein tertiary structure.
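A minimal sketch of designability-by-sampling on the smallest comparable case: a 3×3 square lattice, the two-letter HP alphabet, an H-H contact energy and random sequence sampling. The larger lattices, the HNUP and 20-letter alphabets, the second energy function and the CBS procedure from the abstract are not reproduced here.

```python
import random
from collections import defaultdict

L = 3                                   # 3x3 square lattice, 9-mer chains
cells = [(r, c) for r in range(L) for c in range(L)]

def compact_walks():
    """Enumerate self-avoiding walks that visit every cell of the lattice."""
    walks = []
    def dfs(path):
        if len(path) == L * L:
            walks.append(tuple(path))
            return
        r, c = path[-1]
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < L and 0 <= nc < L and (nr, nc) not in path:
                dfs(path + [(nr, nc)])
    for start in cells:
        dfs([start])
    return walks

def contact_map(walk):
    """Non-bonded nearest-neighbour contacts (i, j) with |i - j| > 1."""
    pos = {p: i for i, p in enumerate(walk)}
    contacts = set()
    for (r, c), i in pos.items():
        for q in ((r + 1, c), (r, c + 1)):
            j = pos.get(q)
            if j is not None and abs(i - j) > 1:
                contacts.add((min(i, j), max(i, j)))
    return frozenset(contacts)

# Distinct structures are identified by their contact maps.
structures = sorted({contact_map(w) for w in compact_walks()}, key=sorted)

def energy(seq, cmap):
    """HP model: each H-H contact contributes -1."""
    return -sum(seq[i] == 'H' and seq[j] == 'H' for i, j in cmap)

# Designability of a structure: number of sampled sequences whose unique
# lowest-energy (ground-state) structure it is.
random.seed(0)
designability = defaultdict(int)
for _ in range(20000):
    seq = ''.join(random.choice('HP') for _ in range(L * L))
    energies = [energy(seq, s) for s in structures]
    e_min = min(energies)
    if energies.count(e_min) == 1:               # unique ground state only
        designability[energies.index(e_min)] += 1

top = sorted(designability.items(), key=lambda kv: -kv[1])[:5]
print("most designable structures (index, count):", top)
```

Even on this tiny lattice the counts are strongly non-uniform, which is the "emergence of highly designable structures" the abstract reports at larger scales.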
Abstract:
An initialisation process is a key component in modern stream cipher design. A well-designed initialisation process should ensure that each key-IV pair generates a different key stream. In this paper, we analyse two ciphers, A5/1 and Mixer, for which this does not happen due to state convergence. We show how the state convergence problem occurs and estimate the effective key-space in each case.
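A toy illustration of the phenomenon (not A5/1 or Mixer): if the state-update function used during initialisation is not injective, distinct key-IV loadings collapse onto the same internal state, so the effective key space after initialisation is smaller than the loaded key-IV space. The register size, loading rule and update function below are invented.

```python
import itertools

STATE_BITS = 12
MASK = (1 << STATE_BITS) - 1

def non_injective_step(state):
    # Hypothetical update: the feedback ORs two state bits together, which
    # loses information and makes the state-update map many-to-one.
    fb = ((state >> 11) | (state >> 3)) & 1
    return ((state << 1) | fb) & MASK

def init(key, iv, rounds=16):
    state = (key ^ (iv << 6)) & MASK     # toy key-IV loading (injective)
    for _ in range(rounds):
        state = non_injective_step(state)
    return state

loaded = set()
converged = set()
for key, iv in itertools.product(range(64), range(64)):   # 6-bit key, 6-bit IV
    loaded.add((key, iv))
    converged.add(init(key, iv))

print(f"key-IV pairs loaded       : {len(loaded)}")
print(f"distinct states after init: {len(converged)}")
```

The gap between the two counts is the state convergence the paper analyses: different key-IV pairs that end in the same state necessarily generate the same keystream.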
Abstract:
The role of ions in the production of atmospheric particles has gained wide interest due to their profound impact on climate. Away from anthropogenic sources, molecules are ionized by alpha radiation from radon exhaled from the ground and by cosmic gamma radiation from space. These molecular ions quickly form 'cluster ions', typically smaller than about 1.5 nm. Using our measurements and the published literature, we present evidence that cluster ion concentrations in forest areas are consistently higher than outside them. Since alpha radiation cannot penetrate more than a few centimetres of soil, radon present deep in the ground cannot directly contribute to the measured cluster ion concentrations. We propose an additional mechanism whereby radon, which is water soluble, is brought up by trees and plants through the uptake of groundwater and released into the atmosphere by transpiration. We estimate that, in a forest comprising eucalyptus trees spaced 4 m apart, approximately 28% of the radon in the air may be released by transpiration. Considering that 24% of the Earth's land area is still covered in forests, these findings have potentially important implications for atmospheric aerosol formation and climate.
Abstract:
The emergence of mobile and ubiquitous computing has created what is referred to as a hybrid space: a virtual layer of digital information and interaction opportunities that sits on top of, and augments, the physical environment. The increasing connectedness through such media, from anywhere to anybody at any time, makes us less dependent on being physically present somewhere in particular. But what is the role of ubiquitous computing in making physical presence at a particular place more attractive? Acknowledging historic context and identity as important attributes of place, this work embarks on a 'global sense of place' in which the cultural diversity, multiple identities, backgrounds, skills and experiences of people traversing a place are regarded as social assets of that place. The aim is to explore how the physical architecture and infrastructure of a place can be mediated towards making invisible social assets visible, thus augmenting people's situated social experience. The focus is on embodied media, i.e. media that materialise digital information as observable and sometimes interactive parts of the physical environment and hence amplify people's real-world experience, rather than substituting for it or moving it into virtual spaces.
Abstract:
In this paper, I show how new spaces are being prefigured for colonisation in the language of contemporary technology policy. Drawing on a corpus of 1.3 million words collected from technology policy centres throughout the world, I show the role of policy language in creating the foundations of an emergent form of political economy. The analysis is informed by principles from critical discourse analysis (CDA) and classical political economy. It foregrounds a functional aspect of language called process metaphor to show how aspects of human activity are prefigured for mass commodification by the manipulation of irrealis spaces. I also show how the fundamental element of any new political economy, the property element, is being largely ignored. The potential creation of a global space as concrete as landed property – electromagnetic spectrum – has significant ramifications for the future of social relations in any global “knowledge economy”.
Abstract:
Sample complexity results from computational learning theory, when applied to neural network learning for pattern classification problems, suggest that for good generalization performance the number of training examples should grow at least linearly with the number of adjustable parameters in the network. Results in this paper show that if a large neural network is used for a pattern classification problem and the learning algorithm finds a network with small weights that has small squared error on the training patterns, then the generalization performance depends on the size of the weights rather than the number of weights. For example, consider a two-layer feedforward network of sigmoid units, in which the sum of the magnitudes of the weights associated with each unit is bounded by A and the input dimension is n. We show that the misclassification probability is no more than a certain error estimate (that is related to squared error on the training set) plus A³√((log n)/m) (ignoring log A and log m factors), where m is the number of training patterns. This may explain the generalization performance of neural networks, particularly when the number of training examples is considerably smaller than the number of weights. It also supports heuristics (such as weight decay and early stopping) that attempt to keep the weights small during training. The proof techniques appear to be useful for the analysis of other pattern classifiers: when the input domain is a totally bounded metric space, we use the same approach to give upper bounds on misclassification probability for classifiers with decision boundaries that are far from the training examples.
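As a rough numerical illustration (not from the paper), the sketch below evaluates the A³√((log n)/m) term for made-up values of A, n and m; the point is that it shrinks with the number of training patterns and grows with the weight bound, independently of the number of weights.

```python
import math

# Illustrative only: evaluate the size-of-weights term A^3 * sqrt(log(n) / m)
# from the bound quoted in the abstract.  The constant factors and the log A,
# log m terms that the bound ignores are omitted here too.
def weight_size_term(A, n, m):
    return A ** 3 * math.sqrt(math.log(n) / m)

n = 100  # input dimension (hypothetical)
for A in (1.0, 2.0, 4.0):
    for m in (1_000, 10_000, 100_000):
        print(f"A={A:3.1f}, m={m:7d}: term = {weight_size_term(A, n, m):.4f}")
```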
Abstract:
Kernel-based learning algorithms work by embedding the data into a Euclidean space, and then searching for linear relations among the embedded data points. The embedding is performed implicitly, by specifying the inner products between each pair of points in the embedding space. This information is contained in the so-called kernel matrix, a symmetric and positive semidefinite matrix that encodes the relative positions of all points. Specifying this matrix amounts to specifying the geometry of the embedding space and inducing a notion of similarity in the input space, which are classical model selection problems in machine learning. In this paper we show how the kernel matrix can be learned from data via semidefinite programming (SDP) techniques. When applied to a kernel matrix associated with both training and test data, this gives a powerful transductive algorithm: using the labeled part of the data, one can learn an embedding for the unlabeled part as well. The similarity between test points is inferred from training points and their labels. Importantly, these learning problems are convex, so we obtain a method for learning both the model class and the function without local minima. Furthermore, this approach leads directly to a convex method for learning the 2-norm soft margin parameter in support vector machines, solving an important open problem.
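A small sketch of the idea using cvxpy (an assumed, generally available convex-optimisation library; not the authors' code): learn a kernel matrix over training and test points as a combination of base kernels by maximising the alignment of the training block with the label matrix, subject to positive semidefiniteness and a trace constraint. The toy data, the three base kernels and the simple sign-based transductive prediction rule are illustrative choices, not the paper's margin-based formulations.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)

# Toy transductive set-up: labelled training points plus unlabelled test points.
X_tr = np.vstack([rng.normal(-1.0, 0.5, (10, 2)), rng.normal(1.0, 0.5, (10, 2))])
y_tr = np.r_[-np.ones(10), np.ones(10)]
X_te = np.vstack([rng.normal(-1.0, 0.5, (5, 2)), rng.normal(1.0, 0.5, (5, 2))])
X = np.vstack([X_tr, X_te])
n, n_tr = len(X), len(X_tr)

def rbf(X, gamma):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Base kernels over train + test points; the learned kernel is a linear
# combination of them, constrained to be positive semidefinite.
base = [X @ X.T, rbf(X, 0.5), rbf(X, 2.0)]

mu = cp.Variable(len(base))
K = mu[0] * base[0] + mu[1] * base[1] + mu[2] * base[2]

# Maximise alignment of the training block of K with the label matrix y y^T,
# subject to K being PSD with a fixed trace -- a semidefinite program.
objective = cp.Maximize(cp.sum(cp.multiply(K[:n_tr, :n_tr], np.outer(y_tr, y_tr))))
constraints = [K >> 0, cp.trace(K) == n]
cp.Problem(objective, constraints).solve()

K_opt = sum(w * B for w, B in zip(mu.value, base))
# Transductive prediction: sign of the learned train-test kernel weighted by
# the training labels (a simple stand-in rule, not the paper's SVM).
y_pred = np.sign(K_opt[n_tr:, :n_tr] @ y_tr)
print("kernel weights:", np.round(mu.value, 3))
print("test predictions:", y_pred)
```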
Abstract:
In semisupervised learning (SSL), a predictive model is learned from a collection of labeled data and a typically much larger collection of unlabeled data. This paper presents a framework called multi-view point cloud regularization (MVPCR), which unifies and generalizes several semisupervised kernel methods that are based on data-dependent regularization in reproducing kernel Hilbert spaces (RKHSs). Special cases of MVPCR include coregularized least squares (CoRLS), manifold regularization (MR), and graph-based SSL. An accompanying theorem shows how to reduce any MVPCR problem to standard supervised learning with a new multi-view kernel.
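One of the special cases named above, graph-based SSL, can be sketched directly. The two-cluster toy data, the k-NN graph construction and the harmonic label-propagation rule below are illustrative choices and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy two-moon style data: 60 points per class, two labelled points in total.
def make_data(n=60):
    t = rng.uniform(0, np.pi, n)
    upper = np.c_[np.cos(t), np.sin(t)] + rng.normal(0, 0.1, (n, 2))
    lower = np.c_[1 - np.cos(t), -np.sin(t) + 0.5] + rng.normal(0, 0.1, (n, 2))
    return np.vstack([upper, lower]), np.r_[np.zeros(n), np.ones(n)]

X, y = make_data()
labeled = np.r_[0, 60]                      # one labelled point per class
unlabeled = np.setdiff1d(np.arange(len(X)), labeled)

# k-NN affinity matrix and (unnormalised) graph Laplacian.
d2 = ((X[:, None] - X[None, :]) ** 2).sum(-1)
W = np.zeros_like(d2)
k = 7
for i in range(len(X)):
    nn = np.argsort(d2[i])[1:k + 1]
    W[i, nn] = np.exp(-d2[i, nn] / 0.05)
W = np.maximum(W, W.T)                      # symmetrise
Lap = np.diag(W.sum(1)) - W

# Harmonic solution: f_u = -L_uu^{-1} L_ul f_l (label propagation on the graph).
f_l = y[labeled]
L_uu = Lap[np.ix_(unlabeled, unlabeled)]
L_ul = Lap[np.ix_(unlabeled, labeled)]
ridge = 1e-6 * np.eye(len(unlabeled))       # small ridge for numerical stability
f_u = np.linalg.solve(L_uu + ridge, -L_ul @ f_l)

pred = (f_u > 0.5).astype(int)
print(f"accuracy on unlabelled points: {(pred == y[unlabeled]).mean():.2f}")
```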
Abstract:
This article examines social, cultural and technological change in the systems and economies of educational information management. Since the Sumerians first collected, organized and supervised administrative and religious records some six millennia ago, libraries have been key physical depositories and cultural signifiers in the production and mediation of social capital and power through education. To date, the textual, archival and discursive practices perpetuating libraries have remained exempt from inquiry. My aim here is to remedy this hiatus by making the library itself the terrain and object of critical analysis and investigation. The paper argues that in the three dominant communications eras—namely, oral, print and digital cultures—society's centres of knowledge and learning have resided in the ceremony, the library and the cybrary respectively. In a broad-brush historical grid, each of these key educational institutions—the ceremony in oral culture, the library in print culture and the cybrary in digital culture—is mapped against the social, cultural and technological orders of its era. Following a description of these shifts in society's collective cultural memory, the paper then examines what the development of global information systems and economies means for schools and libraries today, and for teachers and learners as knowledge consumers and producers.